Sample records for source code visualization

  1. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e., the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered: one that minimizes the average video distortion over the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate, and channel coding rate for the nodes of the visual sensor network.
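
    The following sketch is not the authors' scheme; it is a minimal, hypothetical Python illustration of the mixed-integer formulation described above: transmission powers stay continuous while source and channel coding rates are snapped to discrete sets, and a particle swarm minimizes either the average or the maximum of a placeholder per-node distortion model (the rate sets, motion levels, and distortion function are all assumptions, not values from the paper).

    ```python
    import numpy as np

    # Hypothetical placeholder: per-node distortion as a function of power,
    # source rate, and channel rate. The real model couples these through
    # channel conditions and the motion content of each node's video.
    def node_distortion(power, src_rate, chn_rate, motion):
        return motion / (src_rate + 1e-9) + chn_rate / (power + 1e-9)

    SRC_RATES = np.array([0.25, 0.5, 1.0, 2.0])   # example discrete source rates
    CHN_RATES = np.array([1/3, 1/2, 2/3, 4/5])    # example discrete channel code rates
    MOTION = np.array([0.2, 0.9, 0.5])            # assumed per-node motion levels
    N = len(MOTION)

    def snap(values, allowed):
        """Round continuous particle coordinates to the nearest allowed discrete value."""
        return allowed[np.abs(values[:, None] - allowed[None, :]).argmin(axis=1)]

    def cost(x, use_max=False):
        power = np.clip(x[:N], 0.05, 1.0)              # continuous powers
        src = snap(x[N:2*N], SRC_RATES)                # discrete source rates
        chn = snap(x[2*N:], CHN_RATES)                 # discrete channel rates
        d = np.array([node_distortion(p, s, c, m)
                      for p, s, c, m in zip(power, src, chn, MOTION)])
        return d.max() if use_max else d.mean()        # the two criteria

    def pso(n_particles=40, iters=200, dim=3*N, use_max=False, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(0.05, 2.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([cost(p, use_max) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = x + v
            f = np.array([cost(p, use_max) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    best, best_cost = pso(use_max=True)   # min-max distortion criterion
    ```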

  2. LSSGalPy: Interactive Visualization of the Large-scale Environment Around Galaxies

    NASA Astrophysics Data System (ADS)

    Argudo-Fernández, M.; Duarte Puertas, S.; Ruiz, J. E.; Sabater, J.; Verley, S.; Bergond, G.

    2017-05-01

    New tools are needed to handle the growth of data in astrophysics delivered by recent and upcoming surveys. We aim to build open-source, light, flexible, and interactive software designed to visualize extensive three-dimensional (3D) tabular data. Entirely written in the Python language, we have developed interactive tools to browse and visualize the positions of galaxies in the universe and their relation to its large-scale structures (LSS). Motivated by a previous study, we created two codes using Mollweide projection and wedge diagram visualizations, where survey galaxies can be overplotted on the LSS of the universe. These are interactive representations where the visualizations can be controlled by widgets. We have released these open-source codes, which have been designed to be easily re-used and customized by the scientific community to fulfill their needs. The codes are adaptable to other kinds of 3D tabular data and are robust enough to handle several millions of objects.
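
    The sketch below is not LSSGalPy itself; it is a minimal matplotlib illustration, using assumed mock catalogue data, of the kind of widget-controlled Mollweide overplot the abstract describes: galaxies are drawn on a Mollweide projection and a slider restricts the display to a thin redshift slice.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    # Mock catalogue (assumed, for illustration only): RA/Dec in degrees, redshift z.
    rng = np.random.default_rng(1)
    ra = rng.uniform(0.0, 360.0, 5000)
    dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 5000)))
    z = rng.uniform(0.0, 0.1, 5000)

    fig = plt.figure(figsize=(9, 5))
    ax = fig.add_subplot(111, projection="mollweide")
    # The Mollweide axes expect longitude in [-pi, pi] and latitude in radians.
    lon = np.radians(np.where(ra > 180.0, ra - 360.0, ra))
    lat = np.radians(dec)
    points = ax.scatter(lon, lat, s=2, c="steelblue")
    ax.grid(True)

    # Widget: restrict the plotted galaxies to a thin redshift slice around the slider value.
    slider_ax = fig.add_axes([0.25, 0.02, 0.5, 0.03])
    z_slider = Slider(slider_ax, "z centre", 0.0, 0.1, valinit=0.05)

    def update(_value):
        mask = np.abs(z - z_slider.val) < 0.005
        points.set_offsets(np.column_stack([lon[mask], lat[mask]]))
        fig.canvas.draw_idle()

    z_slider.on_changed(update)
    plt.show()
    ```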

  3. A Need for a Theory of Visual Literacy.

    ERIC Educational Resources Information Center

    Hortin, John A.

    1982-01-01

    Examines sources available for developing a theory of visual literacy and attempts to clarify the meaning of the term. Suggests that visual thinking, a concept supported by recent research on mental imagery, visualization, and dual coding, ought to be the emphasis for future theory development. (FL)

  4. Constructing graph models for software system development and analysis

    NASA Astrophysics Data System (ADS)

    Pogrebnoy, Andrey V.

    2017-01-01

    We propose a concept for creating instrumentation that supports the rationale behind functional and structural decisions during software system (SS) development. We propose to develop the SS simultaneously on two models - a functional model (FM) and a structural model (SM). The FM is the source code of the SS. An adequate representation of the FM in the form of a graph model (GM) is generated automatically and is called the SM. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for the adequate representation of the SS source code. We propose three levels of GM detail: GM1 - for visual analysis of the source code and for SS version control, GM2 - for resource optimization and analysis of connections between SS components, GM3 - for analysis of the SS functioning in dynamics. The paper includes examples of constructing all levels of GM.
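
    The paper builds its graph models for the SS under development; as a generic illustration of the underlying idea of deriving a graph model from source code (not the authors' method), the following Python sketch uses the standard ast module and networkx to build a small call graph from an assumed code snippet.

    ```python
    import ast
    import networkx as nx

    # Minimal illustration: one node per function defined in the module,
    # one edge per call from one defined function to another.
    SOURCE = """
    def load(path):
        return open(path).read()

    def parse(path):
        text = load(path)
        return text.splitlines()

    def main():
        for line in parse("data.txt"):
            print(line)
    """

    tree = ast.parse(SOURCE)
    graph = nx.DiGraph()
    functions = {node.name: node for node in ast.walk(tree)
                 if isinstance(node, ast.FunctionDef)}
    graph.add_nodes_from(functions)

    for name, fn in functions.items():
        for node in ast.walk(fn):
            # Record calls to other functions defined in the same module.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id in functions:
                    graph.add_edge(name, node.func.id)

    print(sorted(graph.edges()))   # [('main', 'parse'), ('parse', 'load')]
    ```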

  5. The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards

    NASA Astrophysics Data System (ADS)

    Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.

    2015-09-01

    The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.
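
    For readers unfamiliar with VTK, the following minimal Python pipeline shows the user-facing source-mapper-actor-renderer structure that, per the abstract, was largely preserved while the rendering backend beneath it was rewritten; the specific geometry source and parameters are arbitrary.

    ```python
    import vtk

    # Minimal VTK pipeline: geometry source -> mapper -> actor -> renderer.
    # The OpenGL backend behind the mapper is what the rewrite replaced; this
    # user-facing API is essentially the same for the old and new backends.
    sphere = vtk.vtkSphereSource()
    sphere.SetThetaResolution(64)
    sphere.SetPhiResolution(64)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(sphere.GetOutputPort())

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    renderer.SetBackground(0.1, 0.1, 0.2)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    window.SetSize(600, 600)

    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)

    window.Render()
    interactor.Start()
    ```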

  6. Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design

    DTIC Science & Technology

    2011-02-01

    ... Coding. In this work, we use rate-compatible punctured convolutional (RCPC) codes for channel coding [11]. Using RCPC codes allows us to utilize Viterbi's ... [11] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans. Commun., vol. 36, no. 4, pp. 389... a source coding rate, a channel coding rate, and a power level to all nodes in the ...

  7. Connectivity Reveals Sources of Predictive Coding Signals in Early Visual Cortex During Processing of Visual Optic Flow.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2017-05-01

    Superimposed on the visual feed-forward pathway, feedback connections convey higher level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding where high-level areas send stimulus interpretations to lower level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched nonpredictable counterparts, especially in early visual cortex (EVC) including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC evident during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  9. OHD/HL - SHEF: code

    Science.gov Websites

    Standard Exchange Format (SHEF) is a documented set of rules for coding of data in a form suited to both visual and automated interpretation, together with information to describe the data. The page provides the current SHEF specification, instructions on how to install and use the software, and a source code download (.gz).

  10. Flowgen: Flowchart-based documentation for C++ codes

    NASA Astrophysics Data System (ADS)

    Kosower, David A.; Lopez-Villarejo, J. J.

    2015-11-01

    We present the Flowgen tool, which generates flowcharts from annotated C++ source code. The tool generates a set of interconnected high-level UML activity diagrams, one for each function or method in the C++ sources. It provides a simple and visual overview of complex implementations of numerical algorithms. Flowgen is complementary to the widely-used Doxygen documentation tool. The ultimate aim is to render complex C++ computer codes accessible, and to enhance collaboration between programmers and algorithm or science specialists. We describe the tool and a proof-of-concept application to the VINCIA plug-in for simulating collisions at CERN's Large Hadron Collider.

  11. Depictions of substance use in reality television: a content analysis of The Osbournes.

    PubMed

    Blair, Nicole A; Yue, So Kuen; Singh, Ranbir; Bernhardt, Jay M

    2005-12-24

    To determine the source and slant of messages in a reality television programme that may promote or inhibit health related or risky behaviours. Coding visual and verbal references to alcohol, tobacco, and other drug (ATOD) use in The Osbournes. Three reviewers watched all 10 episodes of the first season and coded incidents of substance use according to the substance used (alcohol, tobacco, or drugs), the way use was portrayed (visually or verbally), the source of the message (the character in the show involved in the incident), and the slant of the incident (endorsement or rejection). The variation in number of messages in an average episode, the slant of messages, and message source. The average number of messages per episode was 9.1 (range 2-17). Most drug use messages (15, 54%) implied rejection of drugs, but most alcohol messages (30, 64%) and tobacco messages (12, 75%) implied endorsements for using these substances. Most rejections (34, 94%) were conveyed verbally, but most endorsements (36, 65%) were conveyed visually. Messages varied in frequency and slant by source. The reality television show analysed in this study contains numerous messages on substance use that imply both rejection and endorsement of use. The juxtaposition of verbal rejection messages and visual endorsement messages, and the depiction of contradictory messages about substance use from show characters, may send mixed messages to viewers about substance use.

  12. Python-Based Applications for Hydrogeological Modeling

    NASA Astrophysics Data System (ADS)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The Python wrapper invokes the underlying FORTRAN layer to compute transient groundwater elevations and processes this information to create time-series and 2D plots.
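
    The convolution step described above can be illustrated with a short numpy sketch; the unit response function and release history below are made-up placeholders, not the ATRANS-derived URFs or the site's actual source history.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Illustration of the convolution idea (not the site model): the receptor
    # concentration is the convolution of the source release history with a
    # unit response function (URF). All values below are placeholders.
    dt = 1.0                                   # time step, e.g. months
    t = np.arange(0, 120, dt)

    # Hypothetical URF: receptor response to a unit release in the first step.
    urf = (t / 25.0) * np.exp(-t / 25.0)
    urf /= urf.sum() * dt                      # normalise to unit mass

    # Hypothetical source release history (mass per time step).
    release = np.zeros_like(t)
    release[10:40] = 5.0                       # constant release between steps 10 and 40

    # Receptor concentration history: discrete convolution, truncated to len(t).
    concentration = np.convolve(release, urf)[: len(t)] * dt

    plt.plot(t, release, label="source release")
    plt.plot(t, concentration, label="receptor response")
    plt.xlabel("time")
    plt.legend()
    plt.show()
    ```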

  13. GenomeDiagram: a python package for the visualization of large-scale genomic data.

    PubMed

    Pritchard, Leighton; White, Jennifer A; Birch, Paul R J; Toth, Ian K

    2006-03-01

    We present GenomeDiagram, a flexible, open-source Python module for the visualization of large-scale genomic, comparative genomic and other data with reference to a single chromosome or other biological sequence. GenomeDiagram may be used to generate publication-quality vector graphics, rastered images and in-line streamed graphics for webpages. The package integrates with datatypes from the BioPython project, and is available for Windows, Linux and Mac OS X systems. GenomeDiagram is freely available as source code (under GNU Public License) at http://bioinf.scri.ac.uk/lp/programs.html, and requires Python 2.3 or higher, and recent versions of the ReportLab and BioPython packages. A user manual, example code and images are available at http://bioinf.scri.ac.uk/lp/programs.html.
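
    A minimal usage sketch is given below, assuming the GenomeDiagram interface as later bundled with Biopython (Bio.Graphics.GenomeDiagram) and a placeholder GenBank file; it is meant only to indicate the flavor of the API described in the abstract.

    ```python
    from reportlab.lib import colors
    from Bio import SeqIO
    from Bio.Graphics import GenomeDiagram

    # "plasmid.gb" is a placeholder GenBank file used for illustration.
    record = SeqIO.read("plasmid.gb", "genbank")

    diagram = GenomeDiagram.Diagram(record.id)
    track = diagram.new_track(1, name="Annotated Features", greytrack=True)
    feature_set = track.new_set()

    # Add each gene feature to the track with a simple colour and label.
    for feature in record.features:
        if feature.type != "gene":
            continue
        feature_set.add_feature(feature, color=colors.lightblue,
                                label=True, label_size=6)

    # Draw a circular map and write it out as publication-quality vector PDF.
    diagram.draw(format="circular", circular=True, pagesize=(800, 800),
                 start=0, end=len(record))
    diagram.write("plasmid_circular.pdf", "PDF")
    ```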

  14. Depictions of substance use in reality television: a content analysis of The Osbournes

    PubMed Central

    Blair, Nicole A; Yue, So Kuen; Singh, Ranbir; Bernhardt, Jay M

    2005-01-01

    Objective To determine the source and slant of messages in a reality television programme that may promote or inhibit health related or risky behaviours. Design Coding visual and verbal references to alcohol, tobacco, and other drug (ATOD) use in The Osbournes. Review methods Three reviewers watched all 10 episodes of the first season and coded incidents of substance use according to the substance used (alcohol, tobacco, or drugs), the way use was portrayed (visually or verbally), the source of the message (the character in the show involved in the incident), and the slant of the incident (endorsement or rejection). Main outcome measures The variation in number of messages in an average episode, the slant of messages, and message source. Results The average number of messages per episode was 9.1 (range 2-17). Most drug use messages (15, 54%) implied rejection of drugs, but most alcohol messages (30, 64%) and tobacco messages (12, 75%) implied endorsements for using these substances. Most rejections (34, 94%) were conveyed verbally, but most endorsements (36, 65%) were conveyed visually. Messages varied in frequency and slant by source. Conclusions The reality television show analysed in this study contains numerous messages on substance use that imply both rejection and endorsement of use. The juxtaposition of verbal rejection messages and visual endorsement messages, and the depiction of contradictory messages about substance use from show characters, may send mixed messages to viewers about substance use. PMID:16373737

  15. SPV: a JavaScript Signaling Pathway Visualizer.

    PubMed

    Calderone, Alberto; Cesareni, Gianni

    2018-03-24

    The visualization of molecular interactions annotated in web resources is useful for offering users such information in a clear, intuitive layout. These interactions are frequently represented as binary interactions laid out in free space, where different entities, cellular compartments, and interaction types are hardly distinguishable. SPV (Signaling Pathway Visualizer) is a free, open-source JavaScript library which offers a series of pre-defined elements, compartments, and interaction types meant to facilitate the representation of signaling pathways consisting of causal interactions, without neglecting simple protein-protein interaction networks. Freely available under the Apache version 2 license; source code: https://github.com/Sinnefa/SPV_Signaling_Pathway_Visualizer_v1.0. Language: JavaScript; Web technology: Scalable Vector Graphics; Libraries: D3.js. sinnefa@gmail.com.

  16. Unipro UGENE: a unified bioinformatics toolkit.

    PubMed

    Okonechnikov, Konstantin; Golosova, Olga; Fursov, Mikhail

    2012-04-15

    Unipro UGENE is a multiplatform open-source software with the main goal of assisting molecular biologists without much expertise in bioinformatics to manage, analyze and visualize their data. UGENE integrates widely used bioinformatics tools within a common user interface. The toolkit supports multiple biological data formats and allows the retrieval of data from remote data sources. It provides visualization modules for biological objects such as annotated genome sequences, Next Generation Sequencing (NGS) assembly data, multiple sequence alignments, phylogenetic trees and 3D structures. Most of the integrated algorithms are tuned for maximum performance by the usage of multithreading and special processor instructions. UGENE includes a visual environment for creating reusable workflows that can be launched on local resources or in a High Performance Computing (HPC) environment. UGENE is written in C++ using the Qt framework. The built-in plugin system and structured UGENE API make it possible to extend the toolkit with new functionality. UGENE binaries are freely available for MS Windows, Linux and Mac OS X at http://ugene.unipro.ru/download.html. UGENE code is licensed under the GPLv2; the information about the code licensing and copyright of integrated tools can be found in the LICENSE.3rd_party file provided with the source bundle.

  17. Kalai-Smorodinsky bargaining solution for optimal resource allocation over wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2012-01-01

    Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
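
    As a toy illustration of the Kalai-Smorodinsky idea (not the paper's cross-layer VSN model), the sketch below selects, from a finite set of feasible utility vectors, the point whose minimum utility gain, normalized by each node's ideal achievable utility, is largest; the utility values are random placeholders.

    ```python
    import numpy as np

    # Toy KSBS-style selection over a finite feasible set: maximise the minimum
    # gain over the disagreement point, normalised by each player's ideal utility.
    rng = np.random.default_rng(0)
    utilities = rng.uniform(0.0, 1.0, size=(500, 3))   # 500 candidate allocations, 3 nodes
    disagreement = np.zeros(3)                          # utility if bargaining fails

    ideal = utilities.max(axis=0)                       # each node's best achievable utility
    gains = (utilities - disagreement) / (ideal - disagreement)

    # The chosen point lies (approximately) on the line from the disagreement
    # point to the ideal point, which is the defining property of the KSBS.
    best = np.argmax(gains.min(axis=1))
    print("chosen allocation index:", best)
    print("utilities:", utilities[best], "normalised gains:", gains[best])
    ```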

  18. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control on an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use in the case of 3-D astronomical data.

  19. Critical evaluation of reverse engineering tool Imagix 4D!

    PubMed

    Yadav, Rashmi; Patel, Ravindra; Kothari, Abhay

    2016-01-01

    Legacy code is difficult to comprehend. Various commercial reengineering tools are available, each with its own working style, capabilities, and shortcomings. The focus of the available tools is on visualizing static behavior rather than dynamic behavior. This makes the work of people involved in software product maintenance, code understanding, and reengineering/reverse engineering difficult. Consequently, the need for a comprehensive reengineering/reverse engineering tool arises. We found Imagix 4D useful, as it generates the most extensive pictorial representations in the form of flow charts, flow graphs, class diagrams, metrics and, to a partial extent, dynamic visualizations. We evaluated Imagix 4D with the help of a case study involving a few samples of source code. The behavior of the tool was analyzed on multiple small codes and one large code, the gcc C parser. The large-code evaluation was performed to uncover dead code, unstructured code, and the effect of not including required files at the preprocessing level. The decision density and complexity metrics that Imagix 4D prepares for a large code were found useful for judging how much reengineering is required. Imagix 4D showed limitations in dynamic visualization, flow chart separation (for large code), and parsing loops. The outcome of the evaluation will eventually help in upgrading Imagix 4D and points to the need for full-featured tools in the area of software reengineering/reverse engineering. It will also help the research community, especially those interested in the realm of software reengineering tool building.

  20. GRASP/Ada 95: Reverse Engineering Tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1996-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) has successfully created and prototyped an algorithmic level graphical representation for Ada software, the Control Structure Diagram (CSD), and a new visualization for a fine-grained complexity metric called the Complexity Profile Graph (CPG). By synchronizing the CSD and the CPG, the CSD view of control structure, nesting, and source code is directly linked to the corresponding visualization of statement level complexity in the CPG. GRASP has been integrated with GNAT, the GNU Ada 95 Translator, to provide a comprehensive graphical user interface and development environment for Ada 95. The user may view, edit, print, and compile source code as a CSD with no discernible addition to storage or computational overhead. The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis has been on the automatic generation of the CSD from Ada 95 source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. The current update has focused on the design and implementation of a new Motif compliant user interface, and a new CSD generator consisting of a tagger and renderer. The Complexity Profile Graph (CPG) is based on a set of functions that describes the context, content, and the scaling for complexity on a statement by statement basis. When combined graphically, the result is a composite profile of complexity for the program unit. Ongoing research includes the development and refinement of the associated functions, and the development of the CPG generator prototype. The current Version 5.0 prototype provides the capability for the user to generate CSDs and CPGs from Ada 95 source code in a reverse engineering as well as forward engineering mode with a level of flexibility suitable for practical application. This report provides an overview of the GRASP/Ada project with an emphasis on the current update.

  1. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  2. DataPflex: a MATLAB-based tool for the manipulation and visualization of multidimensional datasets.

    PubMed

    Hendriks, Bart S; Espelin, Christopher W

    2010-02-01

    DataPflex is a MATLAB-based application that facilitates the manipulation and visualization of multidimensional datasets. The strength of DataPflex lies in the intuitive graphical user interface for the efficient incorporation, manipulation and visualization of high-dimensional data that can be generated by multiplexed protein measurement platforms including, but not limited to Luminex or Meso-Scale Discovery. Such data can generally be represented in the form of multidimensional datasets [for example (time x stimulation x inhibitor x inhibitor concentration x cell type x measurement)]. For cases where measurements are made in a combinational fashion across multiple dimensions, there is a need for a tool to efficiently manipulate and reorganize such data for visualization. DataPflex accepts data consisting of up to five arbitrary dimensions in addition to a measurement dimension. Data are imported from a simple .xls format and can be exported to MATLAB or .xls. Data dimensions can be reordered, subdivided, merged, normalized and visualized in the form of collections of line graphs, bar graphs, surface plots, heatmaps, IC50's and other custom plots. Open source implementation in MATLAB enables easy extension for custom plotting routines and integration with more sophisticated analysis tools. DataPflex is distributed under the GPL license (http://www.gnu.org/licenses/) together with documentation, source code and sample data files at: http://code.google.com/p/datapflex. Supplementary data available at Bioinformatics online.

  3. Content-based multiple bitstream image transmission over noisy channels.

    PubMed

    Cao, Lei; Chen, Chang Wen

    2002-01-01

    In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
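
    The content-dependent unequal error protection idea can be sketched as follows; this is a toy allocator with assumed bit counts, importance weights, and code-rate set, not the paper's RCPC/CRC rate assignment, which is driven by measured bit sensitivity and PSNR/visual contribution.

    ```python
    from itertools import product

    # Toy unequal error protection: more important bit classes get stronger
    # (lower-rate) channel codes, subject to a total channel-bit budget.
    CODE_RATES = [1/3, 1/2, 2/3, 4/5]                  # example code-rate set
    classes = {                                        # assumed bits and importance weights
        "headers+significance":  {"bits": 20_000,  "weight": 1.0},
        "coarse wavelet coeffs": {"bits": 60_000,  "weight": 0.6},
        "refinement bits":       {"bits": 120_000, "weight": 0.2},
    }
    BUDGET = 330_000                                   # channel bits available

    def transmitted_bits(assignment):
        # Channel bits needed: source bits divided by the code rate of each class.
        return sum(c["bits"] / r for c, r in zip(classes.values(), assignment))

    best = None
    for assignment in product(CODE_RATES, repeat=len(classes)):
        if transmitted_bits(assignment) > BUDGET:
            continue
        # Favor giving the strongest (lowest-rate) codes to the most important classes.
        score = sum(c["weight"] / r for c, r in zip(classes.values(), assignment))
        if best is None or score > best[0]:
            best = (score, assignment)

    for (name, _), rate in zip(classes.items(), best[1]):
        print(f"{name}: code rate {rate:.3f}")
    ```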

  4. ProtVista: visualization of protein sequence annotations.

    PubMed

    Watkins, Xavier; Garcia, Leyla J; Pundir, Sangya; Martin, Maria J

    2017-07-01

    ProtVista is a comprehensive visualization tool for the graphical representation of protein sequence features in the UniProt Knowledgebase, experimental proteomics and variation public datasets. The complexity and relationships in this wealth of data pose a challenge in interpretation. Integrative visualization approaches such as provided by ProtVista are thus essential for researchers to understand the data and, for instance, discover patterns affecting function and disease associations. ProtVista is a JavaScript component released as an open source project under the Apache 2 License. Documentation and source code are available at http://ebi-uniprot.github.io/ProtVista/ . martin@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  5. Kablammo: an interactive, web-based BLAST results visualizer.

    PubMed

    Wintersinger, Jeff A; Wasmuth, James D

    2015-04-15

    Kablammo is a web-based application that produces interactive, vector-based visualizations of sequence alignments generated by BLAST. These visualizations can illustrate many features, including shared protein domains, chromosome structural modifications and genome misassembly. Kablammo can be used at http://kablammo.wasmuthlab.org. For a local installation, the source code and instructions are available under the MIT license at http://github.com/jwintersinger/kablammo. jeff@wintersinger.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. The Simple Video Coder: A free tool for efficiently coding social video data.

    PubMed

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder, free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.

  7. Cloud based, Open Source Software Application for Mitigating Herbicide Drift

    NASA Astrophysics Data System (ADS)

    Saraswat, D.; Scott, B.

    2014-12-01

    The spread of herbicide resistant weeds has resulted in the need for clearly marked fields. In response to this need, the University of Arkansas Cooperative Extension Service launched a program named Flag the Technology in 2011. This program uses color-coded flags as a visual alert of the herbicide trait technology within a farm field. The flag based program also serves to help avoid herbicide misapplication and prevent herbicide drift damage between fields with differing crop technologies. This program has been endorsed by the Southern Weed Science Society of America and is attracting interest from across the USA, Canada, and Australia. However, flags are at risk of misplacement or disappearance due to mischief or severe windstorms/thunderstorms, respectively. This presentation will discuss the design and development of a cloud-based, free application utilizing open-source technologies, called Flag the Technology Cloud (FTTCloud), for allowing agricultural stakeholders to color code their farm fields for indicating herbicide resistant technologies. The developed software utilizes modern web development practices, widely used design technologies, and basic geographic information system (GIS) based interactive interfaces for representing, color-coding, searching, and visualizing fields. This program has also been made compatible for wider usability on different sized devices: smartphones, tablets, desktops, and laptops.

  8. Architectural Visualization of C/C++ Source Code for Program Comprehension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panas, T; Epperly, T W; Quinlan, D

    2006-09-01

    Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.

  9. MPEG-compliant joint source/channel coding using discrete cosine transform and substream scheduling for visual communication over packet networks

    NASA Astrophysics Data System (ADS)

    Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.

    2001-01-01

    Quality of Service (QoS) guarantees in real-time communication for multimedia applications are critically important. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. However, the existing frame-by-frame approach, which includes Moving Picture Experts Group (MPEG) coding, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as bounded end-to-end jitter. We have shown by simulation and by real video experiments in a TCP/IP environment that our JSCC scheme is better than two other popular techniques.

  10. Decon2LS: An open-source software package for automated processing and visualization of high resolution mass spectrometry data

    PubMed Central

    Jaitly, Navdeep; Mayampurath, Anoop; Littlefield, Kyle; Adkins, Joshua N; Anderson, Gordon A; Smith, Richard D

    2009-01-01

    Background Data generated from liquid chromatography coupled to high-resolution mass spectrometry (LC-MS)-based studies of a biological sample can contain large amounts of biologically significant information in the form of proteins, peptides, and metabolites. Interpreting this data involves inferring the masses and abundances of biomolecules injected into the instrument. Because of the inherent complexity of mass spectral patterns produced by these biomolecules, the analysis is significantly enhanced by using visualization capabilities to inspect and confirm results. In this paper we describe Decon2LS, an open-source software package for automated processing and visualization of high-resolution MS data. Drawing extensively on algorithms developed over the last ten years for ICR2LS, Decon2LS packages the algorithms as a rich set of modular, reusable processing classes for performing diverse functions such as reading raw data, routine peak finding, theoretical isotope distribution modelling, and deisotoping. Because the source code is openly available, these functionalities can now be used to build derivative applications in relatively fast manner. In addition, Decon2LS provides an extensive set of visualization tools, such as high performance chart controls. Results With a variety of options that include peak processing, deisotoping, isotope composition, etc, Decon2LS supports processing of multiple raw data formats. Deisotoping can be performed on an individual scan, an individual dataset, or on multiple datasets using batch processing. Other processing options include creating a two dimensional view of mass and liquid chromatography (LC) elution time features, generating spectrum files for tandem MS data, creating total intensity chromatograms, and visualizing theoretical peptide profiles. Application of Decon2LS to deisotope different datasets obtained across different instruments yielded a high number of features that can be used to identify and quantify peptides in the biological sample. Conclusion Decon2LS is an efficient software package for discovering and visualizing features in proteomics studies that require automated interpretation of mass spectra. Besides being easy to use, fast, and reliable, Decon2LS is also open-source, which allows developers in the proteomics and bioinformatics communities to reuse and refine the algorithms to meet individual needs. Decon2LS source code, installer, and tutorials may be downloaded free of charge at . PMID:19292916

  11. Decon2LS: An open-source software package for automated processing and visualization of high resolution mass spectrometry data.

    PubMed

    Jaitly, Navdeep; Mayampurath, Anoop; Littlefield, Kyle; Adkins, Joshua N; Anderson, Gordon A; Smith, Richard D

    2009-03-17

    Data generated from liquid chromatography coupled to high-resolution mass spectrometry (LC-MS)-based studies of a biological sample can contain large amounts of biologically significant information in the form of proteins, peptides, and metabolites. Interpreting this data involves inferring the masses and abundances of biomolecules injected into the instrument. Because of the inherent complexity of mass spectral patterns produced by these biomolecules, the analysis is significantly enhanced by using visualization capabilities to inspect and confirm results. In this paper we describe Decon2LS, an open-source software package for automated processing and visualization of high-resolution MS data. Drawing extensively on algorithms developed over the last ten years for ICR2LS, Decon2LS packages the algorithms as a rich set of modular, reusable processing classes for performing diverse functions such as reading raw data, routine peak finding, theoretical isotope distribution modelling, and deisotoping. Because the source code is openly available, these functionalities can now be used to build derivative applications in a relatively fast manner. In addition, Decon2LS provides an extensive set of visualization tools, such as high performance chart controls. With a variety of options that include peak processing, deisotoping, isotope composition, etc., Decon2LS supports processing of multiple raw data formats. Deisotoping can be performed on an individual scan, an individual dataset, or on multiple datasets using batch processing. Other processing options include creating a two dimensional view of mass and liquid chromatography (LC) elution time features, generating spectrum files for tandem MS data, creating total intensity chromatograms, and visualizing theoretical peptide profiles. Application of Decon2LS to deisotope different datasets obtained across different instruments yielded a high number of features that can be used to identify and quantify peptides in the biological sample. Decon2LS is an efficient software package for discovering and visualizing features in proteomics studies that require automated interpretation of mass spectra. Besides being easy to use, fast, and reliable, Decon2LS is also open-source, which allows developers in the proteomics and bioinformatics communities to reuse and refine the algorithms to meet individual needs. Decon2LS source code, installer, and tutorials may be downloaded free of charge at http://ncrr.pnl.gov/software/.

  12. Color coding of control room displays: the psychocartography of visual layering effects.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2007-06-01

    To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).

  13. Exclusively visual analysis of classroom group interactions

    NASA Astrophysics Data System (ADS)

    Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric

    2016-12-01

    Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only—without audio—as when using both visual and audio data to code. Also, interrater reliability is high when comparing use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used compared with video-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.

  14. AirShow 1.0 CFD Software Users' Guide

    NASA Technical Reports Server (NTRS)

    Mohler, Stanley R., Jr.

    2005-01-01

    AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.

  15. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    PubMed

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, with the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4) a thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization uses the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal, simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  16. Comparison of memory thresholds for planar qudit geometries

    NASA Astrophysics Data System (ADS)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.

  17. Hybrid rendering of the chest and virtual bronchoscopy [corrected].

    PubMed

    Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D

    2000-10-30

    Thin-section spiral computed tomography was used to acquire the volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, thus producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.

  18. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  19. Visualization Co-Processing of a CFD Simulation

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    1999-01-01

    OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on a SGI Origin 2000 computer (O2K) system. The shared memory version of the solver is used with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress which is aimed at obtaining improved parallel performance of the solver and removing the limitations of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of explicit message passing.

  20. GISMO: A MATLAB toolbox for seismic research, monitoring, & education

    NASA Astrophysics Data System (ADS)

    Thompson, G.; Reyes, C. G.; Kempler, L. A.

    2017-12-01

    GISMO is an open-source MATLAB toolbox which provides an object-oriented framework to build workflows and applications that read, process, visualize and write seismic waveform, catalog and instrument response data. GISMO can retrieve data from a variety of sources (e.g. FDSN web services, Earthworm/Winston servers) and data formats (SAC, Seisan, etc.). It can handle waveform data that crosses file boundaries. All this alleviates one of the most time-consuming tasks for scientists developing their own codes. GISMO simplifies seismic data analysis by providing a common interface for your data, regardless of its source. Several common plots are built-in to GISMO, such as record section plots, spectrograms, depth-time sections, event count per unit time, energy release per unit time, etc. Other visualizations include map views and cross-sections of hypocentral data. Several common processing methods are also included, such as an extensive set of tools for correlation analysis. Support is being added to interface GISMO with ObsPy. GISMO encourages community development of an integrated set of codes and accompanying documentation, eliminating the need for seismologists to "reinvent the wheel". By sharing code the consistency and repeatability of results can be enhanced. GISMO is hosted on GitHub with documentation both within the source code and in the project wiki. GISMO has been used at the University of South Florida and University of Alaska Fairbanks in graduate-level courses including Seismic Data Analysis, Time Series Analysis and Computational Seismology. GISMO has also been tailored to interface with the common seismic monitoring software and data formats used by volcano observatories in the US and elsewhere. As an example, toolbox training was delivered to researchers at INETER (Nicaragua). Applications built on GISMO include IceWeb (e.g. web-based spectrograms), which has been used by Alaska Volcano Observatory since 1998 and became the prototype for the USGS Pensive system.

  1. PlasmaPy: beginning a community developed Python package for plasma physics

    NASA Astrophysics Data System (ADS)

    Murphy, Nicholas A.; Huang, Yi-Min; PlasmaPy Collaboration

    2016-10-01

    In recent years, researchers in several disciplines have collaborated on community-developed open source Python packages such as Astropy, SunPy, and SpacePy. These packages provide core functionality, common frameworks for data analysis and visualization, and educational tools. We propose that our community begin the development of PlasmaPy: a new open source core Python package for plasma physics. PlasmaPy could include commonly used functions in plasma physics, easy-to-use plasma simulation codes, Grad-Shafranov solvers, eigenmode solvers, and tools to analyze both simulations and experiments. The development will include modern programming practices such as version control, embedding documentation in the code, unit tests, and avoiding premature optimization. We will describe early code development on PlasmaPy, and discuss plans moving forward. The success of PlasmaPy depends on active community involvement and a welcoming and inclusive environment, so anyone interested in joining this collaboration should contact the authors.
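
    Since PlasmaPy was only a proposal at the time of this abstract, the snippet below is not its API; it is merely a generic sketch of the kind of "commonly used function in plasma physics" such a package might provide, written against scipy.constants.

        # Electron plasma frequency from first principles; not PlasmaPy code.
        import numpy as np
        from scipy.constants import e, epsilon_0, m_e

        def electron_plasma_frequency(n_e):
            """Angular electron plasma frequency [rad/s] for electron density n_e [m^-3]."""
            return np.sqrt(n_e * e**2 / (epsilon_0 * m_e))

        print(electron_plasma_frequency(1e19))   # roughly 1.8e11 rad/s for a laboratory plasma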

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64 bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.

  3. Genoviz Software Development Kit: Java tool kit for building genomics visualization applications.

    PubMed

    Helt, Gregg A; Nicol, John W; Erwin, Ed; Blossom, Eric; Blanchard, Steven G; Chervitz, Stephen A; Harmon, Cyrus; Loraine, Ann E

    2009-08-25

    Visualization software can expose previously undiscovered patterns in genomic data and advance biological science. The Genoviz Software Development Kit (SDK) is an open source, Java-based framework designed for rapid assembly of visualization software applications for genomics. The Genoviz SDK framework provides a mechanism for incorporating adaptive, dynamic zooming into applications, a desirable feature of genome viewers. Visualization capabilities of the Genoviz SDK include automated layout of features along genetic or genomic axes; support for user interactions with graphical elements (Glyphs) in a map; a variety of Glyph sub-classes that promote experimentation with new ways of representing data in graphical formats; and support for adaptive, semantic zooming, whereby objects change their appearance depending on zoom level and zooming rate adapts to the current scale. Freely available demonstration and production quality applications, including the Integrated Genome Browser, illustrate Genoviz SDK capabilities. Separation between graphics components and genomic data models makes it easy for developers to add visualization capability to pre-existing applications or build new applications using third-party data models. Source code, documentation, sample applications, and tutorials are available at http://genoviz.sourceforge.net/.

  4. Web3DMol: interactive protein structure visualization based on WebGL.

    PubMed

    Shi, Maoxiang; Gao, Juntao; Zhang, Michael Q

    2017-07-03

    A growing number of web-based databases and tools for protein research are being developed. There is now a widespread need for visualization tools to present the three-dimensional (3D) structure of proteins in web browsers. Here, we introduce our 3D modeling program, Web3DMol, a web application focusing on protein structure visualization in modern web browsers. Users submit a PDB identification code or select a PDB archive from their local disk, and Web3DMol will display and allow interactive manipulation of the 3D structure. Featured functions, such as sequence plot, fragment segmentation, measure tool and meta-information display, are offered for users to gain a better understanding of protein structure. Easy-to-use APIs are available for developers to reuse and extend Web3DMol. Web3DMol can be freely accessed at http://web3dmol.duapp.com/, and the source code is distributed under the MIT license. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
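
    Web3DMol itself runs in the browser; purely as an illustration of the "submit a PDB identification code" input step, the short Python sketch below downloads an entry from the public RCSB file service. The PDB code 1CRN and the URL pattern are assumptions for the example, not taken from the paper.

        # Fetch a PDB entry by its four-character identification code.
        import urllib.request

        pdb_id = "1CRN"                                           # illustrative entry
        url = f"https://files.rcsb.org/download/{pdb_id}.pdb"     # public RCSB file service
        with urllib.request.urlopen(url) as resp:
            pdb_text = resp.read().decode("utf-8")
        print(pdb_text.splitlines()[0])                           # HEADER record of the entry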

  5. The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition

    ERIC Educational Resources Information Center

    Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian

    2009-01-01

    Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…

  6. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution; Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.

  7. GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Rosotti, G. P.; Booth, R. A.

    2018-01-01

    GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OpenMP and MPI, and contains a Python library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open source project on the code-hosting website GitHub at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.

  8. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
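
    As a toy illustration of the analyze-then-compress idea described above, the sketch below extracts binary local features from two consecutive frames and compares the cost of coding every descriptor independently (intra) against coding residuals of matched descriptors (inter). It uses OpenCV's ORB rather than the paper's SIFT/SURF coder, the video path is a placeholder, and Hamming weight is only a crude stand-in for an actual entropy coder.

        # Toy sketch of intra vs inter descriptor coding at a sensor node.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("input.mp4")          # path is illustrative
        ok1, f1 = cap.read()
        ok2, f2 = cap.read()

        orb = cv2.ORB_create(nfeatures=500)
        kp1, d1 = orb.detectAndCompute(cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY), None)
        kp2, d2 = orb.detectAndCompute(cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY), None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)

        intra_bits = d2.size * 8                      # code every descriptor as-is
        inter_bits = 0
        for m in matches:                             # code matched descriptors as residuals
            residual = np.bitwise_xor(d1[m.queryIdx], d2[m.trainIdx])
            inter_bits += int(np.unpackbits(residual).sum())
        print(intra_bits, inter_bits)                 # temporal redundancy shrinks the residual cost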

  9. Neural representation of objects in space: a dual coding account.

    PubMed Central

    Humphreys, G W

    1998-01-01

    I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227

  10. MSAViewer: interactive JavaScript visualization of multiple sequence alignments.

    PubMed

    Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E; Rost, Burkhard; Goldberg, Tatyana

    2016-11-15

    The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is 'web ready': written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh. © The Author 2016. Published by Oxford University Press.

  11. MSAViewer: interactive JavaScript visualization of multiple sequence alignments

    PubMed Central

    Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E.; Rost, Burkhard; Goldberg, Tatyana

    2016-01-01

    Summary: The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is ‘web ready’: written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. Availability and Implementation: The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh PMID:27412096

  12. SPOCS: software for predicting and visualizing orthology/paralogy relationships among genomes.

    PubMed

    Curtis, Darren S; Phillips, Aaron R; Callister, Stephen J; Conlan, Sean; McCue, Lee Ann

    2013-10-15

    At the rate that prokaryotic genomes can now be generated, comparative genomics studies require a flexible method for quickly and accurately predicting orthologs among the rapidly changing set of genomes available. SPOCS implements a graph-based ortholog prediction method to generate a simple tab-delimited table of orthologs and in addition, html files that provide a visualization of the predicted ortholog/paralog relationships to which gene/protein expression metadata may be overlaid. A SPOCS web application is freely available at http://cbb.pnnl.gov/portal/tools/spocs.html. Source code for Linux systems is also freely available under an open source license at http://cbb.pnnl.gov/portal/software/spocs.html; the Boost C++ libraries and BLAST are required.
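
    SPOCS's own graph-based algorithm is not reproduced here; as a hedged sketch of a common building block of such ortholog predictors, the snippet below computes reciprocal best hits between two genomes from precomputed best-hit tables. The gene names and tables are made up for the example.

        # Reciprocal-best-hit sketch: a frequent ingredient of graph-based
        # ortholog prediction (not the SPOCS algorithm itself).
        def reciprocal_best_hits(hits_ab, hits_ba):
            """Return pairs (a, b) where a's best hit is b and b's best hit is a."""
            return [(a, b) for a, b in hits_ab.items() if hits_ba.get(b) == a]

        hits_ab = {"geneA1": "geneB7", "geneA2": "geneB3"}   # best hit of each A gene in genome B
        hits_ba = {"geneB7": "geneA1", "geneB3": "geneA9"}   # best hit of each B gene in genome A
        print(reciprocal_best_hits(hits_ab, hits_ba))        # [('geneA1', 'geneB7')]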

  13. canvasDesigner: A versatile interactive high-resolution scientific multi-panel visualization toolkit.

    PubMed

    Zhang, Baohong; Zhao, Shanrong; Neuhaus, Isaac

    2018-05-03

    We present a bioinformatics and systems biology visualization toolkit that combines real-time interactive exploration and analysis of big data, full customization of look and feel, and simultaneous production of multi-panel publication-ready figures in PDF format. Source code and detailed user guides are available at http://canvasxpress.org, https://baohongz.github.io/canvasDesigner, and https://baohongz.github.io/canvasDesigner/demo_video.html. Contact: isaac.neuhaus@bms.com, baohong.zhang@pfizer.com, shanrong.zhao@pfizer.com. Supplementary materials are available at https://goo.gl/1uQygs.

  14. Interactive web-based identification and visualization of transcript shared sequences.

    PubMed

    Azhir, Alaleh; Merino, Louis-Henri; Nauen, David W

    2018-05-12

    We have developed TraC (Transcript Consensus), a web-based tool for detecting and visualizing shared sequences among two or more mRNA transcripts such as splice variants. Results including exon-exon boundaries are returned in a highly intuitive, data-rich, interactive plot that permits users to explore the similarities and differences of multiple transcript sequences. The online tool (http://labs.pathology.jhu.edu/nauen/trac/) is free to use. The source code is freely available for download (https://github.com/nauenlab/TraC). Copyright © 2018 Elsevier Inc. All rights reserved.

  15. NARMER-1: a photon point-kernel code with build-up factors

    NASA Astrophysics Data System (ADS)

    Visonneau, Thierry; Pangault, Laurence; Malouch, Fadhel; Malvagi, Fausto; Dolci, Florence

    2017-09-01

    This paper presents an overview of NARMER-1, the new generation of photon point-kernel code developed by the Reactor Studies and Applied Mathematics Unit (SERMA) at the CEA Saclay Center. After a short introduction giving some historical background and the current context of development of the code, the paper presents the principles implemented in the calculation and the physical quantities computed, and surveys the generic features: programming language, computer platforms, geometry package, sources description, etc. Moreover, specific and recent features are also detailed: exclusion sphere, tetrahedral meshes, parallel operations. Some points about verification and validation are then presented. Finally, we present some tools that can help the user with operations like visualization and pre-treatment.
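
    For orientation, the point-kernel principle such codes implement reduces, for a single isotropic point source behind a shield, to the textbook expression phi = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2). The sketch below simply evaluates that expression in Python; the numbers and the linear build-up factor are illustrative, and nothing here reproduces NARMER-1 itself or its build-up factor data.

        # Textbook point-kernel flux estimate with a build-up factor.
        import math

        def point_kernel_flux(S, r, mu, buildup):
            """Photon flux [1/cm^2/s] at distance r [cm] from a point source of strength S [1/s]."""
            mfp = mu * r                                  # shield thickness in mean free paths
            return S * buildup(mfp) * math.exp(-mfp) / (4.0 * math.pi * r**2)

        linear_buildup = lambda mfp: 1.0 + mfp            # crude illustrative B(mu*r)
        print(point_kernel_flux(S=1e9, r=100.0, mu=0.06, buildup=linear_buildup))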

  16. Heat simulation via Scilab programming

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammad Khatim; Sulaiman, Jumat; Karim, Samsul Arifin Abdul

    2014-07-01

    This paper discusses the use of an open-source software package called Scilab to develop a heat simulator. The heat equation is used to simulate heat behavior in an object, and the simulator is developed using the finite difference method. Numerical experiment outputs show that Scilab can produce a good simulation of heat behavior, with excellent visual output, from only simple computer code.
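
    The paper's simulator is written in Scilab; as a minimal stand-in for the finite difference scheme it describes, here is an explicit (FTCS) discretization of the 1-D heat equation u_t = alpha * u_xx in Python. The grid, time step and initial condition are illustrative and satisfy the usual stability bound alpha*dt/dx^2 <= 1/2.

        # Explicit finite-difference (FTCS) sketch of 1-D heat diffusion.
        import numpy as np

        alpha, dx, dt = 1.0, 0.01, 4e-5        # diffusivity, grid spacing, time step (alpha*dt/dx**2 = 0.4)
        n_steps, n_points = 500, 101

        u = np.zeros(n_points)
        u[40:60] = 100.0                        # initial hot region in the middle
        for _ in range(n_steps):
            u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            u[0] = u[-1] = 0.0                  # fixed-temperature boundaries
        print(u.max())                          # peak temperature after diffusion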

  17. Visual communications with side information via distributed printing channels: extended multimedia and security perspectives

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Sviatoslav V.; Koval, Oleksiy; Deguillaume, Frederic; Pun, Thierry

    2004-06-01

    In this paper we address visual communications via printing channels from an information-theoretic point of view as communications with side information. The solution to this problem addresses important aspects of multimedia data processing, security and management, since printed documents are still the most common form of visual information representation. Two practical approaches to side information communications for printed documents are analyzed in the paper. The first approach represents a layered joint source-channel coding for printed documents. This approach is based on a self-embedding concept where information is first encoded assuming a Wyner-Ziv set-up and then embedded into the original data using a Gel'fand-Pinsker construction and taking into account properties of printing channels. The second approach is based on Wyner-Ziv and Berger-Flynn-Gray set-ups and assumes two separated communications channels where an appropriate distributed coding should be elaborated. The first printing channel is considered to be a direct visual channel for images ("analog" channel with degradations). The second "digital channel" with constrained capacity is considered to be an appropriate auxiliary channel. We demonstrate both theoretically and practically how one can benefit from this sort of "distributed paper communications".

  18. High performance geospatial and climate data visualization using GeoJS

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Beezley, J. D.

    2015-12-01

    GeoJS (https://github.com/OpenGeoscience/geojs) is an open-source library developed to support interactive scientific and geospatial visualization of climate and earth science datasets in a web environment. GeoJS has a convenient application programming interface (API) that enables users to harness the fast performance of WebGL and Canvas 2D APIs with sophisticated Scalable Vector Graphics (SVG) features in a consistent and convenient manner. We started the project in response to the need for an open-source JavaScript library that can combine traditional geographic information systems (GIS) and scientific visualization on the web. Many libraries, some of which are open source, support mapping or other GIS capabilities, but lack the features required to visualize scientific and other geospatial datasets. For instance, such libraries are not capable of rendering climate plots from NetCDF files, and some libraries are limited in regard to geoinformatics (infovis in a geospatial environment). While libraries such as d3.js are extremely powerful for these kinds of plots, in order to integrate them into other GIS libraries the construction of geoinformatics visualizations must be completed manually and separately, or the code must somehow be mixed in an unintuitive way. We developed GeoJS with the following motivations:
    • To create an open-source geovisualization and GIS library that combines scientific visualization with GIS and informatics
    • To develop an extensible library that can combine data from multiple sources and render them using multiple backends
    • To build a library that works well with existing scientific visualization tools such as VTK
    We have successfully deployed GeoJS-based applications for multiple domains across various projects. The ClimatePipes project funded by the Department of Energy, for example, used GeoJS to visualize NetCDF datasets from climate data archives. Other projects built visualizations using GeoJS for interactively exploring data and analysis regarding 1) the human trafficking domain, 2) New York City taxi drop-offs and pick-ups, and 3) the Ebola outbreak. GeoJS supports advanced visualization features such as picking and selecting, as well as clustering. It also supports 2D contour plots, vector plots, heat maps, and geospatial graphs.

  19. ALPHACAL: A new user-friendly tool for the calibration of alpha-particle sources.

    PubMed

    Timón, A Fernández; Vargas, M Jurado; Gallardo, P Álvarez; Sánchez-Oro, J; Peralta, L

    2018-05-01

    In this work, we present and describe the program ALPHACAL, specifically developed for the calibration of alpha-particle sources. It is therefore more user-friendly and less time-consuming than multipurpose codes developed for a wide range of applications. The program is based on the recently developed code AlfaMC, which specifically simulates the transport of alpha particles. Both cylindrical and point sources mounted on the surface of polished backings can be simulated, as is the convention in experimental measurements of alpha-particle sources. In addition to the efficiency calculation and determination of the backscattering coefficient, some additional tools are available to the user, such as visualization of the energy spectrum, use of an energy cut-off, or low-energy tail corrections. ALPHACAL has been implemented in C++ using the QT library, so it is available for Windows, MacOS and Linux platforms. It is free and can be provided upon request to the authors. Copyright © 2018 Elsevier Ltd. All rights reserved.
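
    The efficiency ALPHACAL computes comes from full alpha-particle transport with AlfaMC; the sketch below shows only the purely geometric part of such a Monte Carlo estimate (an isotropic point source facing a circular detector), so the geometry and numbers are illustrative rather than anything taken from the program.

        # Geometric detection efficiency of an isotropic point source a distance d
        # below a circular detector of radius R; energy loss and backscattering
        # from the backing (which AlfaMC models) are deliberately omitted.
        import numpy as np

        rng = np.random.default_rng(0)
        n, d, R = 1_000_000, 5.0, 10.0            # histories, source-detector distance, detector radius

        cos_theta = rng.uniform(-1.0, 1.0, n)     # isotropic emission over 4*pi (azimuth irrelevant by symmetry)
        sin_theta = np.sqrt(1.0 - cos_theta**2)
        going_up = cos_theta > 0.0                # only upward particles can reach the detector plane

        r_at_plane = d * sin_theta[going_up] / cos_theta[going_up]   # radius where the ray crosses the plane
        efficiency = np.count_nonzero(r_at_plane <= R) / n

        print(efficiency)                         # analytic value 0.5*(1 - d/sqrt(d*d + R*R)) ~ 0.276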

  20. Short-term memory coding in children with intellectual disabilities.

    PubMed

    Henry, Lucy

    2008-05-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities nor MA groups showed evidence for memory coding strategies. However, children in these groups with MAs above 6 years showed significant visual similarity and word length effects, broadly consistent with an intermediate stage of dual visual and verbal coding. These results suggest that developmental progressions in memory coding strategies are independent of intellectual disabilities status and consistent with MA.

  1. A Comparison of Source Code Plagiarism Detection Engines

    NASA Astrophysics Data System (ADS)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
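
    As a hedged illustration of the tokenise-then-match idea identified above as the most common technique, the snippet below reduces two toy submissions to token streams and finds their longest common token run; real engines such as MOSS and JPlag use far more sophisticated fingerprinting and greedy-string-tiling algorithms.

        # Crude "paired structural metric": tokenise two submissions and find
        # the longest common contiguous run of tokens.
        import re
        from difflib import SequenceMatcher

        def tokenise(source):
            """Reduce source code to a stream of identifiers, keywords and single-character operators."""
            return re.findall(r"[A-Za-z_]\w*|[^\s\w]", source)

        def longest_common_token_run(src_a, src_b):
            a, b = tokenise(src_a), tokenise(src_b)
            match = SequenceMatcher(None, a, b, autojunk=False).find_longest_match(0, len(a), 0, len(b))
            return a[match.a:match.a + match.size]

        s1 = "for i in range(n): total += values[i]"
        s2 = "for j in range(n): acc += values[j]"
        print(longest_common_token_run(s1, s2))   # ['in', 'range', '(', 'n', ')', ':']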

  2. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
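
    To make the hemifield idea concrete, here is a toy numerical illustration (not a model from the review): two broadly tuned opponent populations, one preferring each side of space, whose rate difference varies monotonically with azimuth and can therefore be read out as a location code. All tuning parameters are invented for the sketch.

        # Toy opponent-population rate code for sound azimuth.
        import numpy as np

        azimuth = np.linspace(-90, 90, 181)                 # degrees, negative = left
        rate_right = 1.0 / (1.0 + np.exp(-azimuth / 20.0))  # broadly tuned, rises toward the right hemifield
        rate_left = 1.0 / (1.0 + np.exp(azimuth / 20.0))    # broadly tuned, rises toward the left hemifield

        opponent_signal = rate_right - rate_left            # equals tanh(azimuth / 40), monotonic in azimuth
        decoded = 40.0 * np.arctanh(np.clip(opponent_signal, -0.999, 0.999))
        print(np.max(np.abs(decoded - azimuth)))            # essentially zero decoding error across the field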

  3. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D

    PubMed Central

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron

    2017-01-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. PMID:28814063

  4. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    PubMed

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  5. Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.

    PubMed

    Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M

    2015-01-01

    The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to native architecture, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.

  6. Synchronization trigger control system for flow visualization

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1987-01-01

    The use of cinematography or holographic interferometry for dynamic flow visualization in an internal combustion engine requires a control device that globally synchronizes camera and light source timing at a predefined shaft encoder angle. The device is capable of 0.35 deg resolution for rotational speeds of up to 73 240 rpm. This was achieved by implementing a shaft-encoder-signal-addressed look-up table (LUT) and appropriate latches. The digital signal processing technique achieves high-speed triggering-angle detection within 25 nsec by using direct parallel bit comparison of the shaft encoder digital code with a simulated angle reference code, instead of using angle value comparison, which involves more complicated computation steps. In order to establish synchronization to an AC reference signal whose magnitude varies with the rotating speed, a dynamic peak follow-up synchronization technique has been devised. This method scrutinizes the reference signal and provides the right timing within 40 nsec. Two application examples are described.

  7. Tetrahedral Hohlraum Visualization and Pointings

    NASA Astrophysics Data System (ADS)

    Klare, K. A.; Wallace, J. M.; Drake, D.

    1997-11-01

    In designing experiments for Omega, the tetrahedral hohlraum (a sphere with four holes) can make full use of all 60 beams. There are some complications: the beams must clear the laser entrance hole (LEH), must miss a central capsule, absolutely must not go out the other LEHs, and should distribute in the interior of the hohlraum to maximize the uniformity of irradiation on the capsule while keeping reasonable laser spot sizes. We created a 15-offset coordinate system with which an IDL program computes clearances, writes a file for QuickDraw 3D (QD3D) visualization, and writes input for the viewfactor code RAYNA IV. Visualizing and adjusting the parameters by eye gave more reliable results than computer optimization. QD3D images permitted quick live rotations to determine offsets. The clearances obtained insured safe operation and good physics. The viewfactor code computes the initial irradiation of the hohlraum and capsule or of a uniform hohlraum source with the loss through the four LEHs and shows a high degree of uniformity with both, better for lasers because this deposits more energy near the LEHs to compensate for the holes.

  8. Evaluation of a visual layering methodology for colour coding control room displays.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2002-07-01

    Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays which had been produced using three different coding methods. The monochrome coding method displayed the information in black and white only, the maximally discriminable method contained colours chosen for their high perceptual discriminability, and the visual layers method contained colours developed from psychological and cartographic principles which grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly for presentation order and for the method × order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.

  9. [Trial of eye drops recognizer for visually disabled persons].

    PubMed

    Okamoto, Norio; Suzuki, Katsuhiko; Mimura, Osamu

    2009-01-01

    We describe the development of a device to enable visually disabled persons to differentiate eye drops and their dosage. The new instrument is composed of a voice generator and a two-dimensional bar-code reader (LS9208). We designed voice outputs for the visually disabled that state when (number of times) and where (right, left, or both) to administer eye drops. We then determined the minimum bar-code size that can be recognized. After attaching bar-codes of the appropriate size to the lateral or bottom surface of the eye drop containers, the readability of the bar-codes was compared. The minimum discriminable bar-code size was 6 mm high x 8.5 mm long. Bar-codes on the bottom surface could be more easily recognized than bar-codes on the side. Our newly-developed device using bar-codes enables visually disabled persons to differentiate eye drops and their doses.

  10. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    PubMed

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
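
    The full visualizer also extracts non-critical nodes, bundles edges and detects repeated structures; the Python sketch below illustrates only the simplest part of the approach described above, grouping operation names into clusters by their top-level name scope. The names are invented for the example and nothing here is TensorFlow code.

        # Toy "clustered graph" step: collapse dataflow nodes by name-scope prefix.
        from collections import defaultdict

        op_names = ["conv1/weights", "conv1/bias", "conv1/Conv2D",
                    "pool1/MaxPool", "fc/weights", "fc/MatMul"]

        clusters = defaultdict(list)
        for name in op_names:
            scope = name.split("/")[0]        # top-level scope becomes the cluster node
            clusters[scope].append(name)
        print(dict(clusters))                 # {'conv1': [...], 'pool1': [...], 'fc': [...]}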

  11. The Programmers' Collective: Fostering Participatory Culture by Making Music Videos in a High School Scratch Coding Workshop

    ERIC Educational Resources Information Center

    Fields, Deborah; Vasudevan, Veena; Kafai, Yasmin B.

    2015-01-01

    We highlight ways to support interest-driven creation of digital media in Scratch, a visual-based programming language and community, within a high school programming workshop. We describe a collaborative approach, the programmers' collective, that builds on social models found in do-it-yourself and open source communities, but with scaffolding…

  12. Classification and simulation of stereoscopic artifacts in mobile 3DTV content

    NASA Astrophysics Data System (ADS)

    Boev, Atanas; Hollosi, Danilo; Gotchev, Atanas; Egiazarian, Karen

    2009-02-01

    We identify, categorize and simulate artifacts which might occur during delivery stereoscopic video to mobile devices. We consider the stages of 3D video delivery dataflow: content creation, conversion to the desired format (multiview or source-plus-depth), coding/decoding, transmission, and visualization on 3D display. Human 3D vision works by assessing various depth cues - accommodation, binocular depth cues, pictorial cues and motion parallax. As a consequence any artifact which modifies these cues impairs the quality of a 3D scene. The perceptibility of each artifact can be estimated through subjective tests. The material for such tests needs to contain various artifacts with different amounts of impairment. We present a system for simulation of these artifacts. The artifacts are organized in groups with similar origins, and each group is simulated by a block in a simulation channel. The channel introduces the following groups of artifacts: sensor limitations, geometric distortions caused by camera optics, spatial and temporal misalignments between video channels, spatial and temporal artifacts caused by coding, transmission losses, and visualization artifacts. For the case of source-plus-depth representation, artifacts caused by format conversion are added as well.

  13. BSDWormer; an Open Source Implementation of a Poisson Wavelet Multiscale Analysis for Potential Fields

    NASA Astrophysics Data System (ADS)

    Horowitz, F. G.; Gaede, O.

    2014-12-01

    Wavelet multiscale edge analysis of potential fields (a.k.a. "worms") has been known since Moreau et al. (1997) and was independently derived by Hornby et al. (1999). The technique is useful for producing a scale-explicit overview of the structures beneath a gravity or magnetic survey, including establishing the location and estimating the attitude of surface features, as well as incorporating information about the geometric class (point, line, surface, volume, fractal) of the underlying sources — in a fashion much like traditional structural indices from Euler solutions albeit with better areal coverage. Hornby et al. (2002) show that worms form the locally highest concentration of horizontal edges of a given strike — which in conjunction with the results from Mallat and Zhong (1992) induces a (non-unique!) inversion where the worms are physically interpretable as lateral boundaries in a source distribution that produces a close approximation of the observed potential field. The technique has enjoyed widespread adoption and success in the Australian mineral exploration community — including "ground truth" via successfully drilling structures indicated by the worms. Unfortunately, to our knowledge, all implementations of the code to calculate the worms/multiscale edges (including Horowitz' original research code) are either part of commercial software packages, or have copyright restrictions that impede the use of the technique by the wider community. The technique is completely described mathematically in Hornby et al. (1999) along with some later publications. This enables us to re-implement from scratch the code required to calculate and visualize the worms. We are freely releasing the results under an (open source) BSD two-clause software license. A git repository is available at . We will give an overview of the technique, show code snippets using the codebase, and present visualization results for example datasets (including the Surat basin of Australia, and the Lake Ontario region of North America). We invite you to join us in creating and using the best worming software for potential fields in existence — as both gratis and libre software!
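
    The mathematics are in Hornby et al. (1999); as a rough, hedged sketch of the underlying idea (upward continuation of a gridded field to a set of heights, followed by horizontal-gradient ridges that become the "worms"), the Python snippet below applies FFT-based continuation to a synthetic grid. It is an illustration of the principle only, not the BSDWormer code.

        # Multiscale-edge sketch: upward-continue a gridded field, then take the
        # horizontal gradient magnitude whose ridges trace the "worms".
        import numpy as np

        def upward_continue(field, dx, height):
            """Upward-continue a regularly gridded field by `height` (same units as dx)."""
            ny, nx = field.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
            k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
            return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * height)))

        def horizontal_gradient_magnitude(field, dx):
            gy, gx = np.gradient(field, dx)
            return np.hypot(gx, gy)

        grid = np.random.default_rng(1).normal(size=(128, 128))   # stand-in for a gravity/magnetic grid
        for h in (1.0, 2.0, 4.0):                                  # continuation heights play the role of scales
            edges = horizontal_gradient_magnitude(upward_continue(grid, dx=1.0, height=h), dx=1.0)
            print(h, edges.max())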

  14. VisTrails SAHM: visualization and workflow management for species habitat modeling

    USGS Publications Warehouse

    Morisette, Jeffrey T.; Jarnevich, Catherine S.; Holcombe, Tracy R.; Talbert, Colin B.; Ignizio, Drew A.; Talbert, Marian; Silva, Claudio; Koop, David; Swanson, Alan; Young, Nicholas E.

    2013-01-01

    The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps and modeling options incorporated in the construction of a species distribution model through the established workflow management and visualization VisTrails software. This paper provides an overview of the VisTrails:SAHM software including a link to the open source code, a table detailing the current SAHM modules, and a simple example modeling an invasive weed species in Rocky Mountain National Park, USA.

  15. MetExploreViz: web component for interactive metabolic network visualization.

    PubMed

    Chazalviel, Maxime; Frainay, Clément; Poupin, Nathalie; Vinson, Florence; Merlet, Benjamin; Gloaguen, Yoann; Cottret, Ludovic; Jourdan, Fabien

    2017-09-15

    MetExploreViz is an open source web component that can be easily embedded in any web site. It provides features dedicated to the visualization of metabolic networks and pathways and thus offers a flexible solution to analyze omics data in a biochemical context. Documentation and link to GIT code repository (GPL 3.0 license) are available at this URL: http://metexplore.toulouse.inra.fr/metexploreViz/doc/. Tutorial is available at this URL. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. SCRAM: a pipeline for fast index-free small RNA read alignment and visualization.

    PubMed

    Fletcher, Stephen J; Boden, Mikael; Mitter, Neena; Carroll, Bernard J

    2018-03-15

    Small RNAs play key roles in gene regulation, defense against viral pathogens and maintenance of genome stability, though many aspects of their biogenesis and function remain to be elucidated. SCRAM (Small Complementary RNA Mapper) is a novel, simple-to-use short read aligner and visualization suite that enhances exploration of small RNA datasets. The SCRAM pipeline is implemented in Go and Python, and is freely available under MIT license. Source code, multiplatform binaries and a Docker image can be accessed via https://sfletc.github.io/scram/. s.fletcher@uq.edu.au. Supplementary data are available at Bioinformatics online.

  17. CytoSPADE: high-performance analysis and visualization of high-dimensional cytometry data

    PubMed Central

    Linderman, Michael D.; Simonds, Erin F.; Qiu, Peng; Bruggner, Robert V.; Sheode, Ketaki; Meng, Teresa H.; Plevritis, Sylvia K.; Nolan, Garry P.

    2012-01-01

    Motivation: Recent advances in flow cytometry enable simultaneous single-cell measurement of 30+ surface and intracellular proteins. CytoSPADE is a high-performance implementation of an interface for the Spanning-tree Progression Analysis of Density-normalized Events algorithm for tree-based analysis and visualization of this high-dimensional cytometry data. Availability: Source code and binaries are freely available at http://cytospade.org and via Bioconductor version 2.10 onwards for Linux, OSX and Windows. CytoSPADE is implemented in R, C++ and Java. Contact: michael.linderman@mssm.edu Supplementary Information: Additional documentation available at http://cytospade.org. PMID:22782546

  18. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    PubMed Central

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  19. Four types of ensemble coding in data visualizations.

    PubMed

    Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven

    2016-01-01

    Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.

  20. An interactive web application for the dissemination of human systems immunology data.

    PubMed

    Speake, Cate; Presnell, Scott; Domico, Kelly; Zeitner, Brad; Bjork, Anna; Anderson, David; Mason, Michael J; Whalen, Elizabeth; Vargas, Olivia; Popov, Dimitry; Rinchai, Darawan; Jourde-Chiche, Noemie; Chiche, Laurent; Quinn, Charlie; Chaussabel, Damien

    2015-06-19

    Systems immunology approaches have proven invaluable in translational research settings. The current rate at which large-scale datasets are generated presents unique challenges and opportunities. Mining aggregates of these datasets could accelerate the pace of discovery, but new solutions are needed to integrate the heterogeneous data types with the contextual information that is necessary for interpretation. In addition, enabling tools and technologies facilitating investigators' interaction with large-scale datasets must be developed in order to promote insight and foster knowledge discovery. State of the art application programming was employed to develop an interactive web application for browsing and visualizing large and complex datasets. A collection of human immune transcriptome datasets were loaded alongside contextual information about the samples. We provide a resource enabling interactive query and navigation of transcriptome datasets relevant to human immunology research. Detailed information about studies and samples are displayed dynamically; if desired the associated data can be downloaded. Custom interactive visualizations of the data can be shared via email or social media. This application can be used to browse context-rich systems-scale data within and across systems immunology studies. This resource is publicly available online at [Gene Expression Browser Landing Page ( https://gxb.benaroyaresearch.org/dm3/landing.gsp )]. The source code is also available openly [Gene Expression Browser Source Code ( https://github.com/BenaroyaResearch/gxbrowser )]. We have developed a data browsing and visualization application capable of navigating increasingly large and complex datasets generated in the context of immunological studies. This intuitive tool ensures that, whether taken individually or as a whole, such datasets generated at great effort and expense remain interpretable and a ready source of insight for years to come.

  1. FAST: framework for heterogeneous medical image computing and visualization.

    PubMed

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphic processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the insight toolkit (ITK) and the visualization toolkit (VTK) and show that the presented framework is faster with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  2. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

    We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although, these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.

  3. Python for large-scale electrophysiology.

    PubMed

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.

  4. A DCM study of spectral asymmetries in feedforward and feedback connections between visual areas V1 and V4 in the monkey.

    PubMed

    Bastos, A M; Litvak, V; Moran, R; Bosman, C A; Fries, P; Friston, K J

    2015-03-01

    This paper reports a dynamic causal modeling study of electrocorticographic (ECoG) data that addresses functional asymmetries between forward and backward connections in the visual cortical hierarchy. Specifically, we ask whether forward connections employ gamma-band frequencies, while backward connections preferentially use lower (beta-band) frequencies. We addressed this question by modeling empirical cross spectra using a neural mass model equipped with superficial and deep pyramidal cell populations-that model the source of forward and backward connections, respectively. This enabled us to reconstruct the transfer functions and associated spectra of specific subpopulations within cortical sources. We first established that Bayesian model comparison was able to discriminate between forward and backward connections, defined in terms of their cells of origin. We then confirmed that model selection was able to identify extrastriate (V4) sources as being hierarchically higher than early visual (V1) sources. Finally, an examination of the auto spectra and transfer functions associated with superficial and deep pyramidal cells confirmed that forward connections employed predominantly higher (gamma) frequencies, while backward connections were mediated by lower (alpha/beta) frequencies. We discuss these findings in relation to current views about alpha, beta, and gamma oscillations and predictive coding in the brain. Copyright © 2015. Published by Elsevier Inc.

  5. Cytoscape.js: a graph theory library for visualisation and analysis.

    PubMed

    Franz, Max; Lopes, Christian T; Huck, Gerardo; Dong, Yue; Sumer, Onur; Bader, Gary D

    2016-01-15

    Cytoscape.js is an open-source JavaScript-based graph library. Its most common use case is as a visualization software component for rendering interactive graphs in a web browser. It can also be used headlessly, which is useful for graph operations in a server environment such as Node.js. Cytoscape.js is implemented in JavaScript. Documentation, downloads and source code are available at http://js.cytoscape.org. gary.bader@utoronto.ca. © The Author 2015. Published by Oxford University Press.

  6. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries, one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is provided so that a general user can learn by visualizing and interacting with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
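
    As a rough sketch of the carving idea described above (not the authors' AWK or C++ implementations), the following Python fragment keeps only those atoms of a bulk model that lie within a tube of radius tube_radius around a parametric helix of radius helix_radius and pitch pitch.

    ```python
    import numpy as np

    def carve_nanospring(atoms, helix_radius, pitch, n_turns, tube_radius, samples=2000):
        """Keep atoms lying within tube_radius of a helix wound around the z axis.

        atoms: (N, 3) array of Cartesian coordinates taken from a bulk model.
        The helix is x = R*cos(t), y = R*sin(t), z = pitch * t / (2*pi).
        """
        t = np.linspace(0.0, 2.0 * np.pi * n_turns, samples)
        helix = np.stack([helix_radius * np.cos(t),
                          helix_radius * np.sin(t),
                          pitch * t / (2.0 * np.pi)], axis=1)
        # Distance from each atom to the nearest sampled point on the helix axis.
        dists = np.linalg.norm(atoms[:, None, :] - helix[None, :, :], axis=2).min(axis=1)
        return atoms[dists <= tube_radius]
    ```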

  7. SNPversity: a web-based tool for visualizing diversity

    PubMed Central

    Schott, David A; Vinnakota, Abhinav G; Portwood, John L; Andorf, Carson M

    2018-01-01

    Many stand-alone desktop software suites exist to visualize single nucleotide polymorphism (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualization tool that can be implemented on a Unix-like machine and accessed worldwide through a web browser. SNPversity consists of an HDF5 database back-end for SNPs, a data exchange layer powered by TASSEL libraries that represent data in JSON format, and an interface layer using PHP to visualize SNP information. SNPversity displays data in real-time through a web browser in grids that are color-coded according to a given SNP’s allelic status and mutational state. SNPversity is currently available at MaizeGDB, the maize community’s database, and will soon be available at GrainGenes, the clade-oriented database for Triticeae and Avena species, including wheat, barley, rye, and oat. The code and documentation are hosted on GitHub and are freely available to the public. We expect that the tool will be highly useful for other biological databases with a similar need to display SNP diversity through their web interfaces. Database URL: https://www.maizegdb.org/snpversity PMID:29688387
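
    The abstract describes a three-layer design: HDF5 storage, a JSON data-exchange layer and a PHP display layer. Purely as an illustration of the storage-to-JSON step, and not SNPversity's actual schema or code, a genotype matrix could be sliced from HDF5 with h5py and serialized to JSON as follows (the dataset name is made up).

    ```python
    import json
    import h5py  # reads HDF5 files; the "genotypes" dataset name below is hypothetical

    def snp_slice_to_json(path, dataset="genotypes", start=0, stop=100):
        """Serialize rows [start, stop) of a SNP genotype matrix to a JSON string."""
        with h5py.File(path, "r") as f:
            block = f[dataset][start:stop].tolist()  # rows: SNPs, columns: samples
        return json.dumps({"dataset": dataset, "start": start, "stop": stop,
                           "genotypes": block})
    ```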

  8. Integrating segmentation methods from the Insight Toolkit into a visualization application.

    PubMed

    Martin, Ken; Ibáñez, Luis; Avila, Lisa; Barré, Sébastien; Kaspersen, Jon H

    2005-12-01

    The Insight Toolkit (ITK) initiative from the National Library of Medicine has provided a suite of state-of-the-art segmentation and registration algorithms ideally suited to volume visualization and analysis. A volume visualization application that effectively utilizes these algorithms provides many benefits: it allows access to ITK functionality for non-programmers, it creates a vehicle for sharing and comparing segmentation techniques, and it serves as a visual debugger for algorithm developers. This paper describes the integration of image processing functionalities provided by the ITK into VolView, a visualization application for high performance volume rendering. A free version of this visualization application is publicly available and is included in the online version of this paper. The process for developing ITK plugins for VolView according to the publicly available API is described in detail, and an application of ITK VolView plugins to the segmentation of Abdominal Aortic Aneurysms (AAAs) is presented. The source code of the ITK plugins is also publicly available and is included in the online version.

  9. Hybrid 3D visualization of the chest and virtual endoscopy of the tracheobronchial system: possibilities and limitations of clinical application.

    PubMed

    Seemann, M D; Claussen, C D

    2001-06-01

    A hybrid rendering method that combines a color-coded surface rendering method and a volume rendering method is described, enabling virtual endoscopic examinations using different representation models. 14 patients with malignancies of the lung and mediastinum (n=11) and lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both a simultaneous visualization of an airway, an airway lesion and mediastinal structures and a quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated or refused. Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and guiding bronchoscopic biopsy, surgical intervention and palliative therapy and is likely to be increasingly accepted as a screening method for people with suspected endobronchial malignancy and as a control examination in the aftercare of patients with malignant diseases.

  10. Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon, and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.

  11. Access Based Cost Estimation for Beddown Analysis

    DTIC Science & Technology

    2006-03-23

    logic. This research expands upon the existing research by using Visual Basic for Applications (VBA) to further customize and streamline the...methods with the use of VBA. Calculations are completed in either underlying Form VBA code or through global modules accessible throughout the...query and SQL referencing. Attempts were made where possible to align data structures with possible external sources to minimize import errors and

  12. Non-Linear Acoustic Concealed Weapons Detector

    DTIC Science & Technology

    2006-05-01

    signature analysis of the interactions of the beams with concealed objects. The Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is the most widely used...Hamilton developed a finite difference method based on the KZK equation to model pulsed acoustic emissions from axially symmetric sources. Using a...College of William & Mary, we have developed a simulation code using the KZK equation to model non-linear acoustic beams and visualize beam patterns

  13. CHiCP: a web-based tool for the integrative and interactive visualization of promoter capture Hi-C datasets.

    PubMed

    Schofield, E C; Carver, T; Achuthan, P; Freire-Pritchett, P; Spivakov, M; Todd, J A; Burren, O S

    2016-08-15

    Promoter capture Hi-C (PCHi-C) allows the genome-wide interrogation of physical interactions between distal DNA regulatory elements and gene promoters in multiple tissue contexts. Visual integration of the resultant chromosome interaction maps with other sources of genomic annotations can provide insight into underlying regulatory mechanisms. We have developed Capture HiC Plotter (CHiCP), a web-based tool that allows interactive exploration of PCHi-C interaction maps and integration with both public and user-defined genomic datasets. CHiCP is freely accessible from www.chicp.org and supports most major HTML5 compliant web browsers. Full source code and installation instructions are available from http://github.com/D-I-L/django-chicp ob219@cam.ac.uk. © The Author 2016. Published by Oxford University Press. All rights reserved.

  14. Reduced adaptability, but no fundamental disruption, of norm-based face coding following early visual deprivation from congenital cataracts.

    PubMed

    Rhodes, Gillian; Nishimura, Mayu; de Heering, Adelaide; Jeffery, Linda; Maurer, Daphne

    2017-05-01

    Faces are adaptively coded relative to visual norms that are updated by experience, and this adaptive coding is linked to face recognition ability. Here we investigated whether adaptive coding of faces is disrupted in individuals (adolescents and adults) who experience face recognition difficulties following visual deprivation from congenital cataracts in infancy. We measured adaptive coding using face identity aftereffects, where smaller aftereffects indicate less adaptive updating of face-coding mechanisms by experience. We also examined whether the aftereffects increase with adaptor identity strength, consistent with norm-based coding of identity, as in typical populations, or whether they show a different pattern indicating some more fundamental disruption of face-coding mechanisms. Cataract-reversal patients showed significantly smaller face identity aftereffects than did controls (Experiments 1 and 2). However, their aftereffects increased significantly with adaptor strength, consistent with norm-based coding (Experiment 2). Thus we found reduced adaptability but no fundamental disruption of norm-based face-coding mechanisms in cataract-reversal patients. Our results suggest that early visual experience is important for the normal development of adaptive face-coding mechanisms. © 2016 John Wiley & Sons Ltd.

  15. Nmrglue: an open source Python package for the analysis of multidimensional NMR data.

    PubMed

    Helmus, Jonathan J; Jaroniec, Christopher P

    2013-04-01

    Nmrglue, an open source Python package for working with multidimensional NMR data, is described. When used in combination with other Python scientific libraries, nmrglue provides a highly flexible and robust environment for spectral processing, analysis and visualization and includes a number of common utilities such as linear prediction, peak picking and lineshape fitting. The package also enables existing NMR software programs to be readily tied together, currently facilitating the reading, writing and conversion of data stored in Bruker, Agilent/Varian, NMRPipe, Sparky, SIMPSON, and Rowland NMR Toolkit file formats. In addition to standard applications, the versatility offered by nmrglue makes the package particularly suitable for tasks that include manipulating raw spectrometer data files, automated quantitative analysis of multidimensional NMR spectra with irregular lineshapes such as those frequently encountered in the context of biomacromolecular solid-state NMR, and rapid implementation and development of unconventional data processing methods such as covariance NMR and other non-Fourier approaches. Detailed documentation, install files and source code for nmrglue are freely available at http://nmrglue.com. The source code can be redistributed and modified under the New BSD license.
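
    As a minimal sketch of the read-convert-write pattern the abstract describes (file names are placeholders, and the exact calls should be checked against the nmrglue documentation), a Bruker data set can be read and written out in NMRPipe format:

    ```python
    import nmrglue as ng

    # Read a Bruker data set and convert it to NMRPipe format; paths are placeholders.
    dic, data = ng.bruker.read("expnmr_00001_1")
    converter = ng.convert.converter()
    converter.from_bruker(dic, data)
    pipe_dic, pipe_data = converter.to_pipe()
    ng.pipe.write("converted.fid", pipe_dic, pipe_data, overwrite=True)
    ```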

  16. Nmrglue: An Open Source Python Package for the Analysis of Multidimensional NMR Data

    PubMed Central

    Helmus, Jonathan J.; Jaroniec, Christopher P.

    2013-01-01

    Nmrglue, an open source Python package for working with multidimensional NMR data, is described. When used in combination with other Python scientific libraries, nmrglue provides a highly flexible and robust environment for spectral processing, analysis and visualization and includes a number of common utilities such as linear prediction, peak picking and lineshape fitting. The package also enables existing NMR software programs to be readily tied together, currently facilitating the reading, writing and conversion of data stored in Bruker, Agilent/Varian, NMRPipe, Sparky, SIMPSON, and Rowland NMR Toolkit file formats. In addition to standard applications, the versatility offered by nmrglue makes the package particularly suitable for tasks that include manipulating raw spectrometer data files, automated quantitative analysis of multidimensional NMR spectra with irregular lineshapes such as those frequently encountered in the context of biomacromolecular solid-state NMR, and rapid implementation and development of unconventional data processing methods such as covariance NMR and other non-Fourier approaches. Detailed documentation, install files and source code for nmrglue are freely available at http://nmrglue.com. The source code can be redistributed and modified under the New BSD license. PMID:23456039

  17. SmartR: an open-source platform for interactive visual analytics for translational research data

    PubMed Central

    Herzinger, Sascha; Gu, Wei; Satagopam, Venkata; Eifes, Serge; Rege, Kavita; Barbosa-Silva, Adriano; Schneider, Reinhard

    2017-01-01

    Summary: In translational research, efficient knowledge exchange between the different fields of expertise is crucial. An open platform that is capable of storing a multitude of data types such as clinical, pre-clinical or OMICS data combined with strong visual analytical capabilities will significantly accelerate the scientific progress by making data more accessible and hypothesis generation easier. The open data warehouse tranSMART is capable of storing a variety of data types and has a growing user community including both academic institutions and pharmaceutical companies. tranSMART, however, currently lacks interactive and dynamic visual analytics and does not permit any post-processing interaction or exploration. For this reason, we developed SmartR, a plugin for tranSMART, that equips the platform not only with several dynamic visual analytical workflows, but also provides its own framework for the addition of new custom workflows. Modern web technologies such as D3.js or AngularJS were used to build a set of standard visualizations that were heavily improved with dynamic elements. Availability and Implementation: The source code is licensed under the Apache 2.0 License and is freely available on GitHub: https://github.com/transmart/SmartR. Contact: reinhard.schneider@uni.lu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28334291

  18. SmartR: an open-source platform for interactive visual analytics for translational research data.

    PubMed

    Herzinger, Sascha; Gu, Wei; Satagopam, Venkata; Eifes, Serge; Rege, Kavita; Barbosa-Silva, Adriano; Schneider, Reinhard

    2017-07-15

    In translational research, efficient knowledge exchange between the different fields of expertise is crucial. An open platform that is capable of storing a multitude of data types such as clinical, pre-clinical or OMICS data combined with strong visual analytical capabilities will significantly accelerate the scientific progress by making data more accessible and hypothesis generation easier. The open data warehouse tranSMART is capable of storing a variety of data types and has a growing user community including both academic institutions and pharmaceutical companies. tranSMART, however, currently lacks interactive and dynamic visual analytics and does not permit any post-processing interaction or exploration. For this reason, we developed SmartR, a plugin for tranSMART, that equips the platform not only with several dynamic visual analytical workflows, but also provides its own framework for the addition of new custom workflows. Modern web technologies such as D3.js or AngularJS were used to build a set of standard visualizations that were heavily improved with dynamic elements. The source code is licensed under the Apache 2.0 License and is freely available on GitHub: https://github.com/transmart/SmartR. reinhard.schneider@uni.lu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  19. EPA Remote Sensing Information Gateway

    NASA Astrophysics Data System (ADS)

    Paulsen, H. K.; Szykman, J. J.; Plessel, T.; Freeman, M.; Dimmick, F.

    2009-12-01

    The Remote Sensing Information Gateway was developed by the U.S. Environmental Protection Agency (EPA) to assist researchers in easily obtaining and combining a variety of environmental datasets related to air quality research. Current datasets available include, but are not limited to, surface PM2.5 and O3 data, satellite-derived aerosol optical depth, and 3-dimensional output from U.S. EPA's Models 3/Community Multi-scale Air Quality (CMAQ) modeling system. The presentation will include a demonstration that illustrates several scenarios of how researchers use the tool to help them visualize and obtain data for their work, with a particular focus on episode analysis related to biomass burning impacts on air quality. The presentation will provide an overview on how RSIG works and how the code has been—and can be—adapted for other projects. One example is the Virtual Estuary, which focuses on automating the retrieval and pre-processing of a variety of data needed for estuarine research. RSIG’s source codes are freely available to researchers with permission from the EPA principal investigator, Dr. Jim Szykman. RSIG is available to the community and can be accessed online at http://www.epa.gov/rsig. Once the Java policy file is configured, you can run the RSIG applet on your computer and connect to the RSIG server to visualize and retrieve available data sets. The applet allows the user to specify the temporal/spatial areas of interest, and the types of data to retrieve. The applet then communicates with RSIG subsetter codes located on the data owners’ remote servers; the subsetter codes assemble and transfer via ordinary Internet protocols only the specified data to the researcher’s computer. This is much faster than the usual method of transferring large files via FTP and greatly reduces network traffic. The RSIG applet then visualizes the transferred data on a latitude-longitude map, automatically locating the data in the correct geographic position. Images, animations, and aggregated data can be saved or exported in a variety of data formats: Binary External Data Representation (XDR) format (simple, efficient), NetCDF-COARDS format, NetCDF-IOAPI format (regridding the data to a CMAQ grid), HDF (unsubsetted satellite files), ASCII tab-delimited spreadsheet, MCMC (used for input into HB model), PNG images, MPG movies, KMZ movies (for display in Google Earth and similar applications), GeoTIFF RGB format and 32-bit float format. RSIG’s source codes are freely available to researchers with permission from the EPA. Contacts for obtaining RSIG code are available at the RSIG website.

  20. jSquid: a Java applet for graphical on-line network exploration.

    PubMed

    Klammer, Martin; Roopra, Sanjit; Sonnhammer, Erik L L

    2008-06-15

    jSquid is a graph visualization tool for exploring graphs from protein-protein interaction or functional coupling networks. The tool was designed for the FunCoup web site, but can be used for any similar network exploring purpose. The program offers various visualization and graph manipulation techniques to increase the utility for the user. jSquid is available for direct usage and download at http://jSquid.sbc.su.se, including source code (under the GPLv3 license) and input examples. It requires Java version 5 or higher to run properly. erik.sonnhammer@sbc.su.se Supplementary data are available at Bioinformatics online.

  1. fluff: exploratory analysis and visualization of high-throughput sequencing data

    PubMed Central

    Georgiou, Georgios

    2016-01-01

    Summary. In this article we describe fluff, a software package that allows for simple exploration, clustering and visualization of high-throughput sequencing data mapped to a reference genome. The package contains three command-line tools to generate publication-quality figures in an uncomplicated manner using sensible defaults. Genome-wide data can be aggregated, clustered and visualized in a heatmap, according to different clustering methods. This includes a predefined setting to identify dynamic clusters between different conditions or developmental stages. Alternatively, clustered data can be visualized in a bandplot. Finally, fluff includes a tool to generate genomic profiles. As command-line tools, the fluff programs can easily be integrated into standard analysis pipelines. The installation is straightforward and documentation is available at http://fluff.readthedocs.org. Availability. fluff is implemented in Python and runs on Linux. The source code is freely available for download at https://github.com/simonvh/fluff. PMID:27547532
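
    fluff itself is driven from the command line; purely as an illustration of the cluster-then-heatmap idea (and not fluff's code or options), binned coverage profiles can be k-means clustered and rendered as a heatmap with standard scientific Python libraries.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.vq import kmeans2

    # profiles: one row per genomic region, one column per bin (random stand-in data).
    profiles = np.random.rand(500, 100)
    _, labels = kmeans2(profiles, 5, minit="points")   # assign each region to one of 5 clusters
    order = np.argsort(labels)                         # group rows by cluster for display
    plt.imshow(profiles[order], aspect="auto", cmap="viridis")
    plt.xlabel("bin"); plt.ylabel("region (grouped by cluster)")
    plt.savefig("heatmap.png", dpi=150)
    ```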

  2. Python for Large-Scale Electrophysiology

    PubMed Central

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (“dimstim”); one for electrophysiological waveform visualization and spike sorting (“spyke”); and one for spike train and stimulus analysis (“neuropy”). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience. PMID:19198646

  3. Visualizing the gravitational lensing and vortex and tendex lines of colliding black holes

    NASA Astrophysics Data System (ADS)

    Khan, Haroon; Lovelace, Geoffery; SXS Collaboration

    2016-03-01

    Gravitational waves (GWs) are ripples of space and time that are created when the universe unleashes its violent nature in the presence of strong gravity. Merging black holes (BH) are one of the most promising sources of GWs. In order to detect and physically study the GWs emitted by merging BH with ground-based detectors such as Advanced LIGO, we must accurately predict how the waveforms look and behave. This can only be done by numerically simulating BH mergers on supercomputers, because all analytical approximations fail near the time of merger. This poster focuses on using these simulations to answer the question of "What do merging BH look like?" I will present visualizations made using the Spectral Einstein Code (SpEC) and in particular a raytracing lensing code, developed by the SXS Lensing team, that shows how merging BH bend the light around them. I will also present visualizations of the vortex and tendex lines for a binary BH system, using SpEC. Vortex lines describe how an observer will be twisted by the BH and the tendex lines describe how much an observer would be stretched and squeezed. I am exploring how these lines change with time.

  4. Phonological, visual, and semantic coding strategies and children's short-term picture memory span.

    PubMed

    Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura

    2012-01-01

    Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.

  5. Employee satisfaction: code red in the workplace?

    PubMed

    McBride, Eldon L

    2002-09-01

    Concerns about employee satisfaction are just as critical in the health care industry as they are in other business sectors. This article highlights the commonality of employee opinion survey questions (from 3 sources) that most highly correlate with employee satisfaction. A visual coaching tool for managers will be introduced that enables managers to facilitate constructive work group conversations about their experience of workplace satisfaction and generate ideas they can implement for improvement.

  6. Social and Political Event Data to Support Army Requirements: Volume 1

    DTIC Science & Technology

    2017-11-01

    available information. Geographic data at the city level does not provide enough spatial fidelity for tactical-level analyses. Violent Events Socio-Cultural...analyze and/or visualize the data to produce mission-relevant information. Hand-coded datasets can be more precise, but they require added time and labor...Figure 4. Process to transform event data into mission-relevant information. Table 1. Sources of event data

  7. An Extensible NetLogo Model for Visualizing Message Routing Protocols

    DTIC Science & Technology

    2017-08-01

    the hard sciences to the social sciences to computer-generated art. NetLogo represents the world as a set of...describe the model is shown here; for the supporting methods, refer to the source code....if ticks - last-inject > time-to-inject [inject] if run# > #runs [stop] end Next, we present some basic statistics collected for the

  8. Visible Languages for Program Visualization

    DTIC Science & Technology

    1986-02-01

    Comments. The Presentation of Program Metadata. The Spatial Composition of Comments. The Typography of Punctuation. Typographic Encodings... Typography of Program Punctuation. In this example the "" appears in 10 point regular Helvetica type, and thus uses the same typographic parameters as...Results. Conclusions. Chapter 4: Graphic Design of C Source Code and Comments, Section: The Typography of Punctuation

  9. Visual search asymmetries within color-coded and intensity-coded displays.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  10. Microsoft C#.NET program and electromagnetic depth sounding for large loop source

    NASA Astrophysics Data System (ADS)

    Prabhakar Rao, K.; Ashok Babu, G.

    2009-07-01

    A program in the C# (C Sharp) language, built on the Microsoft .NET Framework, is developed to compute the normalized vertical magnetic field of a horizontal rectangular loop source placed on the surface of an n-layered earth. The field can be calculated either inside or outside the loop. Five C# classes, each with its own member functions, are designed to compute the kernel, the Hankel transform integral, the coefficients for cubic spline interpolation between computed values, and the normalized vertical magnetic field. The program computes the vertical magnetic field in the frequency domain using integral expressions evaluated by a combination of straightforward numerical integration and the digital filter technique. The code utilizes different object-oriented programming (OOP) features. It finally computes the amplitude and phase of the normalized vertical magnetic field. The computed results are presented for geometric and parametric soundings. The code is developed in Microsoft Visual Studio .NET 2003 and uses various system class libraries.

  11. A universal Model-R Coupler to facilitate the use of R functions for model calibration and analysis

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Yan, Wende

    2014-01-01

    Mathematical models are useful in various fields of science and engineering. However, it is a challenge to make a model use the open and growing set of functions available on the R platform (e.g., for model inversion), because doing so normally requires accessing and revising the model's source code. To overcome this barrier, we developed a universal tool that aims to convert a model developed in any computer language to an R function using the template and instruction concept of the Parameter ESTimation program (PEST) and the operational structure of the R-Soil and Water Assessment Tool (R-SWAT). The developed tool (Model-R Coupler) is promising because users of any model can connect an external algorithm (written in R) with their model to implement various model behavior analyses (e.g., parameter optimization, sensitivity and uncertainty analysis, performance evaluation, and visualization) without accessing or modifying the model's source code.
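
    A minimal sketch of the template-and-run idea borrowed from PEST, written here in Python rather than R and not the Model-R Coupler code itself: placeholders in a template input file are substituted with parameter values, the external model executable is run, and its output is read back, so an optimizer can treat the whole model as an ordinary function. All file names are hypothetical.

    ```python
    import subprocess
    from pathlib import Path

    def run_model(params, template="model.in.tpl", infile="model.in",
                  outfile="model.out", exe="./model"):
        """Substitute placeholders such as @K@ in the template, run the model,
        and return its simulated values (one number per output token)."""
        text = Path(template).read_text()
        for name, value in params.items():
            text = text.replace(f"@{name}@", f"{value:.6g}")
        Path(infile).write_text(text)
        subprocess.run([exe], check=True)   # the wrapped model reads infile and writes outfile
        return [float(token) for token in Path(outfile).read_text().split()]
    ```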

  12. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  13. The IRGen infrared data base modeler

    NASA Technical Reports Server (NTRS)

    Bernstein, Uri

    1993-01-01

    IRGen is a modeling system which creates three-dimensional IR data bases for real-time simulation of thermal IR sensors. Starting from a visual data base, IRGen computes the temperature and radiance of every data base surface with a user-specified thermal environment. The predicted gray shade of each surface is then computed from the user specified sensor characteristics. IRGen is based on first-principles models of heat transport and heat flux sources, and it accurately simulates the variations of IR imagery with time of day and with changing environmental conditions. The starting point for creating an IRGen data base is a visual faceted data base, in which every facet has been labeled with a material code. This code is an index into a material data base which contains surface and bulk thermal properties for the material. IRGen uses the material properties to compute the surface temperature at the specified time of day. IRGen also supports image generator features such as texturing and smooth shading, which greatly enhance image realism.
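
    As a loose illustration of the final step described above, and not IRGen's actual model, a band-integrated surface radiance can be approximated from temperature and emissivity and then mapped through a linear sensor response onto an 8-bit gray shade.

    ```python
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def gray_shade(temperature_k, emissivity, radiance_min, radiance_max):
        """Map an approximate surface radiance onto a 0-255 display value.

        The total-emission approximation and the linear sensor response are
        deliberate simplifications for illustration only.
        """
        radiance = emissivity * SIGMA * temperature_k ** 4   # W m^-2, band effects ignored
        x = (radiance - radiance_min) / (radiance_max - radiance_min)
        return int(round(255 * min(max(x, 0.0), 1.0)))

    print(gray_shade(300.0, 0.95, 300.0, 600.0))  # a warm surface maps near mid-gray
    ```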

  14. Role of Oculoproprioception in Coding the Locus of Attention.

    PubMed

    Odoj, Bartholomaeus; Balslev, Daniela

    2016-03-01

    The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention.

  15. SimITK: rapid ITK prototyping using the Simulink visual programming environment

    NASA Astrophysics Data System (ADS)

    Dickinson, A. W. L.; Mousavi, P.; Gobbi, D. G.; Abolmaesumi, P.

    2011-03-01

    The Insight Segmentation and Registration Toolkit (ITK) is a long-established software package used for image analysis, visualization, and image-guided surgery applications. This package is a collection of C++ libraries, which can pose usability problems for users without C++ programming experience. To bridge the gap between the programming complexities and the required learning curve of ITK, we present a higher-level visual programming environment that represents ITK methods and classes by wrapping them into "blocks" within MATLAB's visual programming environment, Simulink. These blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. Due to the heavily C++ templated nature of ITK, direct interaction between Simulink and ITK requires an intermediary to convert their respective datatypes and allow intercommunication. We have developed a "Virtual Block" that serves as an intermediate wrapper around the ITK class and is responsible for resolving the templated datatypes used by ITK to native types used by Simulink. Presently, the wrapping procedure for SimITK is semi-automatic in that it requires XML descriptions of the ITK classes as a starting point, as this data is used to create all other necessary integration files. The generation of all source code and object code from the XML is done automatically by a CMake build script that yields Simulink blocks as the final result. An example 3D segmentation workflow using cranial-CT data as well as a 3D MR-to-CT registration workflow are presented as a proof-of-concept.

  16. Wireless visual sensor network resource allocation using cross-layer optimization

    NASA Astrophysics Data System (ADS)

    Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.

    2009-01-01

    In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
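
    To make the minimax criterion concrete, the sketch below exhaustively scores candidate (source rate, channel code rate, power) triples for two nodes against a made-up distortion function and keeps the combination whose worst-node distortion is smallest, subject to a total chip-rate budget. It illustrates the kind of search involved, not the paper's algorithm, URDCs or distortion model.

    ```python
    import itertools

    # Hypothetical per-node options: (source rate in kbps, channel code rate, power in mW).
    options = [(r_s, r_c, p) for r_s in (64, 128, 256)
                             for r_c in (0.5, 0.75)
                             for p in (10.0, 20.0)]

    def distortion(r_s, r_c, p):
        # Toy stand-in: more source bits, stronger coding (smaller r_c) and more power all help.
        return 1000.0 / r_s + 5.0 * r_c / p

    def chip_rate(r_s, r_c, spreading=64):
        return r_s / r_c * spreading            # coded bit rate times spreading factor

    best = None
    for combo in itertools.product(options, repeat=2):            # two nodes, for brevity
        if sum(chip_rate(r_s, r_c) for r_s, r_c, _ in combo) > 40000.0:
            continue                                              # violates total chip-rate budget
        worst = max(distortion(*node) for node in combo)          # minimax criterion
        if best is None or worst < best[0]:
            best = (worst, combo)
    print(best)
    ```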

  17. CALCMIN - an EXCEL™ Visual Basic application for calculating mineral structural formulae from electron microprobe analyses

    NASA Astrophysics Data System (ADS)

    Brandelik, Andreas

    2009-07-01

    CALCMIN, an open source Visual Basic program, was implemented in EXCEL™. The program was primarily developed to support geoscientists in their routine task of calculating structural formulae of minerals on the basis of chemical analyses mainly obtained by electron microprobe (EMP) techniques. Calculation programs for various minerals are already included in the form of sub-routines. These routines are arranged in separate modules containing a minimum of code. The architecture of CALCMIN allows the user to easily develop new calculation routines or modify existing routines with little knowledge of programming techniques. By means of a simple mouse-click, the program automatically generates a rudimentary framework of code using the object model of the Visual Basic Editor (VBE). Within this framework simple commands and functions, which are provided by the program, can be used, for example, to perform various normalization procedures or to output the results of the computations. For the clarity of the code, element symbols are used as variables initialized by the program automatically. CALCMIN does not set any boundaries on the complexity of the code used, resulting in a wide range of possible applications. Thus, matrix and optimization methods can be included, for instance, to determine end member contents for subsequent thermodynamic calculations. Diverse input procedures are provided, such as the automated read-in of output files created by the EMP. Furthermore, a subsequent filter routine enables the user to extract specific analyses in order to use them for a corresponding calculation routine. An event-driven, interactive operating mode was selected for easy application of the program. CALCMIN leads the user from the beginning to the end of the calculation process.
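
    To indicate the kind of normalization such a routine performs, here is a generic sketch in Python (not CALCMIN's Visual Basic code): oxide weight percents are converted to cations per formula unit on a fixed-oxygen basis.

    ```python
    # Molecular weight (g/mol), cations per oxide, oxygens per oxide.
    OXIDES = {"SiO2": (60.08, 1, 2), "Al2O3": (101.96, 2, 3),
              "FeO": (71.84, 1, 1), "MgO": (40.30, 1, 1), "CaO": (56.08, 1, 1)}

    def structural_formula(wt_percent, oxygen_basis=6.0):
        """Cations per formula unit from oxide wt% for a chosen oxygen basis (6 for a pyroxene)."""
        moles = {ox: wt / OXIDES[ox][0] for ox, wt in wt_percent.items()}
        total_oxygen = sum(m * OXIDES[ox][2] for ox, m in moles.items())
        scale = oxygen_basis / total_oxygen
        return {ox: m * OXIDES[ox][1] * scale for ox, m in moles.items()}

    # A roughly diopside-like analysis (wt%), for illustration only.
    print(structural_formula({"SiO2": 55.5, "Al2O3": 0.5, "FeO": 2.0, "MgO": 18.0, "CaO": 24.0}))
    ```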

  18. HitWalker2: visual analytics for precision medicine and beyond.

    PubMed

    Bottomly, Daniel; McWeeney, Shannon K; Wilmot, Beth

    2016-04-15

    The lack of visualization frameworks to guide interpretation and facilitate discovery is a potential bottleneck for precision medicine, systems genetics and other studies. To address this we have developed an interactive, reproducible, web-based prioritization approach that builds on our earlier work. HitWalker2 is highly flexible and can use many data types and prioritization methods, depending on the available data and the questions of interest, allowing it to be applied in a diverse range of studies such as cancer, infectious disease and psychiatric disorders. Source code is freely available at https://github.com/biodev/HitWalker2 and implemented using Python/Django, Neo4j and Javascript (D3.js and jQuery). We support major open source browsers (e.g. Firefox and Chromium/Chrome). wilmotb@ohsu.edu Supplementary data are available at Bioinformatics online. Additional information/instructions are available at https://github.com/biodev/HitWalker2/wiki. © The Author 2015. Published by Oxford University Press.

  19. ePMV embeds molecular modeling into professional animation software environments.

    PubMed

    Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J

    2011-03-09

    Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. ePMV Embeds Molecular Modeling into Professional Animation Software Environments

    PubMed Central

    Johnson, Graham T.; Autin, Ludovic; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.

    2011-01-01

    SUMMARY Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers, we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. PMID:21397181

  1. cellPACK: A Virtual Mesoscope to Model and Visualize Structural Systems Biology

    PubMed Central

    Johnson, Graham T.; Autin, Ludovic; Al-Alusi, Mostafa; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.

    2014-01-01

    cellPACK assembles computational models of the biological mesoscale, an intermediate scale (10⁻⁷–10⁻⁸ m) between molecular and cellular biology. cellPACK’s modular architecture unites existing and novel packing algorithms to generate, visualize and analyze comprehensive 3D models of complex biological environments that integrate data from multiple experimental systems biology and structural biology sources. cellPACK is currently available as open source code, with tools for validation of models and with recipes and models for five biological systems: blood plasma, cytoplasm, synaptic vesicles, HIV and a mycoplasma cell. We have applied cellPACK to model distributions of HIV envelope protein to test several hypotheses for consistency with experimental observations. Biologists, educators, and outreach specialists can interact with cellPACK models, develop new recipes and perform packing experiments through scripting and graphical user interfaces at http://cellPACK.org. PMID:25437435

  2. LocusExplorer: a user-friendly tool for integrated visualization of human genetic association data and biological annotations.

    PubMed

    Dadaev, Tokhir; Leongamornlert, Daniel A; Saunders, Edward J; Eeles, Rosalind; Kote-Jarai, Zsofia

    2016-03-15

    : In this article, we present LocusExplorer, a data visualization and exploration tool for genetic association data. LocusExplorer is written in R using the Shiny library, providing access to powerful R-based functions through a simple user interface. LocusExplorer allows users to simultaneously display genetic, statistical and biological data for humans in a single image and allows dynamic zooming and customization of the plot features. Publication quality plots may then be produced in a variety of file formats. LocusExplorer is open source and runs through R and a web browser. It is available at www.oncogenetics.icr.ac.uk/LocusExplorer/ or can be installed locally and the source code accessed from https://github.com/oncogenetics/LocusExplorer tokhir.dadaev@icr.ac.uk. © The Author 2015. Published by Oxford University Press.

  3. 3Dmol.js: molecular visualization with WebGL.

    PubMed

    Rego, Nicholas; Koes, David

    2015-04-15

    3Dmol.js is a modern, object-oriented JavaScript library that uses the latest web technologies to provide interactive, hardware-accelerated three-dimensional representations of molecular data without the need to install browser plugins or Java. 3Dmol.js provides a full-featured API for developers as well as a straightforward declarative interface that lets users easily share and embed molecular data in websites. 3Dmol.js is distributed under the permissive BSD open source license. Source code and documentation can be found at http://3Dmol.csb.pitt.edu dkoes@pitt.edu. © The Author 2014. Published by Oxford University Press.
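
    3Dmol.js itself runs in the browser; from Python notebooks it is commonly driven through the py3Dmol wrapper. The snippet below is a minimal sketch of that usage, assuming py3Dmol is installed and a network connection is available to fetch the structure.

    ```python
    import py3Dmol

    # Fetch ubiquitin (PDB 1UBQ) and render it as a spectrum-colored cartoon in a notebook cell.
    view = py3Dmol.view(query="pdb:1ubq", width=480, height=360)
    view.setStyle({"cartoon": {"color": "spectrum"}})
    view.show()
    ```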

  4. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for an effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and of course XYZ positions encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.

  5. deFUME: Dynamic exploration of functional metagenomic sequencing data.

    PubMed

    van der Helm, Eric; Geertz-Hansen, Henrik Marcus; Genee, Hans Jasper; Malla, Sailesh; Sommer, Morten Otto Alexander

    2015-07-31

    Functional metagenomic selections represent a powerful technique that is widely applied for identification of novel genes from complex metagenomic sources. However, whereas hundreds to thousands of clones can be easily generated and sequenced over a few days of experiments, analyzing the data is time consuming and constitutes a major bottleneck for experimental researchers in the field. Here we present the deFUME web server, an easy-to-use web-based interface for processing, annotation and visualization of functional metagenomics sequencing data, tailored to meet the requirements of non-bioinformaticians. The web-server integrates multiple analysis steps into one single workflow: read assembly, open reading frame prediction, and annotation with BLAST, InterPro and GO classifiers. Analysis results are visualized in an online dynamic web-interface. The deFUME webserver provides a fast track from raw sequence to a comprehensive visual data overview that facilitates effortless inspection of gene function, clustering and distribution. The webserver is available at cbs.dtu.dk/services/deFUME/ and the source code is distributed at github.com/EvdH0/deFUME.

  6. GeneXplorer: an interactive web application for microarray data visualization and analysis.

    PubMed

    Rees, Christian A; Demeter, Janos; Matese, John C; Botstein, David; Sherlock, Gavin

    2004-10-01

    When publishing large-scale microarray datasets, it is of great value to create supplemental websites where either the full data, or selected subsets corresponding to figures within the paper, can be browsed. We set out to create a CGI application containing many of the features of some of the existing standalone software for the visualization of clustered microarray data. We present GeneXplorer, a web application for interactive microarray data visualization and analysis in a web environment. GeneXplorer allows users to browse a microarray dataset in an intuitive fashion. It provides simple access to microarray data over the Internet and uses only HTML and JavaScript to display graphic and annotation information. It provides radar and zoom views of the data, allows display of the nearest neighbors to a gene expression vector based on their Pearson correlations and provides the ability to search gene annotation fields. The software is released under the permissive MIT Open Source license, and the complete documentation and the entire source code are freely available for download from CPAN http://search.cpan.org/dist/Microarray-GeneXplorer/.
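
    The "nearest neighbours by Pearson correlation" view described above can be sketched in a few lines of NumPy. This is a generic illustration, not GeneXplorer's own code (which is distributed via CPAN); the data and function names are invented.

        # Minimal sketch (not GeneXplorer code): ranking genes by Pearson
        # correlation to a query expression vector, as in the nearest-neighbour view.
        import numpy as np

        def nearest_neighbors(expression, query_index, k=10):
            """expression: (genes, arrays) matrix; returns the k most correlated genes."""
            query = expression[query_index]
            centered = expression - expression.mean(axis=1, keepdims=True)
            q = query - query.mean()
            # Pearson correlation of every gene profile against the query profile.
            corr = centered @ q / (np.linalg.norm(centered, axis=1) * np.linalg.norm(q))
            order = np.argsort(-corr)                    # most similar first
            return [(i, corr[i]) for i in order if i != query_index][:k]

        data = np.random.rand(100, 12)                   # 100 genes measured on 12 arrays
        for gene, r in nearest_neighbors(data, query_index=0, k=5):
            print(f"gene {gene}: r = {r:.3f}")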

  7. Building a Billion Spatio-Temporal Object Search and Visualization Platform

    NASA Astrophysics Data System (ADS)

    Kakkar, D.; Lewis, B.

    2017-10-01

    With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.
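
    To make the streaming-and-indexing step concrete, the following is a hedged Python sketch of consuming a geo-tweet stream and indexing it into Solr. It assumes the third-party kafka-python and pysolr client libraries; the topic name, Solr core name, and field names are hypothetical and do not reflect the actual BOP configuration.

        # Hedged sketch: consume a stream of geo-tweets and index them into Solr.
        # Assumes kafka-python and pysolr; all names below are hypothetical.
        import json
        from kafka import KafkaConsumer
        import pysolr

        consumer = KafkaConsumer(
            "geo-tweets",                                # hypothetical topic name
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda m: json.loads(m.decode("utf-8")),
        )
        solr = pysolr.Solr("http://localhost:8983/solr/bop", always_commit=False)

        batch = []
        for message in consumer:
            tweet = message.value
            doc = {
                "id": tweet["id"],
                "created_at": tweet["created_at"],
                "location_rpt": f"{tweet['lat']},{tweet['lon']}",  # hypothetical spatial field
                "text": tweet["text"],
            }
            batch.append(doc)
            if len(batch) >= 1000:                       # index in batches for throughput
                solr.add(batch)
                batch = []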

  8. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    PubMed

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-02

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    PubMed

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
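
    The inter-frame idea (exploit temporal redundancy by coding only what changed between frames) can be illustrated with a toy Python sketch. This is not the coding primitives proposed in the paper, merely a simplified residual-coding example with invented descriptor sizes.

        # Toy sketch (not the authors' codec): inter-frame coding of a binary
        # descriptor by transmitting only the positions of bits that differ
        # from the previous frame's reference descriptor.
        import numpy as np

        def encode_residual(current, reference):
            """Both arguments are 0/1 numpy arrays of equal length."""
            residual = np.bitwise_xor(current, reference)
            return np.flatnonzero(residual)              # bit positions to transmit

        def decode_residual(changed, reference):
            decoded = reference.copy()
            decoded[changed] ^= 1                        # flip the signalled bits
            return decoded

        rng = np.random.default_rng(0)
        prev = rng.integers(0, 2, size=256)              # e.g. a 256-bit binary descriptor
        curr = prev.copy()
        curr[rng.choice(256, size=8, replace=False)] ^= 1    # small temporal change

        positions = encode_residual(curr, prev)
        assert np.array_equal(decode_residual(positions, prev), curr)
        print(f"{positions.size} changed bits coded instead of {curr.size} raw bits")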

  10. Coding Local and Global Binary Visual Features Extracted From Video Sequences

    NASA Astrophysics Data System (ADS)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.

  11. A measure of short-term visual memory based on the WISC-R coding subtest.

    PubMed

    Collaer, M L; Evans, J R

    1982-07-01

    Adapted the Coding subtest of the WISC-R to provide a measure of visual memory. Three hundred and five children, aged 8 through 12, were administered the Coding test using standard directions. A few seconds after completion the key was taken away, and each was given a paper with only the digits and asked to write the appropriate matching symbol below each. This was termed "Coding Recall." To provide validity data, a subgroup of 50 Ss also was administered the Attention Span for Letters subtest from the Detroit Tests of Learning Aptitude (as a test of visual memory for sequences of letters) and a Bender Gestalt recall test (as a measure of visual memory for geometric forms). Coding Recall means and standard deviations are reported separately by sex and age level. Implications for clinicians are discussed. Reservations about clinical use of the data are given in view of the possible lack of representativeness of the sample used and the limited reliability and validity of Coding Recall.

  12. Visual Coding of Human Bodies: Perceptual Aftereffects Reveal Norm-Based, Opponent Coding of Body Identity

    ERIC Educational Resources Information Center

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.

    2013-01-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…

  13. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    PubMed

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  14. Ensemble coding remains accurate under object and spatial visual working memory load.

    PubMed

    Epstein, Michael L; Emmanouil, Tatiana A

    2017-10-01

    A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.

  15. The Development of Visual Interface Enhancements for Player Input to the JTLS (Joint Theater-Level Simulation) Wargame.

    DTIC Science & Technology

    1987-03-01

    Fragmentary excerpt (extraction residue from the report's table of contents and text): Tables 2, 4, 6, and 8 are the respective prototype versions of the source code for Steps 1 through 4. There are several noticeable differences between the current and prototype versions; requiring the user to find the prompt in the scroll area in order to make an input is noted as distracting and time consuming.

  16. CTViz: A tool for the visualization of transport in nanocomposites.

    PubMed

    Beach, Benjamin; Brown, Joshua; Tarlton, Taylor; Derosa, Pedro A

    2016-05-01

    A visualization tool (CTViz) for charge transport processes in 3-D hybrid materials (nanocomposites) was developed, inspired by the need for a graphical application to assist in code debugging and data presentation of an existing in-house code. As the simulation code grew, troubleshooting problems grew increasingly difficult without an effective way to visualize 3-D samples and charge transport in those samples. CTViz is able to produce publication and presentation quality visuals of the simulation box, as well as static and animated visuals of the paths of individual carriers through the sample. CTViz was designed to provide a high degree of flexibility in the visualization of the data. A feature that characterizes this tool is the use of shade and transparency levels to highlight important details in the morphology or in the transport paths by hiding or dimming elements of little relevance to the current view. This is fundamental for the visualization of 3-D systems with complex structures. The code presented here provides these required capabilities, but has gone beyond the original design and could be used as is or easily adapted for the visualization of other particulate transport where transport occurs on discrete paths. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. JDet: interactive calculation and visualization of function-related conservation patterns in multiple sequence alignments and structures.

    PubMed

    Muth, Thilo; García-Martín, Juan A; Rausell, Antonio; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2012-02-15

    We have implemented in a single package all the features required for extracting, visualizing and manipulating fully conserved positions as well as those with a family-dependent conservation pattern in multiple sequence alignments. The program allows the user, among other things, to run different methods for extracting these positions, to combine the results, and to visualize them in protein 3D structures and sequence spaces. JDet is a multiplatform application written in Java. It is freely available, including the source code, at http://csbg.cnb.csic.es/JDet. The package includes two of our recently developed programs for detecting functional positions in protein alignments (Xdet and S3Det), and support for other methods can be added as plug-ins. A help file and a guided tutorial for JDet are also available.

  18. BioCichlid: central dogma-based 3D visualization system of time-course microarray data on a hierarchical biological network.

    PubMed

    Ishiwata, Ryosuke R; Morioka, Masaki S; Ogishima, Soichi; Tanaka, Hiroshi

    2009-02-15

    BioCichlid is a 3D visualization system of time-course microarray data on molecular networks, aiming at interpretation of gene expression data by transcriptional relationships based on the central dogma with physical and genetic interactions. BioCichlid visualizes both physical (protein) and genetic (regulatory) network layers, and provides animation of time-course gene expression data on the genetic network layer. Transcriptional regulations are represented to bridge the physical network (transcription factors) and genetic network (regulated genes) layers, thus integrating promoter analysis into the pathway mapping. BioCichlid enhances the interpretation of microarray data and allows for revealing the underlying mechanisms causing differential gene expressions. BioCichlid is freely available and can be accessed at http://newton.tmd.ac.jp/. Source code for both the BioCichlid server and client is also available.

  19. NanoPack: visualizing and processing long read sequencing data.

    PubMed

    De Coster, Wouter; D'Hert, Svenn; Schultz, Darrin T; Cruts, Marc; Van Broeckhoven, Christine

    2018-03-14

    Here we describe NanoPack, a set of tools developed for visualization and processing of long read sequencing data from Oxford Nanopore Technologies and Pacific Biosciences. The NanoPack tools are written in Python3 and released under the GNU GPL3.0 License. The source code can be found at https://github.com/wdecoster/nanopack, together with links to separate scripts and their documentation. The scripts are compatible with Linux, Mac OS and the MS Windows 10 subsystem for Linux and are available as a graphical user interface, a web service at http://nanoplot.bioinf.be and command line tools. wouter.decoster@molgen.vib-ua.be. Supplementary tables and figures are available at Bioinformatics online.

  20. Processing of Visual--Action Codes by Deaf and Hearing Children: Coding Orientation or "M"-Capacity?

    ERIC Educational Resources Information Center

    Todman, John; Cowdy, Natascha

    1993-01-01

    Results from a study in which 25 deaf children and 25 hearing children completed a vocabulary test and a compound stimulus visual information task support the hypothesis that performance on cognitive tasks is dependent on compatibility of task demands with a coding orientation. (SLD)

  1. Short-Term Memory Coding in Children with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Henry, Lucy

    2008-01-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and…

  2. Audio-visual speech cue combination.

    PubMed

    Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick

    2010-04-16

    Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
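
    The maximum-likelihood benchmark referred to above is the standard inverse-variance weighted combination of two independent estimates. A brief worked sketch in Python (with purely illustrative numbers) makes the predicted precision gain explicit.

        # Worked sketch of the standard maximum-likelihood (inverse-variance
        # weighted) cue combination the abstract compares against; the numbers
        # below are illustrative only.
        def combine_estimates(mu_a, var_a, mu_v, var_v):
            w_a = var_v / (var_a + var_v)                # weight grows as the other cue gets noisier
            w_v = var_a / (var_a + var_v)
            mu = w_a * mu_a + w_v * mu_v
            var = (var_a * var_v) / (var_a + var_v)      # combined variance <= either cue alone
            return mu, var

        # Auditory estimate with variance 4.0, visual estimate with variance 1.0:
        mu, var = combine_estimates(mu_a=0.8, var_a=4.0, mu_v=0.2, var_v=1.0)
        print(mu, var)                                   # -> 0.32 0.8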

  3. Using Embedded Visual Coding to Support Contextualization of Historical Texts

    ERIC Educational Resources Information Center

    Baron, Christine

    2016-01-01

    This mixed-method study examines the think-aloud protocols of 48 randomly assigned undergraduate students to understand what effect embedding a visual coding system, based on reliable visual cues for establishing historical time period, would have on novice history students' ability to contextualize historic documents. Results indicate that using…

  4. Dual Coding in Children.

    ERIC Educational Resources Information Center

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  5. Verbal-spatial and visuospatial coding of power-space interactions.

    PubMed

    Dai, Qiang; Zhu, Lei

    2018-05-10

    A power-space interaction, the phenomenon that people respond faster to powerful words when they are placed higher in the visual field and faster to powerless words when they are placed lower in the visual field, has been repeatedly found. The dominant explanation of this power-space interaction is that it results from a tight correspondence between the representation of power and visual space (i.e., a visuospatial coding account). In the present study, we demonstrated that the interaction between power and space can also be based on verbal-spatial coding in the absence of any vertical spatial information. Additionally, verbal-spatial coding was dominant in driving the power-space interaction when verbal space was contrasted with visual space. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Frequency spectrum might act as communication code between retina and visual cortex I

    PubMed Central

    Yang, Xu; Gong, Bo; Lu, Jian-Wei

    2015-01-01

    AIM To explore changes in, and the possible communication relationship between, local potential signals recorded simultaneously from the retina and visual cortex I (V1). METHODS Fourteen C57BL/6J mice were measured with the pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and the fast Fourier transform was used to analyze the frequency components of those signals. RESULTS The amplitudes of the PERG and PVEP were measured at about 36.7 µV and 112.5 µV, respectively; the dominant frequencies of the PERG and PVEP, however, stayed unchanged, and neither signal showed second or other harmonic generation. CONCLUSION The results suggest that the retina encodes visual information in the form of a frequency spectrum and then transfers it to the primary visual cortex. The primary visual cortex accepts and deciphers the input visual information coded by the retina. The frequency spectrum may act as a communication code between the retina and V1. PMID:26682156

  7. Frequency spectrum might act as communication code between retina and visual cortex I.

    PubMed

    Yang, Xu; Gong, Bo; Lu, Jian-Wei

    2015-01-01

    To explore changes in, and the possible communication relationship between, local potential signals recorded simultaneously from the retina and visual cortex I (V1), fourteen C57BL/6J mice were measured with the pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and the fast Fourier transform was used to analyze the frequency components of those signals. The amplitudes of the PERG and PVEP were measured at about 36.7 µV and 112.5 µV, respectively; the dominant frequencies of the PERG and PVEP, however, stayed unchanged, and neither signal showed second or other harmonic generation. The results suggest that the retina encodes visual information in the form of a frequency spectrum and then transfers it to the primary visual cortex. The primary visual cortex accepts and deciphers the input visual information coded by the retina. The frequency spectrum may act as a communication code between the retina and V1.
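
    The Fourier analysis step described above (extracting the dominant frequency of an evoked-potential trace) can be sketched with NumPy. The synthetic signal parameters below are invented for demonstration and do not correspond to the recordings in the study.

        # Illustrative sketch: estimating the dominant frequency of an
        # evoked-potential trace with an FFT. Signal parameters are made up.
        import numpy as np

        fs = 1000.0                                      # sampling rate in Hz
        t = np.arange(0, 1.0, 1.0 / fs)
        signal = 40e-6 * np.sin(2 * np.pi * 4.0 * t)     # 4 Hz component, ~40 microvolts
        signal += 5e-6 * np.random.randn(t.size)         # measurement noise

        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
        dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
        print(f"dominant frequency: {dominant:.1f} Hz")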

  8. Using Rose and Compass for Authentication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, G

    2009-07-09

    Many recent non-proliferation software projects include a software authentication component. In this context, 'authentication' is defined as determining that a software package performs only its intended purpose and performs that purpose correctly and reliably over many years. In addition to visual inspection by knowledgeable computer scientists, automated tools are needed to highlight suspicious code constructs both to aid the visual inspection and to guide program development. While many commercial tools are available for portions of the authentication task, they are proprietary, and have limited extensibility. An open-source, extensible tool can be customized to the unique needs of each project. ROSE is an LLNL-developed robust source-to-source analysis and optimization infrastructure currently addressing large, million-line DOE applications in C, C++, and FORTRAN. It continues to be extended to support the automated analysis of binaries (x86, ARM, and PowerPC). We continue to extend ROSE to address a number of security-specific requirements and apply it to software authentication for non-proliferation projects. We will give an update on the status of our work.

  9. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-10-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. © The Author 2014. Published by Oxford University Press.

  10. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices.

    PubMed

    Woolgar, Alexandra; Williams, Mark A; Rich, Anina N

    2015-04-01

    Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Mars @ ASDC

    NASA Astrophysics Data System (ADS)

    Carraro, Francesco

    "Mars @ ASDC" is a project born with the goal of using the new web technologies to assist researches involved in the study of Mars. This project employs Mars map and javascript APIs provided by Google to visualize data acquired by space missions on the planet. So far, visualization of tracks acquired by MARSIS and regions observed by VIRTIS-Rosetta has been implemented. The main reason for the creation of this kind of tool is the difficulty in handling hundreds or thousands of acquisitions, like the ones from MARSIS, and the consequent difficulty in finding observations related to a particular region. This led to the development of a tool which allows to search for acquisitions either by defining the region of interest through a set of geometrical parameters or by manually selecting the region on the map through a few mouse clicks The system allows the visualization of tracks (acquired by MARSIS) or regions (acquired by VIRTIS-Rosetta) which intersect the user defined region. MARSIS tracks can be visualized both in Mercator and polar projections while the regions observed by VIRTIS can presently be visualized only in Mercator projection. The Mercator projection is the standard map provided by Google. The polar projections are provided by NASA and have been developed to be used in combination with APIs provided by Google The whole project has been developed following the "open source" philosophy: the client-side code which handles the functioning of the web page is written in javascript; the server-side code which executes the searches for tracks or regions is written in PHP and the DB which undergoes the system is MySQL.

  12. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

  13. A Cloud-based, Open-Source, Command-and-Control Software Paradigm for Space Situational Awareness (SSA)

    NASA Astrophysics Data System (ADS)

    Melton, R.; Thomas, J.

    With the rapid growth in the number of space actors, there has been a marked increase in the complexity and diversity of software systems utilized to support SSA target tracking, indication, warning, and collision avoidance. Historically, most SSA software has been constructed with "closed" proprietary code, which limits interoperability, inhibits the code transparency that some SSA customers need to develop domain expertise, and prevents the rapid injection of innovative concepts into these systems. Open-source aerospace software, a rapidly emerging, alternative trend in code development, is based on open collaboration, which has the potential to bring greater transparency, interoperability, flexibility, and reduced development costs. Open-source software is easily adaptable, geared to rapidly changing mission needs, and can generally be delivered at lower costs to meet mission requirements. This paper outlines Ball's COSMOS C2 system, a fully open-source, web-enabled, command-and-control software architecture which provides several unique capabilities to move the current legacy SSA software paradigm to an open source model that effectively enables pre- and post-launch asset command and control. Among the unique characteristics of COSMOS is the ease with which it can integrate with diverse hardware. This characteristic enables COSMOS to serve as the command-and-control platform for the full life-cycle development of SSA assets, from board test, to box test, to system integration and test, to on-orbit operations. The use of a modern scripting language, Ruby, also permits automated procedures to provide highly complex decision making for the tasking of SSA assets based on both telemetry data and data received from outside sources. Detailed logging enables quick anomaly detection and resolution. Integrated real-time and offline data graphing renders the visualization of both ground and on-orbit assets simple and straightforward.

  14. Visual-area coding technique (VACT): optical parallel implementation of fuzzy logic and its visualization with the digital-halftoning process

    NASA Astrophysics Data System (ADS)

    Konishi, Tsuyoshi; Tanida, Jun; Ichioka, Yoshiki

    1995-06-01

    A novel technique, the visual-area coding technique (VACT), for the optical implementation of fuzzy logic with the capability of visualization of the results is presented. This technique is based on the microfont method and is considered to be an instance of digitized analog optical computing. Huge amounts of data can be processed in fuzzy logic with the VACT. In addition, real-time visualization of the processed result can be accomplished.

  15. Math Description Engine Software Development Kit

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Smith, Stephanie L.; Dexter, Dan E.; Hodgson, Terry R.

    2010-01-01

    The Math Description Engine Software Development Kit (MDE SDK) can be used by software developers to make computer-rendered graphs more accessible to blind and visually-impaired users. The MDE SDK generates alternative graph descriptions in two forms: textual descriptions and non-verbal sound renderings, or sonification. It also enables display of an animated trace of a graph sonification on a visual graph component, with color and line-thickness options for users having low vision or color-related impairments. A set of accessible graphical user interface widgets is provided for operation by end users and for control of accessible graph displays. Version 1.0 of the MDE SDK generates text descriptions for 2D graphs commonly seen in math and science curriculum (and practice). The mathematically rich text descriptions can also serve as a virtual math and science assistant for blind and sighted users, making graphs more accessible for everyone. The MDE SDK has a simple application programming interface (API) that makes it easy for programmers and Web-site developers to make graphs accessible with just a few lines of code. The source code is written in Java for cross-platform compatibility and to take advantage of Java's built-in support for building accessible software application interfaces. Compiled-library and NASA Open Source versions are available with API documentation and Programmer's Guide at http://prime.jsc.nasa.gov.

  16. Azahar: a PyMOL plugin for construction, visualization and analysis of glycan molecules

    NASA Astrophysics Data System (ADS)

    Arroyuelo, Agustina; Vila, Jorge A.; Martin, Osvaldo A.

    2016-08-01

    Glycans are key molecules in many physiological and pathological processes. As with other molecules, like proteins, visualization of the 3D structures of glycans adds valuable information for understanding their biological function. Hence, here we introduce Azahar, a computing environment for the creation, visualization and analysis of glycan molecules. Azahar is implemented in Python and works as a plugin for the well-known PyMOL package (Schrodinger in The PyMOL molecular graphics system, version 1.3r1, 2010). Besides the already available visualization and analysis options provided by PyMOL, Azahar includes 3 cartoon-like representations and tools for 3D structure characterization, such as a conformational search using a Monte Carlo-with-minimization routine, as well as tools to analyse single glycans or trajectories/ensembles, including the calculation of the radius of gyration, Ramachandran plots and hydrogen bonds. Azahar is freely available to download from http://www.pymolwiki.org/index.php/Azahar and the source code is available at https://github.com/agustinaarroyuelo/Azahar.
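
    One of the listed analyses, the radius of gyration, can be sketched generically with NumPy as below. This is not Azahar's own implementation; the coordinates are invented and masses are optional.

        # Generic NumPy sketch (not Azahar code): mass-weighted radius of
        # gyration of a set of atomic coordinates.
        import numpy as np

        def radius_of_gyration(coords, masses=None):
            """coords: (N, 3) array of atom positions; masses: optional (N,) array."""
            coords = np.asarray(coords, dtype=float)
            if masses is None:
                masses = np.ones(len(coords))
            masses = np.asarray(masses, dtype=float)
            com = np.average(coords, axis=0, weights=masses)     # center of mass
            sq_dist = np.sum((coords - com) ** 2, axis=1)
            return np.sqrt(np.sum(masses * sq_dist) / np.sum(masses))

        atoms = np.random.rand(50, 3) * 10.0             # made-up coordinates in angstroms
        print(f"Rg = {radius_of_gyration(atoms):.2f} A")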

  17. OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts.

    PubMed

    Ravagli, Carlo; Pognan, Francois; Marc, Philippe

    2017-01-01

    The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. philippe.marc@novartis.com. © The Author 2016. Published by Oxford University Press.

  18. OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts

    PubMed Central

    Ravagli, Carlo; Pognan, Francois

    2017-01-01

    Summary: The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. Availability and implementation: The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. Contact: philippe.marc@novartis.com PMID:27605099

  19. RVA: A Plugin for ParaView 3.14

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-04

    RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.

  20. Game On, Science - How Video Game Technology May Help Biologists Tackle Visualization Challenges

    PubMed Central

    Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc

    2013-01-01

    The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961

  1. Game on, science - how video game technology may help biologists tackle visualization challenges.

    PubMed

    Lv, Zhihan; Tek, Alex; Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc

    2013-01-01

    The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/.

  2. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
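
    A minimal program in the style described above is sketched below: the objects are created once and then moved by a purely computational loop, with no explicit graphics code. The syntax follows the classic Visual module; details vary between VPython versions, so treat this as illustrative rather than definitive.

        # Minimal VPython-style sketch: a bouncing ball animated by a plain
        # computational loop. Syntax follows the classic Visual module.
        from visual import sphere, box, vector, rate, color

        floor = box(pos=vector(0, -5, 0), size=vector(10, 0.2, 10))
        ball = sphere(pos=vector(0, 4, 0), radius=0.5, color=color.red)
        velocity = vector(0, 0, 0)
        dt = 0.01

        while True:
            rate(100)                                    # cap the loop at ~100 iterations/s
            velocity.y += -9.8 * dt                      # gravity
            ball.pos += velocity * dt
            if ball.pos.y < floor.pos.y + ball.radius:
                velocity.y = abs(velocity.y)             # bounce off the floor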

  3. Delta: a new web-based 3D genome visualization and analysis platform.

    PubMed

    Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua

    2018-04-15

    Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated a plausible transitory interaction pattern in the locus. Experimental evidence was found to support this speculation by literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.

  4. Visualizing Gyrokinetic Turbulence in a Tokamak

    NASA Astrophysics Data System (ADS)

    Stantchev, George

    2005-10-01

    Multi-dimensional data output from gyrokinetic microturbulence codes are often difficult to visualize, in part due to the non-trivial geometry of the underlying grids, in part due to high irregularity of the relevant scalar field structures in turbulent regions. For instance, traditional isosurface extraction methods are likely to fail for the electrostatic potential field whose level sets may exhibit various geometric pathologies. To address these issues we develop an advanced interactive 3D gyrokinetic turbulence visualization framework which we apply in the study of microtearing instabilities calculated with GS2 in the MAST and NSTX geometries. In these simulations GS2 uses field-line-following coordinates such that the computational domain maps in physical space to a long, twisting flux tube with strong cross-sectional shear. Using statistical wavelet analysis we create a sparse multiple-scale volumetric representation of the relevant scalar fields, which we visualize via a variation of the so called splatting technique. To handle the problem of highly anisotropic flux tube configurations we adapt a geometry-driven surface illumination algorithm that places local light sources for effective feature-enhanced visualization.

  5. Message Into Medium: An Extension of the Dual Coding Hypothesis.

    ERIC Educational Resources Information Center

    Simpson, Timothy J.

    This paper examines the dual coding hypothesis, a model of the coding of visual and textual information, from the perspective of a mass media professional, such as a teacher, interested in accurately presenting both visual and textual material to a mass audience (i.e., students). It offers an extension to the theory, based upon the various skill…

  6. SplicePlot: a utility for visualizing splicing quantitative trait loci.

    PubMed

    Wu, Eric; Nance, Tracy; Montgomery, Stephen B

    2014-04-01

    RNA sequencing has provided unprecedented resolution of alternative splicing and splicing quantitative trait loci (sQTL). However, there are few tools available for visualizing the genotype-dependent effects of splicing at a population level. SplicePlot is a simple command line utility that produces intuitive visualization of sQTLs and their effects. SplicePlot takes mapped RNA sequencing reads in BAM format and genotype data in VCF format as input and outputs publication-quality Sashimi plots, hive plots and structure plots, enabling better investigation and understanding of the role of genetics on alternative splicing and transcript structure. Source code and detailed documentation are available at http://montgomerylab.stanford.edu/spliceplot/index.html under Resources and at Github. SplicePlot is implemented in Python and is supported on Linux and Mac OS. A VirtualBox virtual machine running Ubuntu with SplicePlot already installed is also available.

  7. chimeraviz: a tool for visualizing chimeric RNA.

    PubMed

    Lågstad, Stian; Zhao, Sen; Hoff, Andreas M; Johannessen, Bjarne; Lingjærde, Ole Christian; Skotheim, Rolf I

    2017-09-15

    Advances in high-throughput RNA sequencing have enabled more efficient detection of fusion transcripts, but the technology and associated software used for fusion detection from sequencing data often yield a high false discovery rate. Good prioritization of the results is important, and this can be helped by a visualization framework that automatically integrates RNA data with known genomic features. Here we present chimeraviz, a Bioconductor package that automates the creation of chimeric RNA visualizations. The package supports input from nine different fusion-finder tools: deFuse, EricScript, InFusion, JAFFA, FusionCatcher, FusionMap, PRADA, SOAPfuse and STAR-FUSION. chimeraviz is an R package available via Bioconductor (https://bioconductor.org/packages/release/bioc/html/chimeraviz.html) under Artistic-2.0. Source code and support are available at GitHub (https://github.com/stianlagstad/chimeraviz). rolf.i.skotheim@rr-research.no. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  8. PROVAT: a tool for Voronoi tessellation analysis of protein structures and complexes.

    PubMed

    Gore, Swanand P; Burke, David F; Blundell, Tom L

    2005-08-01

    Voronoi tessellation has proved to be a useful tool in protein structure analysis. We have developed PROVAT, a versatile public domain software that enables computation and visualization of Voronoi tessellations of proteins and protein complexes. It is a set of Python scripts that integrate freely available specialized software (Qhull, Pymol etc.) into a pipeline. The calculation component of the tool computes Voronoi tessellation of a given protein system in a way described by a user-supplied XML recipe and stores resulting neighbourhood information as text files with various styles. The Python pickle file generated in the process is used by the visualization component, a Pymol plug-in, that offers a GUI to explore the tessellation visually. PROVAT source code can be downloaded from http://raven.bioc.cam.ac.uk/~swanand/Provat1, which also provides a webserver for its calculation component, documentation and examples.
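
    As a conceptual illustration of the tessellation step (not PROVAT's own pipeline), the same computation can be sketched with SciPy's Voronoi wrapper, which also uses the Qhull library underneath; the coordinates below are invented.

        # Conceptual illustration (not PROVAT code): Voronoi tessellation of
        # atom coordinates via SciPy/Qhull, reading neighbour pairs off the ridges.
        import numpy as np
        from scipy.spatial import Voronoi

        coords = np.random.rand(30, 3) * 20.0            # made-up atom positions
        vor = Voronoi(coords)

        # Two points that share a Voronoi facet are tessellation neighbours.
        neighbours = {}
        for p, q in vor.ridge_points:
            neighbours.setdefault(p, set()).add(q)
            neighbours.setdefault(q, set()).add(p)

        print(f"atom 0 has {len(neighbours[0])} Voronoi neighbours")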

  9. Spatial Coding of Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Koustriava, Eleni; Kartasidou, Lefkothea

    2012-01-01

    The aim of this study is to examine the ability of children and adolescents with visual impairments to code and represent near space. Moreover, it examines the impact of the strategies they use and individual differences in their performance. A total of 30 individuals with visual impairments up to the age of 18 were given eight different object…

  10. A graphical user interface for RAId, a knowledge integrated proteomics analysis suite with accurate statistics.

    PubMed

    Joyce, Brendan; Lee, Danny; Rubio, Alex; Ogurtsov, Aleksey; Alves, Gelio; Yu, Yi-Kuo

    2018-03-15

    RAId is a software package that has been actively developed for the past 10 years for computationally and visually analyzing MS/MS data. Founded on rigorous statistical methods, RAId's core program computes accurate E-values for peptides and proteins identified during database searches. Making this robust tool readily accessible for the proteomics community by developing a graphical user interface (GUI) is our main goal here. We have constructed a graphical user interface to facilitate the use of RAId on users' local machines. Written in Java, RAId_GUI not only makes easy executions of RAId but also provides tools for data/spectra visualization, MS-product analysis, molecular isotopic distribution analysis, and graphing the retrieval versus the proportion of false discoveries. The results viewer displays and allows the users to download the analyses results. Both the knowledge-integrated organismal databases and the code package (containing source code, the graphical user interface, and a user manual) are available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/raid.html .

  11. WorldWide Telescope: A Newly Open Source Astronomy Visualization System

    NASA Astrophysics Data System (ADS)

    Fay, Jonathan; Roberts, Douglas A.

    2016-01-01

    After eight years of development by Microsoft Research, WorldWide Telescope (WWT) was made an open source project at the end of June 2015. WWT was motivated by the desire to put new surveys of objects, such as the Sloan Digital Sky Survey, in the context of the night sky. Development under Microsoft started with the creation of a Windows desktop client that is widely used in various education, outreach and research projects. Using this client, users can explore the data built into WWT as well as data that they load in. Beyond exploration, WWT can be used to create tours that present various datasets in a narrative format. In the past two years, the team developed a collection of web controls, including an HTML5 web client, which contains much of the functionality of the Windows desktop client. The project under Microsoft has deep connections with several user communities, such as education through the WWT Ambassadors program (http://wwtambassadors.org/) and with planetariums and museums such as the Adler Planetarium. WWT can also support research, including the use of WWT to visualize the Bones of the Milky Way and rich connections between WWT and the Astrophysics Data System (ADS, http://labs.adsabs.harvard.edu/adsabs/). One important new research connection is the use of WWT to create dynamic and potentially interactive supplements to journal articles, the first of which were created in 2015. WWT is now an open source, community-led project. The source code is available on GitHub (https://github.com/WorldWideTelescope). There is significant developer documentation on the website (http://worldwidetelescope.org/Developers/), and an extensive developer workshop (http://wwtworkshops.org/?tribe_events=wwt-developer-workshop) took place in the fall of 2015. Now that WWT is open source, anyone with an interest in the project can be a contributor. As important as help with coding, the project needs people interested in documentation, testing, training and other roles.

  12. Fisher Matrix Preloaded — FISHER4CAST

    NASA Astrophysics Data System (ADS)

    Bassett, Bruce A.; Fantaye, Yabebal; Hlozek, Renée; Kotze, Jacques

    The Fisher Matrix is the backbone of modern cosmological forecasting. We describe the Fisher4Cast software: a general-purpose, easy-to-use Fisher Matrix framework. It is open source, rigorously designed and tested, and includes a Graphical User Interface (GUI) with automated LaTeX file creation capability and point-and-click Fisher ellipse generation. Fisher4Cast was designed for ease of extension and, although written in Matlab, is easily portable to open-source alternatives such as Octave and Scilab. Here we use Fisher4Cast to present new 3D and 4D visualizations of the forecasting landscape and to investigate the effects of growth and curvature on future cosmological surveys. Early releases have been available since mid-2008. The current release of the code is Version 2.2, which is described here. For ease of reference a Quick Start guide and the code used to produce the figures in this paper are included, in the hope that it will be useful to the cosmology and wider scientific communities.
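
    The forecast underlying such a framework is the standard Fisher matrix F_ij = sum_k (dmu_k/dtheta_i)(dmu_k/dtheta_j)/sigma_k^2, whose inverse gives the forecast parameter covariance. The sketch below is a generic Python illustration of that calculation for a toy two-parameter observable; it is not the Matlab Fisher4Cast code, and the survey redshifts and errors are assumed values.

        # Generic Fisher-matrix sketch in Python (Fisher4Cast itself is Matlab):
        # forecast parameter errors for a toy observable mu(z; a, b) = a + b*z.
        import numpy as np

        z = np.linspace(0.1, 1.5, 15)          # hypothetical survey redshifts
        sigma = 0.05 * np.ones_like(z)         # assumed measurement errors

        # Partial derivatives of the observable with respect to parameters (a, b).
        dmu = np.vstack([np.ones_like(z), z])  # shape (2, N)

        # F_ij = sum_k dmu_i,k * dmu_j,k / sigma_k^2
        fisher = (dmu / sigma**2) @ dmu.T

        cov = np.linalg.inv(fisher)            # forecast parameter covariance
        errors = np.sqrt(np.diag(cov))         # 1-sigma marginalised errors
        print("Fisher matrix:\n", fisher)
        print("1-sigma errors on (a, b):", errors)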

  13. An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study

    PubMed Central

    Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph

    2014-01-01

    A brain-computer-interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with an LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false positive selections, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509
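
    One common way to implement the classification step described above is to correlate each buffered window against per-target templates averaged from the calibration trials. The Python sketch below illustrates that idea under assumed trial counts and sampling rate; it is a simplified stand-in, not the linear classifier used in the study.

        # Hedged sketch of template-based classification for a code-VEP BCI.
        # Not the authors' pipeline: it simply correlates a buffered ECoG window
        # against per-target templates averaged from calibration trials.
        import numpy as np

        def build_templates(calib_windows, calib_labels, n_targets):
            """Average calibration windows (trials x samples) per target."""
            return np.stack([calib_windows[calib_labels == t].mean(axis=0)
                             for t in range(n_targets)])

        def classify(window, templates):
            """Pick the target whose template correlates best with the window."""
            scores = [np.corrcoef(window, tmpl)[0, 1] for tmpl in templates]
            return int(np.argmax(scores)), scores

        # Hypothetical data: 200 calibration trials, 4 targets, ~3.15 s at ~256 Hz.
        rng = np.random.default_rng(0)
        n_trials, n_targets, n_samples = 200, 4, 806
        calib = rng.standard_normal((n_trials, n_samples))
        labels = rng.integers(0, n_targets, n_trials)

        templates = build_templates(calib, labels, n_targets)
        choice, scores = classify(rng.standard_normal(n_samples), templates)
        print("selected target:", choice)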

  14. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
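
    The idea can be made concrete with a toy example: using the (7,4) Hamming code, a length-7 source block is treated as an error pattern and only its 3-bit syndrome is stored; the decoder recovers the minimum-weight block with that syndrome, which is lossless whenever the block contains at most one 1. The Python sketch below, with an assumed very sparse source, illustrates this compression from 7 digits to 3.

        # Toy illustration of syndrome source coding with the (7,4) Hamming code:
        # a length-7 source block x is treated as an error pattern, and only its
        # 3-bit syndrome s = H x (mod 2) is stored. The decoder returns the
        # minimum-weight block with that syndrome, which recovers x exactly
        # whenever the block contains at most one 1 (a very sparse source).
        import itertools
        import numpy as np

        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])   # parity-check matrix of Hamming(7,4)

        def syndrome(x):
            return tuple(H @ x % 2)

        def decode(s):
            """Minimum-weight length-7 vector whose syndrome equals s."""
            best = None
            for bits in itertools.product([0, 1], repeat=7):
                x = np.array(bits)
                if syndrome(x) == s and (best is None or x.sum() < best.sum()):
                    best = x
            return best

        x = np.array([0, 0, 0, 0, 1, 0, 0])   # sparse source block
        s = syndrome(x)                       # 3 compressed digits for 7 source digits
        x_hat = decode(s)
        print("syndrome:", s, "-> decoded:", x_hat, "matches:", bool((x_hat == x).all()))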

  15. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    PubMed Central

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853

  16. Vapor-screen technique for flow visualization in the Langley Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Morris, O. A.; Corlett, W. A.; Wassum, D. L.; Babb, C. D.

    1985-01-01

    The vapor-screen technique for flow visualization, as developed for the Langley Unitary Plan Wind Tunnel, is described with evaluations of light sources and photographic equipment. Test parameters including dew point, pressure, and temperature were varied to determine optimum conditions for obtaining high-quality vapor-screen photographs. The investigation was conducted in the supersonic speed range for Mach numbers from 1.47 to 4.63 at model angles of attack up to 35 deg. Vapor-screen photographs illustrating various flow patterns are presented for several missile and aircraft configurations. Examples of vapor-screen results that have contributed to the understanding of complex flow fields and provided a basis for the development of theoretical codes are presented with reference to other research.

  17. Network of anatomical texts (NAnaTex), an open-source project for visualizing the interaction between anatomical terms.

    PubMed

    Momota, Ryusuke; Ohtsuka, Aiji

    2018-01-01

    Anatomy is the science and art of understanding the structure of the body and its components in relation to the functions of the whole-body system. Medicine is based on a deep understanding of anatomy, but quite a few introductory-level learners are overwhelmed by the sheer amount of anatomical terminology that must be understood, so they regard anatomy as a dull and dense subject. To help them learn anatomical terms in a more contextual way, we started a new open-source project, the Network of Anatomical Texts (NAnaTex), which visualizes relationships of body components by integrating text-based anatomical information using Cytoscape, a network visualization software platform. Here, we present a network of bones and muscles produced from literature descriptions. As this network is primarily text-based and does not require any programming knowledge, it is easy to implement new functions or provide extra information by making changes to the original text files. To facilitate collaborations, we deposited the source code files for the network into the GitHub repository ( https://github.com/ryusukemomota/nanatex ) so that anybody can participate in the evolution of the network and use it for their own non-profit purposes. This project should help not only introductory-level learners but also professional medical practitioners, who could use it as a quick reference.

  18. Acquired Codes of Meaning in Data Visualization and Infographics: Beyond Perceptual Primitives.

    PubMed

    Byrne, Lydia; Angus, Daniel; Wiles, Janet

    2016-01-01

    While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88% of the infographics and 71% of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.

  19. Generic Space Science Visualization in 2D/3D using SDDAS

    NASA Astrophysics Data System (ADS)

    Mukherjee, J.; Murphy, Z. B.; Gonzalez, C. A.; Muller, M.; Ybarra, S.

    2017-12-01

    The Southwest Data Display and Analysis System (SDDAS) is a flexible multi-mission / multi-instrument software system intended to support space physics data analysis, and has been in active development for over 20 years. For the Magnetospheric Multi-Scale (MMS), Juno, Cluster, and Mars Express missions, we have modified these generic tools for visualizing data in two and three dimensions. The SDDAS software is open source and makes use of various other open source packages, including VTK and Qwt. The software offers interactive plotting as well as Python and Lua modules to modify the data before plotting. In theory, by writing a Lua or Python module to read the data, any data could be used. Currently, the software can natively read data in IDFS, CEF, CDF, FITS, SEG-Y, ASCII, and XLS formats. We have integrated the software with other Python packages such as SPICE and SpacePy. Included with the visualization software is a database application and other utilities for managing data that can retrieve data from the Cluster Active Archive and the Space Physics Data Facility at Goddard, as well as other local archives. Line plots, spectrograms, geographic plots, volume plots, strip charts, etc. are just some of the types of plots one can generate with SDDAS. Furthermore, due to its design, output is not limited strictly to visualization, as SDDAS can also be used to generate stand-alone IDL or Python visualization code. Lastly, SDDAS has also been used successfully as a backend for several web-based analysis systems.

  20. Porcupine: A visual pipeline tool for neuroimaging analysis

    PubMed Central

    Snoek, Lukas; Knapen, Tomas

    2018-01-01

    The field of neuroimaging is rapidly adopting a more reproducible approach to data acquisition and analysis. Data structures and formats are being standardised and data analyses are getting more automated. However, as data analysis becomes more complicated, researchers often have to write longer analysis scripts, spanning different tools across multiple programming languages. This makes it more difficult to share or recreate code, reducing the reproducibility of the analysis. We present a tool, Porcupine, that constructs one’s analysis visually and automatically produces analysis code. The graphical representation improves understanding of the performed analysis, while retaining the flexibility of modifying the produced code manually to custom needs. Not only does Porcupine produce the analysis code, it also creates a shareable environment for running the code in the form of a Docker image. Together, this forms a reproducible way of constructing, visualising and sharing one’s analysis. Currently, Porcupine links to Nipype functionalities, which in turn accesses most standard neuroimaging analysis tools. Our goal is to release researchers from the constraints of specific implementation details, thereby freeing them to think about novel and creative ways to solve a given problem. Porcupine improves the overview researchers have of their processing pipelines, and facilitates both the development and communication of their work. This will reduce the threshold at which less expert users can generate reusable pipelines. With Porcupine, we bridge the gap between a conceptual and an implementational level of analysis and make it easier for researchers to create reproducible and shareable science. We provide a wide range of examples and documentation, as well as installer files for all platforms on our website: https://timvanmourik.github.io/Porcupine. Porcupine is free, open source, and released under the GNU General Public License v3.0. PMID:29746461

  1. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    NASA Astrophysics Data System (ADS)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced H I astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy-to-use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline-specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  2. mRNAs coding for neurotransmitter receptors and voltage-gated sodium channels in the adult rabbit visual cortex after monocular deafferentiation

    PubMed Central

    Nguyen, Quoc-Thang; Matute, Carlos; Miledi, Ricardo

    1998-01-01

    It has been postulated that, in the adult visual cortex, visual inputs modulate levels of mRNAs coding for neurotransmitter receptors in an activity-dependent manner. To investigate this possibility, we performed a monocular enucleation in adult rabbits and, 15 days later, collected their left and right visual cortices. Levels of mRNAs coding for voltage-activated sodium channels, and for receptors for kainate/α-amino-3-hydroxy-5-methylisoxazole-4-propionic acid (AMPA), N-methyl-d-aspartate (NMDA), γ-aminobutyric acid (GABA), and glycine were semiquantitatively estimated in the visual cortices ipsilateral and contralateral to the lesion by the Xenopus oocyte/voltage-clamp expression system. This technique also allowed us to study some of the pharmacological and physiological properties of the channels and receptors expressed in the oocytes. In cells injected with mRNA from left or right cortices of monocularly enucleated and control animals, the amplitudes of currents elicited by kainate or AMPA, which reflect the abundance of mRNAs coding for kainate and AMPA receptors, were similar. There was no difference in the sensitivity to kainate and in the voltage dependence of the kainate response. Responses mediated by NMDA, GABA, and glycine were unaffected by monocular enucleation. Sodium channel peak currents, activation, steady-state inactivation, and sensitivity to tetrodotoxin also remained unchanged after the enucleation. Our data show that mRNAs for major neurotransmitter receptors and ion channels in the adult rabbit visual cortex are not obviously modified by monocular deafferentiation. Thus, our results do not support the idea of a widespread dynamic modulation of mRNAs coding for receptors and ion channels by visual activity in the rabbit visual system. PMID:9501250

  3. HydroDesktop as a Community Designed and Developed Resource for Hydrologic Data Discovery and Analysis

    NASA Astrophysics Data System (ADS)

    Ames, D. P.

    2013-12-01

    As has been seen in other informatics fields, well-documented and appropriately licensed open source software tools have the potential to significantly increase both opportunities and motivation for inter-institutional science and technology collaboration. The CUAHSI HIS (and related HydroShare) projects have aimed to foster such activities in hydrology resulting in the development of many useful community software components including the HydroDesktop software application. HydroDesktop is an open source, GIS-based, scriptable software application for discovering data on the CUAHSI Hydrologic Information System and related resources. It includes a well-defined plugin architecture and interface to allow 3rd party developers to create extensions and add new functionality without requiring recompiling of the full source code. HydroDesktop is built in the C# programming language and uses the open source DotSpatial GIS engine for spatial data management. Capabilities include data search, discovery, download, visualization, and export. An extension that integrates the R programming language with HydroDesktop provides scripting and data automation capabilities and an OpenMI plugin provides the ability to link models. Current revision and updates to HydroDesktop include migration of core business logic to cross platform, scriptable Python code modules that can be executed in any operating system or linked into other software front-end applications.

  4. Visual Dysfunction Following Blast-Related Traumatic Brain Injury from the Battlefield

    DTIC Science & Technology

    2010-10-27

    sequelae following a TBI [12, 13]. The occurrence of TBI-related ocular and visual disorders is varied, depending on the diagnostic criteria... The measure, ocular/visual disorder, was indicated by the ICD-9-CM diagnostic codes for ‘disorders of the eye and adnexa’ (360.0–379.9) obtained from... [Table II: number and percentage of US service members in each ocular/visual disorder diagnostic category, by TBI status and ICD-9-CM code and category.]

  5. Influence of audio triggered emotional attention on video perception

    NASA Astrophysics Data System (ADS)

    Torres, Freddy; Kalva, Hari

    2014-02-01

    Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when the video was presented with the audio information. The results reported are statistically significant with p=0.024.

  6. Visual–Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey

    PubMed Central

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P.; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-01-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. PMID:25491118

  7. Preparation in and Use of the Nemeth Braille Code for Mathematics by Teachers of Students with Visual Impairments

    ERIC Educational Resources Information Center

    Rosenblum, L. Penny; Amato, Sheila

    2004-01-01

    This study examined the preparation in and use of the Nemeth braille code by 135 teachers of students with visual impairments. Almost all the teachers had taken at least one course in the Nemeth code as part of their university preparation. In their current jobs, they prepared a variety of materials, primarily basic operations, word problems,…

  8. FPV: fast protein visualization using Java 3D.

    PubMed

    Can, Tolga; Wang, Yujun; Wang, Yuan-Fang; Su, Jianwen

    2003-05-22

    Many tools have been developed to visualize protein structures. Tools that are based on Java 3D(TM) are compatible across different systems and can be run remotely through web browsers. However, using Java 3D for visualization has some performance issues. The primary concerns about molecular visualization tools based on Java 3D are that they are slow in terms of interaction speed and unable to load large molecules. This behavior is especially apparent when the number of atoms to be displayed is huge, or when several proteins are to be displayed simultaneously for comparison. In this paper we present techniques for organizing a Java 3D scene graph to tackle these problems. We have developed a protein visualization system based on Java 3D and these techniques. We demonstrate the effectiveness of the proposed method by comparing the visualization component of our system with two other Java 3D-based molecular visualization tools. In particular, for van der Waals display mode, with the efficient organization of the scene graph, we could achieve up to eight times improvement in rendering speed and could load molecules three times as large as the previous systems could. FPV is freely available with source code at the following URL: http://www.cs.ucsb.edu/~tcan/fpv/

  9. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for interactive displaying and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are the graphical editing of nodes and fast adjusting of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X and provides a version controlled source code repository for community development (https://sourceforge.net/projects/pray-plot-rayinvr/).

  10. Strengthening Software Authentication with the ROSE Software Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, G

    2006-06-15

    Many recent nonproliferation and arms control software projects include a software authentication regime. These include U.S. Government-sponsored projects both in the United States and in the Russian Federation (RF). This trend toward requiring software authentication is only accelerating. Demonstrating assurance that software performs as expected without hidden "backdoors" is crucial to a project's success. In this context, "authentication" is defined as determining that a software package performs only its intended purpose and performs said purpose correctly and reliably over the planned duration of an agreement. In addition to visual inspections by knowledgeable computer scientists, automated tools are needed to highlight suspicious code constructs, both to aid visual inspection and to guide program development. While many commercial tools are available for portions of the authentication task, they are proprietary and not extensible. An open-source, extensible tool can be customized to the unique needs of each project (projects can have both common and custom rules to detect flaws and security holes). Any such extensible tool has to be based on a complete language compiler. ROSE is precisely such a compiler infrastructure developed within the Department of Energy (DOE) and targeted at the optimization of scientific applications and user-defined libraries within large-scale applications (typically applications of a million lines of code). ROSE is a robust, source-to-source analysis and optimization infrastructure currently addressing large, million-line DOE applications in C and C++ (handling the full C, C99, C++ languages and with current collaborations to support Fortran90). We propose to extend ROSE to address a number of security-specific requirements, and apply it to software authentication for nonproliferation and arms control projects.

  11. Python-Assisted MODFLOW Application and Code Development

    NASA Astrophysics Data System (ADS)

    Langevin, C.

    2013-12-01

    The U.S. Geological Survey (USGS) has a long history of developing and maintaining free, open-source software for hydrological investigations. The MODFLOW program is one of the most popular hydrologic simulation programs released by the USGS, and it is considered to be the most widely used groundwater flow simulation code. MODFLOW was written using a modular design and a procedural FORTRAN style, which resulted in code that could be understood, modified, and enhanced by many hydrologists. The code is fast, and because it uses standard FORTRAN it can be run on most operating systems. Most MODFLOW users rely on proprietary graphical user interfaces for constructing models and viewing model results. Some recent efforts, however, have focused on construction of MODFLOW models using open-source Python scripts. Customizable Python packages, such as FloPy (https://code.google.com/p/flopy), can be used to generate input files, read simulation results, and visualize results in two and three dimensions. Automating this sequence of steps leads to models that can be reproduced directly from original data and rediscretized in space and time. Python is also being used in the development and testing of new MODFLOW functionality. New packages and numerical formulations can be quickly prototyped and tested first with Python programs before implementation in MODFLOW. This is made possible by the flexible object-oriented design capabilities available in Python, the ability to call FORTRAN code from Python, and the ease with which linear systems of equations can be solved using SciPy, for example. Once new features are added to MODFLOW, Python can then be used to automate comprehensive regression testing and ensure reliability and accuracy of new versions prior to release.
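
    A minimal FloPy-style sketch of the scripted build-write-run workflow described above is given below. The class names follow the classic flopy.modflow API, but argument details vary across FloPy versions, and the executable name is an assumption; treat it as an outline rather than a tested model.

        # Minimal FloPy-style sketch of scripted MODFLOW model construction.
        # Class names follow the classic flopy.modflow API; argument details vary
        # across FloPy versions, and the executable name is an assumption.
        import flopy

        m = flopy.modflow.Modflow("demo", exe_name="mf2005")        # model container
        dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10,
                                       delr=100.0, delc=100.0,
                                       top=10.0, botm=0.0)           # discretisation
        bas = flopy.modflow.ModflowBas(m, ibound=1, strt=10.0)       # basic package
        lpf = flopy.modflow.ModflowLpf(m, hk=1.0)                    # layer properties
        pcg = flopy.modflow.ModflowPcg(m)                            # solver
        oc = flopy.modflow.ModflowOc(m)                              # output control

        m.write_input()             # generate the MODFLOW input files
        success, _ = m.run_model()  # run MODFLOW (requires mf2005 on the PATH)
        print("converged:", success)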

  12. Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David

    2017-09-01

    This paper compares the coding efficiency performance on 360 videos of three software codecs: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC Reference Software HM; and (c) the JVET JEM Reference SW. Note that 360 video is especially challenging content, in that one codes the full resolution globally but typically views it locally (in a viewport), which magnifies errors. These are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open-source x265 HEVC codec. Objective and visual evidence is provided.
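
    For readers unfamiliar with the spherical metrics mentioned, the sketch below implements the commonly cited WS-PSNR idea for equirectangular frames: per-pixel squared errors are weighted by the cosine of latitude so that oversampled polar rows do not dominate. It is an illustrative Python version, not the JVET reference implementation.

        # Sketch of WS-PSNR for equirectangular (ERP) frames: squared errors are
        # weighted by cos(latitude) so that oversampled polar rows do not dominate.
        # This follows the commonly cited definition, not the reference software.
        import numpy as np

        def ws_psnr(ref, test, max_val=255.0):
            h, w = ref.shape
            rows = np.arange(h)
            weights = np.cos((rows + 0.5 - h / 2) * np.pi / h)   # per-row weight
            wmap = np.tile(weights[:, None], (1, w))
            diff2 = (ref.astype(float) - test.astype(float)) ** 2
            wmse = np.sum(wmap * diff2) / np.sum(wmap)
            return 10 * np.log10(max_val ** 2 / wmse)

        ref = np.random.randint(0, 256, (180, 360))
        test = np.clip(ref + np.random.randint(-3, 4, ref.shape), 0, 255)
        print("WS-PSNR:", round(ws_psnr(ref, test), 2), "dB")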

  13. An open science approach to modeling and visualizing ...

    EPA Pesticide Factsheets

    It is expected that cyanobacteria blooms will increase in frequency, duration, and severity as inputs of nutrients increase and the impacts of climate change are realized. Partly in response to this, federal, state, and local entities have ramped up efforts to better understand blooms, which has resulted in new life for old datasets, new monitoring programs, and novel uses for non-traditional sources of data. To fully benefit from these datasets, it is also imperative that the full body of work including data, code, and manuscripts be openly available (i.e., open science). This presentation will provide several examples of our work, which occurs at the intersection of open science and research on cyanobacteria blooms in lakes and ponds. In particular we will discuss 1) why open science is particularly important for environmental human health issues; 2) the lakemorpho and elevatr R packages and how we use those to model lake morphometry; 3) Shiny server applications to visualize data collected as part of the Cyanobacteria Monitoring Collaborative; and 4) distribution of our research and models via open access publications and as R packages on GitHub. Modelling and visualizing information on cyanobacteria blooms is important as it provides estimates of the extent of potential problems associated with these blooms. Furthermore, conducting this work in the open allows others to access our code, data, and results. In turn, this allows for a greater impact because the data, code, and results remain available for others to build upon.

  14. An inexpensive Arduino-based LED stimulator system for vision research.

    PubMed

    Teikari, Petteri; Najjar, Raymond P; Malkki, Hemi; Knoblauch, Kenneth; Dumortier, Dominique; Gronfier, Claude; Cooper, Howard M

    2012-11-15

    Light emitting diodes (LEDs) are being used increasingly as light sources in life sciences applications such as vision research, fluorescence microscopy and brain-computer interfacing. Here we present an inexpensive but effective visual stimulator based on LEDs and the open-source Arduino microcontroller prototyping platform. The main design goal of our system was to use off-the-shelf and open-source components as much as possible, and to reduce design complexity so that the system can be used by end-users without advanced electronics skills. The core of the system is a USB-connected Arduino microcontroller platform, designed initially with a specific emphasis on ease of use in creating interactive physical computing environments. The pulse-width modulation (PWM) signal of the Arduino was used to drive the LEDs, allowing linear light intensity control. The visual stimulator was demonstrated in applications such as murine pupillometry, rodent models for cognitive research, and heterochromatic flicker photometry in human psychophysics. These examples illustrate some of the possible applications that can be easily implemented and that are advantageous for students, educational purposes and universities with limited resources. The LED stimulator system was developed as an open-source project. The software interface was developed in Python, with simplified examples provided for Matlab and LabVIEW. Source code and hardware information are distributed under the GNU General Public Licence (GPL, version 3). Copyright © 2012 Elsevier B.V. All rights reserved.
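
    Host-side control of such a stimulator typically amounts to writing PWM duty cycles to the microcontroller over USB serial. The Python sketch below shows that pattern with pyserial; the one-byte command format and port name are assumptions for illustration, not the protocol distributed with the published system.

        # Host-side sketch for driving an Arduino LED stimulator over USB serial.
        # The single-byte-per-update command format and the port name are
        # assumptions for illustration, not the protocol used in the paper.
        import time
        import serial  # pyserial

        PORT = "/dev/ttyACM0"      # adjust to your system (e.g. "COM3" on Windows)

        def set_intensity(ser, duty):
            """Send an 8-bit PWM duty cycle (0-255) to the microcontroller."""
            ser.write(bytes([max(0, min(255, int(duty)))]))

        with serial.Serial(PORT, 115200, timeout=1) as ser:
            time.sleep(2.0)                      # allow the Arduino to reset
            for duty in (0, 64, 128, 255, 0):    # simple staircase stimulus
                set_intensity(ser, duty)
                time.sleep(0.5)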

  15. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC either consume more bitrate or suffer from annoying artifacts in the interpolated frames. In this paper, a novel framework integrating FRUC and High Efficiency Video Coding (HEVC) is proposed based on rate-distortion optimization, so that the interpolated frames can be reconstructed at the encoder side at low bitrate cost and with high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded in the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem in which both the coding bitrate consumption and the visual quality are taken into account. Because the original frames are absent, the distortion model for interpolated frames is established from the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework achieves a 21%-42% reduction in BD-rate (BDBR) compared with the traditional approach of FRUC cascaded with coding.
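
    The decision at the heart of such a framework is the Lagrangian rate-distortion comparison J = D + lambda*R between coding a frame normally and signalling it for interpolation. The toy Python sketch below illustrates that comparison with made-up distortion and rate numbers and a commonly used HEVC-style lambda rule of thumb; the paper's actual distortion model additionally uses motion-vector reliability.

        # Toy sketch of the Lagrangian rate-distortion decision underlying such a
        # framework: for each candidate frame, choose the mode with the smaller
        # cost J = D + lambda * R. All numbers are illustrative, not from the paper.
        def rd_cost(distortion, rate_bits, lam):
            return distortion + lam * rate_bits

        lam = 0.85 * (2 ** ((32 - 12) / 3.0))     # HEVC-style lambda for QP = 32

        candidates = {
            "code frame normally":  {"D": 1.2e6, "R": 9000},   # SSD, bits
            "interpolate via FRUC": {"D": 3.5e6, "R": 400},    # estimated distortion
        }

        best = min(candidates, key=lambda k: rd_cost(candidates[k]["D"],
                                                     candidates[k]["R"], lam))
        for name, c in candidates.items():
            print(f"{name}: J = {rd_cost(c['D'], c['R'], lam):.1f}")
        print("selected mode:", best)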

  16. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.

  17. Impaired letter-string processing in developmental dyslexia: what visual-to-phonology code mapping disorder?

    PubMed

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-05-01

    Poor parallel letter-string processing in developmental dyslexia was taken as evidence of poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In particular, report of poor letter/digit-string processing but preserved symbol-string processing was viewed as evidence of poor visual-to-phonology code mapping, in line with the phonological theory of developmental dyslexia. We assessed here the visual-to-phonological-code mapping disorder hypothesis. In Experiment 1, letter-string, digit-string and colour-string processing was assessed to disentangle a phonological versus visual familiarity account of the letter/digit versus symbol dissociation. Against a visual-to-phonological-code mapping disorder but in support of a familiarity account, results showed poor letter/digit-string processing but preserved colour-string processing in dyslexic children. In Experiment 2, two tasks of letter-string report were used, one of which was performed simultaneously to a high-taxing phonological task. Results show that dyslexic children are similarly impaired in letter-string report whether a concurrent phonological task is simultaneously performed or not. Taken together, these results provide strong evidence against a phonological account of poor letter-string processing in developmental dyslexia. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Association rule mining on grid monitoring data to detect error sources

    NASA Astrophysics Data System (ADS)

    Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin

    2010-04-01

    Error handling is a crucial task in an infrastructure as complex as a grid. Several monitoring tools are in place that report failing grid jobs, including their exit codes. However, the exit codes do not always denote the actual fault that caused the job failure. Human time and knowledge are required to manually trace errors back to the real underlying fault. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the behavior of grid components, taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically, and this information, expressed as association rules, is visualized in a web interface. This work decreases the time needed for fault recovery and improves the grid's reliability.
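
    The mining step can be pictured with a toy computation of support and confidence for rules of the form {job attribute} -> {failed} over a handful of hypothetical monitoring records, as in the pure-Python sketch below; the real system operates on large-scale grid job data.

        # Toy illustration of mining association rules from grid job records:
        # compute support and confidence for rules {attribute} -> {failed}.
        jobs = [  # hypothetical monitoring records
            {"site:A", "exit:1", "failed"},
            {"site:A", "exit:1", "failed"},
            {"site:B", "exit:0"},
            {"site:A", "exit:0"},
            {"site:B", "exit:1", "failed"},
        ]

        def support(itemset):
            """Fraction of records containing every item in the itemset."""
            return sum(itemset <= job for job in jobs) / len(jobs)

        for item in ("site:A", "site:B", "exit:1"):
            antecedent = {item}
            rule_support = support(antecedent | {"failed"})
            confidence = rule_support / support(antecedent)
            print(f"{item} -> failed  support={rule_support:.2f}  confidence={confidence:.2f}")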

  19. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    PubMed

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-03-01

    This paper presents an algorithm to calibrate the center of rotation for X-ray tomography using a machine learning approach, the convolutional neural network (CNN). The algorithm shows excellent accuracy in evaluations on synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNNs also have great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
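
    As a generic illustration (not the xlearn code), the Keras sketch below defines a small CNN that scores whether a slice reconstructed with a candidate rotation centre looks artifact-free; scanning candidate centres and keeping the best-scoring one yields a calibration. The patch size, architecture and training data are assumptions.

        # Generic sketch (not the xlearn code) of a small CNN that scores whether a
        # slice reconstructed with a candidate rotation centre looks artifact-free.
        import numpy as np
        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # P(centre is correct)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

        # Hypothetical training data: patches from slices reconstructed with correct
        # (label 1) and shifted (label 0) rotation centres.
        x = np.random.rand(32, 64, 64, 1).astype("float32")
        y = np.random.randint(0, 2, 32)
        model.fit(x, y, epochs=1, verbose=0)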

  20. Television News Without Pictures?

    ERIC Educational Resources Information Center

    Graber, Doris A.

    1987-01-01

    Describes "gestalt" coding procedures that concentrate on the meanings conveyed by audio-visual messages rather than on coding individual pictorial elements shown in a news story. Discusses the totality of meaning that results from the interaction of verbal and visual story elements, external settings, and the decoding proclivities of…

  1. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres

    PubMed Central

    Ince, Robin A. A.; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J.; Rousselet, Guillaume A.; Schyns, Philippe G.

    2016-01-01

    A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. PMID:27550865

  2. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) has formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil) which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  3. CircularLogo: A lightweight web application to visualize intra-motif dependencies.

    PubMed

    Ye, Zhenqing; Ma, Tao; Kalmbach, Michael T; Dasari, Surendra; Kocher, Jean-Pierre A; Wang, Liguo

    2017-05-22

    The sequence logo has been widely used to represent DNA or RNA motifs for more than three decades. Despite its intelligibility and intuitiveness, the traditional sequence logo is unable to display the intra-motif dependencies and therefore is insufficient to fully characterize nucleotide motifs. Many methods have been developed to quantify the intra-motif dependencies, but fewer tools are available for visualization. We developed CircularLogo, a web-based interactive application, which is able to not only visualize the position-specific nucleotide consensus and diversity but also display the intra-motif dependencies. Applying CircularLogo to HNF6 binding sites and tRNA sequences demonstrated its ability to show intra-motif dependencies and intuitively reveal biomolecular structure. CircularLogo is implemented in JavaScript and Python based on the Django web framework. The program's source code and user's manual are freely available at http://circularlogo.sourceforge.net . CircularLogo web server can be accessed from http://bioinformaticstools.mayo.edu/circularlogo/index.html . CircularLogo is an innovative web application that is specifically designed to visualize and interactively explore intra-motif dependencies.
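
    One standard way to quantify the intra-motif dependencies mentioned above is the mutual information between pairs of positions in aligned binding-site sequences, as in the Python sketch below with a toy alignment; this is an illustrative measure and not necessarily the exact statistic CircularLogo displays.

        # Sketch of quantifying intra-motif dependency as the mutual information
        # between pairs of positions in a set of aligned binding-site sequences.
        from collections import Counter
        from itertools import combinations
        from math import log2

        sites = ["ATGCA", "ATGCT", "TTGCA", "ATGCT", "ATGAA", "TTGCT"]  # toy alignment

        def mutual_information(col_i, col_j):
            n = len(col_i)
            pi, pj = Counter(col_i), Counter(col_j)
            pij = Counter(zip(col_i, col_j))
            return sum((c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
                       for (a, b), c in pij.items())

        columns = list(zip(*sites))
        for i, j in combinations(range(len(columns)), 2):
            mi = mutual_information(columns[i], columns[j])
            print(f"positions {i + 1} and {j + 1}: MI = {mi:.2f} bits")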

  4. Visualization: a tool for enhancing students' concept images of basic object-oriented concepts

    NASA Astrophysics Data System (ADS)

    Cetin, Ibrahim

    2013-03-01

    The purpose of this study was twofold: to investigate students' concept images about class, object, and their relationship and to help them enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate his/her concept images, the researcher developed a survey including open-ended questions, which was administered to the participants. Follow-up interviews with 12 randomly selected students were conducted to explore their answers to the survey in depth. The results of the first part of the research were utilized to construct visualization scenarios. The students used these scenarios to develop animations using Flash software. The study found that most of the students experienced difficulties in learning object-oriented notions. Overdependence on code-writing practice and examples and incorrectly learned analogies were determined to be the sources of their difficulties. Moreover, visualization was found to be a promising approach in facilitating students' concept images of basic object-oriented notions. The results of this study have implications for researchers and practitioners when designing programming instruction.

  5. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
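
    Of the quantities reported through such interfaces, GC skew is easily stated: skew = (G - C) / (G + C) over a sliding window along the sequence. The Python sketch below computes it for a hypothetical fragment; in practice the sequence would be read from a FASTA file.

        # Sketch of the GC-skew computation reported by such interfaces:
        # skew = (G - C) / (G + C) evaluated over a sliding window.
        def gc_skew(seq, window=1000, step=500):
            values = []
            for start in range(0, max(1, len(seq) - window + 1), step):
                chunk = seq[start:start + window]
                g, c = chunk.count("G"), chunk.count("C")
                values.append((start, (g - c) / (g + c) if (g + c) else 0.0))
            return values

        # Hypothetical fragment; real input would come from a FASTA file.
        sequence = "ATGCGC" * 300 + "GGGGAA" * 300
        for pos, skew in gc_skew(sequence, window=600, step=600):
            print(f"{pos:>5}: GC skew = {skew:+.3f}")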

  6. An integrated network visualization framework towards metabolic engineering applications.

    PubMed

    Noronha, Alberto; Vilaça, Paulo; Rocha, Miguel

    2014-12-30

    Over the last years, several methods for the phenotype simulation of microorganisms under specified genetic and environmental conditions have been proposed in the context of Metabolic Engineering (ME). These methods provided insight into the functioning of microbial metabolism and played a key role in the design of genetic modifications that can lead to strains of industrial interest. On the other hand, in the context of Systems Biology research, biological network visualization has reinforced its role as a core tool in understanding biological processes. However, it has scarcely been used to foster ME-related methods, in spite of the acknowledged potential. In this work, open-source software that aims to fill the gap between ME and metabolic network visualization is proposed, in the form of a plugin to the OptFlux ME platform. The framework is based on an abstract layer, where the network is represented as a bipartite graph containing minimal information about the underlying entities and their desired relative placement. The framework provides input/output support for networks specified in standard formats, such as XGMML, SBGN or SBML, providing a connection to genome-scale metabolic models. A user interface makes it possible to edit, manipulate and query nodes in the network, providing tools to visualize diverse effects, including visual filters and aspect changing (e.g. colors, shapes and sizes). These tools are particularly interesting for ME, since they allow overlaying phenotype simulation results or elementary flux modes over the networks. The framework and its source code are freely available, together with documentation and other resources, and are illustrated with well-documented case studies.
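
    The bipartite abstraction described above can be sketched independently of the Java plugin: metabolites and reactions form the two node classes, edges connect substrates to reactions and reactions to products, and simulation results are overlaid as node attributes. The networkx example below illustrates this with a toy pathway and assumed flux values.

        # Sketch of the bipartite metabolite/reaction graph abstraction described,
        # built with networkx. The plugin itself is Java; this is an independent
        # illustration of the same representation with a toy pathway.
        import networkx as nx

        G = nx.DiGraph()

        # Two node classes: metabolites and reactions (the bipartite layers).
        for m in ["glucose", "g6p", "f6p"]:
            G.add_node(m, kind="metabolite")
        for r in ["HEX1", "PGI"]:
            G.add_node(r, kind="reaction")

        # Edges connect substrates -> reaction -> products.
        G.add_edge("glucose", "HEX1")
        G.add_edge("HEX1", "g6p")
        G.add_edge("g6p", "PGI")
        G.add_edge("PGI", "f6p")

        # Overlay a phenotype-simulation result (e.g. flux values) as node attributes,
        # which a viewer could map onto colours or edge widths.
        fluxes = {"HEX1": 1.0, "PGI": 0.8}
        nx.set_node_attributes(G, fluxes, name="flux")

        for node, data in G.nodes(data=True):
            print(node, data)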

  7. In Vivo Differentiation of Uric Acid Versus Non-Uric Acid Urinary Calculi With Third-Generation Dual-Source Dual-Energy CT at Reduced Radiation Dose.

    PubMed

    Franken, Axelle; Gevenois, Pierre Alain; Muylem, Alain Van; Howarth, Nigel; Keyzer, Caroline

    2018-02-01

    The objective of our study was to evaluate in vivo urinary calculus characterization with third-generation dual-source dual-energy CT (DECT) at reduced versus standard radiation dose. One hundred fifty-three patients requiring unenhanced CT for suspected or known urolithiasis were prospectively included in our study. They underwent two acquisitions at reduced-dose CT (90 kV and 50 mAs_ref; Sn150 kV and 31 mAs_ref, where Sn denotes the interposition of a tin filter in the high-energy beam) and standard-dose CT (90 kV and 50 mAs_ref; Sn150 kV and 94 mAs_ref). One radiologist interpreted the reduced-dose examinations before the standard-dose examinations during the same session. Among 103 patients (23 women, 80 men; mean age ± SD, 50 ± 15 years; age range, 18-82 years) with urolithiasis, dedicated DECT software measured the maximal diameter and CT numbers, calculated the DECT number ratio, and labeled with a color code each calculus visualized by the radiologist as uric acid (UA) or non-UA. Volume CT dose index (CTDI_vol) and dose-length product (DLP) were recorded. The radiologist visualized 279 calculi on standard-dose CT and 262 on reduced-dose CT; 17 calculi were missed on reduced-dose CT, all of which were ≤ 3 mm. Among the 262 calculi visualized at both doses, the CT number ratio was obtained with the software for 227 calculi and was not different between the doses (p = 0.093). Among these 262 calculi, 197 were labeled at both doses; 194 of the 197 labeled calculi were labeled with the same color code. Among the 65 remaining calculi, 48 and 61 (all ≤ 5 mm) were not labeled at standard-dose and reduced-dose CT (p = 0.005), respectively. At reduced-dose CT, the mean CTDI_vol was 2.67 mGy and the mean DLP was 102.2 mGy × cm. With third-generation dual-source DECT, a larger proportion of calculi ≤ 5 mm are not characterized as UA or non-UA at a reduced dose.

  8. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum difference between two visual stimuli that can be detected. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed images/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrates by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate differences in perception in psychophysical experiments. Assessors are asked to compare coded image/video pairs and determine whether or not they are of the same quality. A bisection procedure is designed to find the JND locations, reducing the number of comparisons required over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results for the image and video JND tests.
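
    A minimal sketch of the bisection idea is given below: quality levels are ordered from the anchor downwards, and each forced-choice answer halves the remaining search interval, so the JND level is found in roughly log2(N) comparisons. The callback is simulated here; in a real test it would record an assessor's judgement.

        # Sketch of a bisection search for the JND point over ordered quality levels
        # (e.g. bitrates sorted from best to worst), using a two-alternative
        # forced-choice callback. The callback is simulated; in a real test it would
        # record an assessor's "same / different" judgement against the anchor.
        def find_jnd(levels, looks_same):
            """Return the first level judged noticeably different from the anchor."""
            lo, hi = 0, len(levels) - 1          # assume levels[0] is the anchor quality
            while lo < hi:
                mid = (lo + hi) // 2
                if looks_same(levels[mid]):
                    lo = mid + 1                 # still indistinguishable: try lower quality
                else:
                    hi = mid                     # already distinguishable: try higher quality
            return levels[lo]

        quality_levels = list(range(0, 16))       # 0 = anchor, 15 = most degraded
        simulated_threshold = 6                   # hidden "true" JND for the simulation
        jnd = find_jnd(quality_levels, looks_same=lambda q: q < simulated_threshold)
        print("estimated JND level:", jnd)        # found in ~log2(16) comparisons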

  9. Beyond Visual Communication Technology.

    ERIC Educational Resources Information Center

    Bell, Thomas P.

    1993-01-01

    Discusses various aspects of visual communication--light, semiotics, codes, photography, typography, and visual literacy--within the context of the communications technology area of technology education. (SK)

  10. A method for closed-loop presentation of sensory stimuli conditional on the internal brain-state of awake animals

    PubMed Central

    Rutishauser, Ueli; Kotowicz, Andreas; Laurent, Gilles

    2013-01-01

    Brain activity often consists of interactions between internal—or on-going—and external—or sensory—activity streams, resulting in complex, distributed patterns of neural activity. Investigation of such interactions could benefit from closed-loop experimental protocols in which one stream can be controlled depending on the state of the other. We describe here methods to present rapid and precisely timed visual stimuli to awake animals, conditional on features of the animal’s on-going brain state; those features are the presence, power and phase of oscillations in local field potentials (LFP). The system can process up to 64 channels in real time. We quantified its performance using simulations, synthetic data and animal experiments (chronic recordings in the dorsal cortex of awake turtles). The delay from detection of an oscillation to the onset of a visual stimulus on an LCD screen was 47.5 ms and visual-stimulus onset could be locked to the phase of ongoing oscillations at any frequency ≤40 Hz. Our software’s architecture is flexible, allowing on-the-fly modifications by experimenters and the addition of new closed-loop control and analysis components through plugins. The source code of our system “StimOMatic” is available freely as open-source. PMID:23473800
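
    As a hedged, offline illustration of the detection step that such a closed-loop system performs in real time, the following Python sketch estimates the band power and instantaneous phase of a toy LFP trace with the Hilbert transform and flags the samples on which a stimulus would be triggered; the band limits, threshold, and target phase are arbitrary assumptions, and the code is not taken from StimOMatic.

        # Illustrative (offline) sketch of the detection step: estimate the power and
        # instantaneous phase of an LFP oscillation and decide whether to trigger a
        # stimulus. The band, threshold and trigger phase are arbitrary assumptions.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 1000.0                                   # sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        lfp = np.sin(2 * np.pi * 20 * t) + 0.3 * np.random.randn(t.size)  # toy 20 Hz LFP

        b, a = butter(4, [15 / (fs / 2), 25 / (fs / 2)], btype="band")    # 15-25 Hz band
        band = filtfilt(b, a, lfp)
        analytic = hilbert(band)
        power = np.abs(analytic) ** 2
        phase = np.angle(analytic)                    # instantaneous phase in radians

        power_threshold = 0.2
        target_phase = 0.0                            # e.g. trigger near the peak
        trigger = (power > power_threshold) & (np.abs(phase - target_phase) < 0.1)
        print("would trigger on", trigger.sum(), "of", trigger.size, "samples")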

  11. The General Mission Analysis Tool (GMAT): Current Features And Adding Custom Functionality

    NASA Technical Reports Server (NTRS)

    Conway, Darrel J.; Hughes, Steven P.

    2010-01-01

    The General Mission Analysis Tool (GMAT) is a software system for trajectory optimization, mission analysis, trajectory estimation, and prediction developed by NASA, the Air Force Research Lab, and private industry. GMAT's design and implementation are based on four basic principles: open source visibility for both the source code and design documentation; platform independence; modular design; and user extensibility. The system, released under the NASA Open Source Agreement, runs on Windows, Mac and Linux. User extensions, loaded at run time, have been built for optimization, trajectory visualization, force model extension, and estimation, by parties outside of GMAT's development group. The system has been used to optimize maneuvers for the Lunar Crater Observation and Sensing Satellite (LCROSS) and ARTEMIS missions and is being used for formation design and analysis for the Magnetospheric Multiscale Mission (MMS).

  12. Exclusively Visual Analysis of Classroom Group Interactions

    ERIC Educational Resources Information Center

    Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric

    2016-01-01

    Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data…

  13. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    ERIC Educational Resources Information Center

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  14. Motion-related resource allocation in dynamic wireless visual sensor network environments.

    PubMed

    Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E

    2014-01-01

    This paper investigates quality-driven cross-layer optimization for resource allocation in direct sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with a different tradeoff of quality and complexity. The first one allocates the resources individually for each sensor, whereas the second clusters them according to the recorded level of motion. In order to address the dynamic nature of the recorded scenery and re-allocate the resources whenever it is dictated by the changes in the amount of motion in the scenery, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.
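
    The following Python sketch illustrates, under stated assumptions, a generic particle swarm update together with a warm restart that reuses a previous allocation to seed the swarm, mirroring the restarting idea described above; the objective function, bounds, and coefficients are placeholders rather than the paper's cross-layer distortion model.

        # Generic PSO sketch with an optional warm start from a previous allocation.
        # The objective below is a placeholder for a quality-driven distortion model.
        import numpy as np

        def pso(objective, dim, bounds, n_particles=30, iters=100, warm_start=None, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            lo, hi = bounds
            x = rng.uniform(lo, hi, size=(n_particles, dim))
            if warm_start is not None:                   # restart scheme: seed the swarm
                x[0] = np.clip(warm_start, lo, hi)       # with the previous allocation
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.array([objective(p) for p in x])
            g = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([objective(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[pbest_val.argmin()].copy()
            return g, pbest_val.min()

        # Placeholder objective: distortion decreases with allocated power (toy model).
        distortion = lambda p: np.sum(1.0 / (1.0 + p))
        best, val = pso(distortion, dim=4, bounds=(0.1, 2.0))
        print(best, val)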

  15. Assessment & Commitment Tracking System (ACTS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryant, Robert A.; Childs, Teresa A.; Miller, Michael A.

    2004-12-20

    The ACTS computer code provides a centralized tool for planning and scheduling assessments, tracking and managing actions associated with assessments or that result from an event or condition, and "mining" data for reporting and analyzing information for improving performance. The ACTS application is designed to work with the MS SQL database management system. All database interfaces are written in SQL. The following software is used to develop and support the ACTS application: Cold Fusion, HTML, JavaScript, Quest TOAD, Microsoft Visual Source Safe (VSS), HTML Mailer for sending email, Microsoft SQL, and Microsoft Internet Information Server.

  16. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1984-01-01

    Several short summaries of the work performed during this reporting period are presented. Topics discussed in this document include: (1) resilient seeded errors via simple techniques; (2) knowledge representation for engineering design; (3) analysis of faults in a multiversion software experiment; (4) implementation of parallel programming environment; (5) symbolic execution of concurrent programs; (6) two computer graphics systems for visualization of pressure distribution and convective density particles; (7) design of a source code management system; (8) vectorizing incomplete conjugate gradient on the Cyber 203/205; (9) extensions of domain testing theory and; (10) performance analyzer for the pisces system.

  17. Active-duty military service members’ visual representations of PTSD and TBI in masks

    PubMed Central

    Walker, Melissa S.; Kaimal, Girija; Gonzaga, Adele M. L.; Myers-Coffman, Katherine A.; DeGraba, Thomas J.

    2017-01-01

    Active-duty military service members have a significant risk of sustaining physical and psychological trauma resulting in traumatic brain injury (TBI) and post-traumatic stress disorder (PTSD). Within an interdisciplinary treatment approach at the National Intrepid Center of Excellence, service members participated in mask making during art therapy sessions. This study presents an analysis of the mask-making experiences of service members (n = 370) with persistent symptoms from combat- and mission-related TBI, PTSD, and other concurrent mood issues. Data sources included mask images and therapist notes collected over a five-year period. The data were coded and analyzed using grounded theory methods. Findings indicated that mask making offered visual representations of the self related to individual personhood, relationships, community, and society. Imagery themes referenced the injury, relational supports/losses, identity transitions/questions, cultural metaphors, existential reflections, and conflicted sense of self. These visual insights provided an increased understanding of the experiences of service members, facilitating their recovery. PMID:28452610

  18. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    USGS Publications Warehouse

    Parsons, D.R.; Jackson, P.R.; Czuba, J.A.; Engel, F.L.; Rhoads, B.L.; Oberg, K.A.; Best, J.L.; Mueller, D.S.; Johnson, K.K.; Riley, J.D.

    2013-01-01

    The use of acoustic Doppler current profilers (ADCP) for discharge measurements and three-dimensional flow mapping has increased rapidly in recent years and has been primarily driven by advances in acoustic technology and signal processing. Recent research has developed a variety of methods for processing data obtained from a range of ADCP deployments and this paper builds on this progress by describing new software for processing and visualizing ADCP data collected along transects in rivers or other bodies of water. The new utility, the Velocity Mapping Toolbox (VMT), allows rapid processing (vector rotation, projection, averaging and smoothing), visualization (planform and cross-section vector and contouring), and analysis of a range of ADCP-derived datasets. The paper documents the data processing routines in the toolbox and presents a set of diverse examples that demonstrate its capabilities. The toolbox is applicable to the analysis of ADCP data collected in a wide range of aquatic environments and is made available as open-source code along with this publication.
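
    As a hedged illustration of two of the processing steps listed above (vector rotation into cross-section coordinates and averaging of repeated transects), the short Python sketch below operates on synthetic velocity data; it is not taken from the VMT (MATLAB) source, and the azimuth convention is an assumption.

        # Toy illustration of two processing steps named above: rotate east/north
        # velocity components into cross-section (streamwise/transverse) coordinates
        # and average repeated transects. Not taken from the VMT (MATLAB) source.
        import numpy as np

        def rotate_to_section(u_east, v_north, section_azimuth_deg):
            """Project velocities onto a cross-section whose along-section direction
            points along the given azimuth (degrees clockwise from north)."""
            theta = np.deg2rad(section_azimuth_deg)
            streamwise = u_east * np.sin(theta) + v_north * np.cos(theta)
            transverse = u_east * np.cos(theta) - v_north * np.sin(theta)
            return streamwise, transverse

        # Three repeated transects of 5 ensembles each (hypothetical values, m/s).
        u = np.random.normal(0.8, 0.05, size=(3, 5))
        v = np.random.normal(0.1, 0.05, size=(3, 5))
        s, t = rotate_to_section(u, v, section_azimuth_deg=30.0)
        print("mean streamwise profile:", s.mean(axis=0))
        print("mean transverse profile:", t.mean(axis=0))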

  19. Active-duty military service members' visual representations of PTSD and TBI in masks.

    PubMed

    Walker, Melissa S; Kaimal, Girija; Gonzaga, Adele M L; Myers-Coffman, Katherine A; DeGraba, Thomas J

    2017-12-01

    Active-duty military service members have a significant risk of sustaining physical and psychological trauma resulting in traumatic brain injury (TBI) and post-traumatic stress disorder (PTSD). Within an interdisciplinary treatment approach at the National Intrepid Center of Excellence, service members participated in mask making during art therapy sessions. This study presents an analysis of the mask-making experiences of service members (n = 370) with persistent symptoms from combat- and mission-related TBI, PTSD, and other concurrent mood issues. Data sources included mask images and therapist notes collected over a five-year period. The data were coded and analyzed using grounded theory methods. Findings indicated that mask making offered visual representations of the self related to individual personhood, relationships, community, and society. Imagery themes referenced the injury, relational supports/losses, identity transitions/questions, cultural metaphors, existential reflections, and conflicted sense of self. These visual insights provided an increased understanding of the experiences of service members, facilitating their recovery.

  20. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  1. Extending a CAD-Based Cartesian Mesh Generator for the Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cantrell, J Nathan; Inclan, Eric J; Joshi, Abhijit S

    2012-01-01

    This paper describes the development of a custom preprocessor for the PaRAllel Thermal Hydraulics simulations using Advanced Mesoscopic methods (PRATHAM) code based on an open-source mesh generator, CartGen [1]. PRATHAM is a three-dimensional (3D) lattice Boltzmann method (LBM) based parallel flow simulation software currently under development at the Oak Ridge National Laboratory. The LBM algorithm in PRATHAM requires a uniform, coordinate system-aligned, non-body-fitted structured mesh for its computational domain. CartGen [1], which is a GNU-licensed open source code, already comes with some of the above needed functionalities. However, it needs to be further extended to fully support the LBM-specific preprocessing requirements. Therefore, CartGen is being modified to (i) be compiler independent while converting a neutral-format STL (Stereolithography) CAD geometry to a uniform structured Cartesian mesh, (ii) provide a mechanism for PRATHAM to import the mesh and identify the fluid/solid domains, and (iii) provide a mechanism to visually identify and tag the domain boundaries on which to apply different boundary conditions.

  2. SnopViz, an interactive snow profile visualization tool

    NASA Astrophysics Data System (ADS)

    Fierz, Charles; Egger, Thomas; Gerber, Matthias; Bavay, Mathias; Techel, Frank

    2016-04-01

    SnopViz is a visualization tool for both simulation outputs of the snow-cover model SNOWPACK and observed snow profiles. It has been designed to fulfil the needs of operational services (Swiss Avalanche Warning Service, Avalanche Canada) as well as offer the flexibility required to satisfy the specific needs of researchers. This JavaScript application runs on any modern browser and does not require an active Internet connection. The open source code is available for download from models.slf.ch where examples can also be run. Both the SnopViz library and the SnopViz User Interface will become a full replacement of the current research visualization tool SN_GUI for SNOWPACK. The SnopViz library is a stand-alone application that parses the provided input files, for example, a single snow profile (CAAML file format) or multiple snow profiles as output by SNOWPACK (PRO file format). A plugin architecture allows for handling JSON objects (JavaScript Object Notation) as well and plugins for other file formats may be added easily. The outputs are provided either as vector graphics (SVG) or JSON objects. The SnopViz User Interface (UI) is a browser based stand-alone interface. It runs in every modern browser, including IE, and allows user interaction with the graphs. SVG, the XML based standard for vector graphics, was chosen because of its easy interaction with JS and a good software support (Adobe Illustrator, Inkscape) to manipulate graphs outside SnopViz for publication purposes. SnopViz provides new visualization for SNOWPACK timeline output as well as time series input and output. The actual output format for SNOWPACK timelines was retained while time series are read from SMET files, a file format used in conjunction with the open source data handling code MeteoIO. Finally, SnopViz is able to render single snow profiles, either observed or modelled, that are provided as CAAML-file. This file format (caaml.org/Schemas/V5.0/Profiles/SnowProfileIACS) is an international standard to exchange snow profile data. It is supported by the International Association of Cryospheric Sciences (IACS) and was developed in collaboration with practitioners (Avalanche Canada).

  3. Indexing sensory plasticity: Evidence for distinct Predictive Coding and Hebbian learning mechanisms in the cerebral cortex.

    PubMed

    Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D

    2018-04-30

    The Roving Mismatch Negativity (MMN), and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Efficient visual object and word recognition relies on high spatial frequency coding in the left posterior fusiform gyrus: evidence from a case-series of patients with ventral occipito-temporal cortex damage.

    PubMed

    Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A

    2013-11-01

    Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.

  5. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.

    PubMed

    Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo

    2015-01-01

    Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. gadfly: A pandas-based Framework for Analyzing GADGET Simulation Data

    NASA Astrophysics Data System (ADS)

    Hummel, Jacob A.

    2016-11-01

    We present the first public release (v0.1) of the open-source gadget Dataframe Library: gadfly. The aim of this package is to leverage the capabilities of the broader python scientific computing ecosystem by providing tools for analyzing simulation data from the astrophysical simulation codes gadget and gizmo using pandas, a thoroughly documented, open-source library providing high-performance, easy-to-use data structures that is quickly becoming the standard for data analysis in python. Gadfly is a framework for analyzing particle-based simulation data stored in the HDF5 format using pandas DataFrames. The package enables efficient memory management, includes utilities for unit handling, coordinate transformations, and parallel batch processing, and provides highly optimized routines for visualizing smoothed-particle hydrodynamics data sets.
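
    A minimal sketch of the underlying idea (particle data read from an HDF5 snapshot into a pandas DataFrame) is shown below; it does not use gadfly's actual API, and the dataset names follow the common GADGET/GIZMO snapshot layout, which is an assumption about the file at hand.

        # Hedged sketch of the general idea (particle data -> pandas DataFrame),
        # not gadfly's actual API. Dataset names follow the common GADGET/GIZMO
        # HDF5 snapshot layout and are assumptions about the file at hand.
        import h5py
        import pandas as pd

        def gas_particles_to_dataframe(snapshot_path):
            with h5py.File(snapshot_path, "r") as f:
                coords = f["PartType0/Coordinates"][...]     # (N, 3) positions
                dens = f["PartType0/Density"][...]           # (N,) densities
            df = pd.DataFrame(coords, columns=["x", "y", "z"])
            df["density"] = dens
            return df

        # Hypothetical usage:
        # df = gas_particles_to_dataframe("snapshot_000.hdf5")
        # print(df.describe())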

  7. Coastal Online Analysis and Synthesis Tool 2.0 (COAST)

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Navard, Andrew R.; Nguyen, Beth T.

    2009-01-01

    The Coastal Online Assessment and Synthesis Tool (COAST) 3D geobrowser has been developed to integrate disparate coastal datasets from NASA and other sources into a desktop tool that provides new data visualization and analysis capabilities for coastal researchers, managers, and residents. It is built upon the widely used, NASA-developed open-source World Wind geobrowser from NASA Ames (Patrick Hogan et al.) and is developed in .NET and C#. It leverages code samples shared by the World Wind community, and the direction of the COAST 2.0 enhancements is based on coastal science community feedback and a needs assessment (GOMA). The main objective is to empower the user to bring more user-meaningful data into a multi-layered, multi-temporal spatial context.

  8. Scater: pre-processing, quality control, normalization and visualization of single-cell RNA-seq data in R.

    PubMed

    McCarthy, Davis J; Campbell, Kieran R; Lun, Aaron T L; Wills, Quin F

    2017-04-15

    Single-cell RNA sequencing (scRNA-seq) is increasingly used to study gene expression at the level of individual cells. However, preparing raw sequence data for further analysis is not a straightforward process. Biases, artifacts and other sources of unwanted variation are present in the data, requiring substantial time and effort to be spent on pre-processing, quality control (QC) and normalization. We have developed the R/Bioconductor package scater to facilitate rigorous pre-processing, quality control, normalization and visualization of scRNA-seq data. The package provides a convenient, flexible workflow to process raw sequencing reads into a high-quality expression dataset ready for downstream analysis. scater provides a rich suite of plotting tools for single-cell data and a flexible data structure that is compatible with existing tools and can be used as infrastructure for future software development. The open-source code, along with installation instructions, vignettes and case studies, is available through Bioconductor at http://bioconductor.org/packages/scater . davis@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  9. Color-coded visualization of magnetic resonance imaging multiparametric maps

    NASA Astrophysics Data System (ADS)

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g., for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only for one-dimensional information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: In imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.
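
    A minimal sketch of the core idea, tri-variate color coding by assigning three normalized, co-registered parameter maps to the R, G, and B channels of a single image, is given below; the parameter names and normalization ranges are illustrative assumptions, not the authors' pipeline.

        # Minimal sketch of tri-variate color coding: three co-registered parameter
        # maps are normalized and assigned to the R, G and B channels of one image.
        # The parameter names and value ranges are illustrative assumptions.
        import numpy as np

        def to_unit(img, lo, hi):
            return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

        shape = (64, 64)
        adc = np.random.uniform(0.5e-3, 2.5e-3, shape)     # diffusion map (toy values)
        ktrans = np.random.uniform(0.0, 0.5, shape)        # perfusion map (toy values)
        t2 = np.random.uniform(20, 120, shape)             # T2 map (toy values)

        rgb = np.stack([to_unit(adc, 0.5e-3, 2.5e-3),
                        to_unit(ktrans, 0.0, 0.5),
                        to_unit(t2, 20, 120)], axis=-1)    # (64, 64, 3) in [0, 1]
        print(rgb.shape, rgb.min(), rgb.max())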

  10. Report of experiments and evidence for ASC L2 milestone 4467 : demonstration of a legacy application's path to exascale.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew L.; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke

    2012-03-01

    This report documents thirteen of Sandia's contributions to the Computational Systems and Software Environment (CSSE) within the Advanced Simulation and Computing (ASC) program between fiscal years 2009 and 2012. It describes their impact on ASC applications. Most contributions are implemented in lower software levels allowing for application improvement without source code changes. Improvements are identified in such areas as reduced run time, characterizing power usage, and Input/Output (I/O). Other experiments are more forward looking, demonstrating potential bottlenecks using mini-application versions of the legacy codes and simulating their network activity on Exascale-class hardware. The purpose of this report is to prove that the team has completed milestone 4467-Demonstration of a Legacy Application's Path to Exascale. Cielo is expected to be the last capability system on which existing ASC codes can run without significant modifications. This assertion will be tested to determine where the breaking point is for an existing highly scalable application. The goal is to stretch the performance boundaries of the application by applying recent CSSE R&D in areas such as resilience, power, I/O, visualization services, SMARTMAP, lightweight LWKs, virtualization, simulation, and feedback loops. Dedicated system time reservations and/or CCC allocations will be used to quantify the impact of system-level changes to extend the life and performance of the ASC code base. Finally, a simulation of anticipated exascale-class hardware will be performed using SST to supplement the calculations. Determine where the breaking point is for an existing highly scalable application: Chapter 15 presented the CSSE work that sought to identify the breaking point in two ASC legacy applications-Charon and CTH. Their mini-app versions were also employed to complete the task. There is no single breaking point as more than one issue was found with the two codes. The results were that applications can expect to encounter performance issues related to the computing environment, system software, and algorithms. Careful profiling of runtime performance will be needed to identify the source of an issue, in strong combination with knowledge of system software and application source code.

  11. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    PubMed

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.

  12. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL has on average added 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  13. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres.

    PubMed

    Ince, Robin A A; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J; Rousselet, Guillaume A; Schyns, Philippe G

    2016-08-22

    A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior-the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. © The Author 2016. Published by Oxford University Press.

  14. Seeing the mean: ensemble coding for sets of faces.

    PubMed

    Haberman, Jason; Whitney, David

    2009-06-01

    We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces-a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. (c) 2009 APA, all rights reserved.

  15. Visual coding of human bodies: perceptual aftereffects reveal norm-based, opponent coding of body identity.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J

    2013-04-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this aftereffect increased with adaptor extremity, as predicted by norm-based, opponent coding of body identity. A size change between adapt and test bodies minimized the effects of low-level, retinotopic adaptation. These results demonstrate that body identity, like face identity, is opponent coded in higher-level vision. More generally, they show that a norm-based multidimensional framework, which is well established for face perception, may provide a powerful framework for understanding body perception.

  16. NGL Viewer: Web-based molecular graphics for large complexes.

    PubMed

    Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W

    2018-05-29

    The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.

  17. Calypso: a user-friendly web-server for mining and visualizing microbiome-environment interactions.

    PubMed

    Zakrzewski, Martha; Proietti, Carla; Ellis, Jonathan J; Hasan, Shihab; Brion, Marie-Jo; Berger, Bernard; Krause, Lutz

    2017-03-01

    Calypso is an easy-to-use online software suite that allows non-expert users to mine, interpret and compare taxonomic information from metagenomic or 16S rDNA datasets. Calypso has a focus on multivariate statistical approaches that can identify complex environment-microbiome associations. The software enables quantitative visualizations, statistical testing, multivariate analysis, supervised learning, factor analysis, multivariable regression, network analysis and diversity estimates. Comprehensive help pages, tutorials and videos are provided via a wiki page. The web-interface is accessible via http://cgenome.net/calypso/ . The software is programmed in Java, PERL and R and the source code is available from Zenodo ( https://zenodo.org/record/50931 ). The software is freely available for non-commercial users. l.krause@uq.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  18. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

    In this paper we present a modern object oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats but import/export of industry standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3 and NCSA HDF is available. At the time of writing built-in implemented data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  19. World Wind 3D Earth Viewing

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick; Maxwell, Christopher; Kim, Randolph; Gaskins, Tom

    2007-01-01

    World Wind allows users to zoom from satellite altitude down to any place on Earth, leveraging high-resolution LandSat imagery and SRTM (Shuttle Radar Topography Mission) elevation data to experience Earth in visually rich 3D. In addition to Earth, World Wind can also visualize other planets, and there are already comprehensive data sets for Mars and the Earth's moon, which are as easily accessible as those of Earth. There have been more than 20 million downloads to date, and the software is being used heavily by the Department of Defense due to the code's ability to be extended and the evolution of the code courtesy of NASA and the user community. Primary features include the dynamic access to public domain imagery and its ease of use. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. A JAVA version will be available soon. Navigation is automated with single clicks of a mouse, or by typing in any location to automatically zoom in to see it. The World Wind install package contains the necessary requirements such as the .NET runtime and managed DirectX library. World Wind can display combinations of data from a variety of sources, including Blue Marble, LandSat 7, SRTM, NASA Scientific Visualization Studio, GLOBE, and much more. A thorough list of features, the user manual, a key chart, and screen shots are available at http://worldwind.arc.nasa.gov.

  20. Intuitive representation of surface properties of biomolecules using BioBlender.

    PubMed

    Andrei, Raluca Mihaela; Callieri, Marco; Zini, Maria Francesca; Loni, Tiziana; Maraziti, Giuseppe; Pan, Mike Chen; Zoppè, Monica

    2012-03-28

    In living cells, proteins are in continuous motion and interaction with the surrounding medium and/or other proteins and ligands. These interactions are mediated by protein features such as electrostatic and lipophilic potentials. The availability of protein structures enables the study of their surfaces and surface characteristics, based on atomic contribution. Traditionally, these properties are calculated by physico-chemical programs and visualized as a range of colors that varies according to the tool used, imposing the necessity of a legend to decrypt it. The use of color to encode both characteristics makes the simultaneous visualization almost impossible, requiring these features to be visualized in different images. In this work, we describe a novel and intuitive code for the simultaneous visualization of these properties. Recent advances in 3D animation and rendering software have not yet been exploited for the representation of biomolecules in an intuitive, animated form. For our purpose we use Blender, an open-source, free, cross-platform application used professionally for 3D work. On the basis of Blender, we developed BioBlender, dedicated to biological work: elaboration of protein motion with simultaneous visualization of their chemical and physical features. Electrostatic and lipophilic potentials are calculated using physico-chemical software and scripts, organized and accessed through the BioBlender interface. A new visual code is introduced for molecular lipophilic potential: a range of optical features going from smooth-shiny for hydrophobic regions to rough-dull for hydrophilic ones. Electrostatic potential is represented as animated line particles that flow along field lines, proportional to the total charge of the protein. Our system permits visualization of molecular features and, in the case of moving proteins, their continuous perception, calculated for each conformation during motion. Using real world tactile/sight feelings, the nanoscale world of proteins becomes more understandable, familiar to our everyday life, making it easier to introduce "un-seen" phenomena (concepts) such as hydropathy or charges. Moreover, this representation contributes to gain insight into molecular functions by drawing viewer's attention to the most active regions of the protein. The program, available for Windows, Linux and MacOS, can be downloaded freely from the dedicated website http://www.bioblender.eu.

  1. Intuitive representation of surface properties of biomolecules using BioBlender

    PubMed Central

    2012-01-01

    Background In living cells, proteins are in continuous motion and interaction with the surrounding medium and/or other proteins and ligands. These interactions are mediated by protein features such as electrostatic and lipophilic potentials. The availability of protein structures enables the study of their surfaces and surface characteristics, based on atomic contribution. Traditionally, these properties are calculated by physico-chemical programs and visualized as a range of colors that varies according to the tool used, imposing the necessity of a legend to decrypt it. The use of color to encode both characteristics makes the simultaneous visualization almost impossible, requiring these features to be visualized in different images. In this work, we describe a novel and intuitive code for the simultaneous visualization of these properties. Methods Recent advances in 3D animation and rendering software have not yet been exploited for the representation of biomolecules in an intuitive, animated form. For our purpose we use Blender, an open-source, free, cross-platform application used professionally for 3D work. On the basis of Blender, we developed BioBlender, dedicated to biological work: elaboration of protein motion with simultaneous visualization of their chemical and physical features. Electrostatic and lipophilic potentials are calculated using physico-chemical software and scripts, organized and accessed through the BioBlender interface. Results A new visual code is introduced for molecular lipophilic potential: a range of optical features going from smooth-shiny for hydrophobic regions to rough-dull for hydrophilic ones. Electrostatic potential is represented as animated line particles that flow along field lines, proportional to the total charge of the protein. Conclusions Our system permits visualization of molecular features and, in the case of moving proteins, their continuous perception, calculated for each conformation during motion. Using real world tactile/sight feelings, the nanoscale world of proteins becomes more understandable, familiar to our everyday life, making it easier to introduce "un-seen" phenomena (concepts) such as hydropathy or charges. Moreover, this representation contributes to gain insight into molecular functions by drawing viewer's attention to the most active regions of the protein. The program, available for Windows, Linux and MacOS, can be downloaded freely from the dedicated website http://www.bioblender.eu PMID:22536962

  2. Visual loss after spine surgery: a population-based study.

    PubMed

    Patil, Chirag G; Lad, Eleonora M; Lad, Shivanand P; Ho, Chris; Boakye, Maxwell

    2008-06-01

    Retrospective cohort study using National Inpatient Sample administrative data. To determine national estimates of visual impairment and ischemic optic neuropathy after spine surgery. Loss of vision after spine surgery is a rare but devastating complication that has gained increasing recognition in the recent literature. National population-based studies of visual complications after spine surgery are lacking. All patients from 1993 to 2002 who underwent spine surgery (Clinical Classifications Software procedure codes 3 and 158) and who had ischemic optic neuropathy (ION) (ICD9-CM code 377.41), central retinal artery occlusion (CRAO) (ICD9-CM code 362.31) or non-ION, non-CRAO perioperative visual impairment (ICD9-CM codes: 369, 368.4, 368.8-9, 368.11-13) were included. Univariate and multivariate analysis were performed to identify potential risk factors. The overall incidence of visual disturbance after spine surgery was 0.094%. Spine surgery for scoliosis correction and posterior lumbar fusion had the highest rates of postoperative visual loss of 0.28% and 0.14% respectively. Pediatric patients (<18 years) were 5.8 times and elderly patients (>84 years) were 3.2 times more likely than patients 18 to 44 years of age to develop non-ION, non-CRAO visual loss after spine surgery. Patients with peripheral vascular disease (OR = 2.0), hypertension (OR = 1.3), and those who received blood transfusion (OR = 2.2) were more likely to develop non-ION, non-CRAO vision loss after spine surgery. Ischemic optic neuropathy was present in 0.006% of patients. Hypotension (OR = 10.1), peripheral vascular disease (OR = 6.3) and anemia (OR = 5.9) were the strongest risk factors identified for the development of ION. We used multivariate analysis to identify significant risk factors for visual loss after spine surgery. National population-based estimate of visual impairment after spine surgery confirms that ophthalmic complications after spine surgery are rare. Since visual loss may be reversible in the early stages, awareness, evaluation and prompt management of this rare but potentially devastating complication is critical.

  3. Towards an Analysis of Visual Images in School Science Textbooks and Press Articles about Science and Technology

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Kostas; Koulaidis, Vasilis; Sklaveniti, Spyridoula

    2003-04-01

    This paper aims at presenting the application of a grid for the analysis of the pedagogic functions of visual images included in school science textbooks and daily press articles about science and technology. The analysis is made using the dimensions of content specialisation (classification) and social-pedagogic relationships (framing) promoted by the images as well as the elaboration and abstraction of the corresponding visual code (formality), thus combining pedagogical and socio-semiotic perspectives. The grid is applied to the analysis of 2819 visual images collected from school science textbooks and another 1630 visual images additionally collected from the press. The results show that the science textbooks in comparison to the press material: a) use ten times more images, b) use more images so as to familiarise their readers with the specialised techno-scientific content and codes, and c) tend to create a sense of higher empowerment for their readers by using the visual mode. Furthermore, as the educational level of the school science textbooks (i.e., from primary to lower secondary level) rises, the content specialisation projected by the visual images and the elaboration and abstraction of the corresponding visual code also increases. The above results have implications for the terms and conditions for the effective exploitation of visual material as the educational level rises as well as for the effective incorporation of visual images from press material into science classes.

  4. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  5. Parallelization of sequential Gaussian, indicator and direct simulation algorithms

    NASA Astrophysics Data System (ADS)

    Nunes, Ruben; Almeida, José A.

    2010-08-01

    Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amount of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
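
    For orientation, the following heavily simplified Python sketch shows the sequential Gaussian simulation loop on a 1-D grid (random path, simple kriging from nearby previously simulated nodes, and a draw from the conditional Gaussian); it assumes a standard-normal field with an exponential covariance and is far from the parallel, GSLIB-derived code described in the paper.

        # Minimal sketch of sequential Gaussian simulation on a 1-D grid with an
        # exponential covariance; simple kriging from the nearest previously
        # simulated nodes. Illustrative only, not the GSLIB-based parallel code.
        import numpy as np

        def sgs_1d(n, cov_range=10.0, n_neigh=8, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            cov = lambda h: np.exp(-3.0 * np.abs(h) / cov_range)   # exponential model, unit sill
            values = np.full(n, np.nan)
            path = rng.permutation(n)                              # random visiting order
            for node in path:
                known = np.flatnonzero(~np.isnan(values))
                if known.size == 0:
                    values[node] = rng.standard_normal()
                    continue
                neigh = known[np.argsort(np.abs(known - node))[:n_neigh]]
                C = cov(neigh[:, None] - neigh[None, :]) + 1e-10 * np.eye(neigh.size)
                c = cov(neigh - node)
                w = np.linalg.solve(C, c)                          # simple kriging weights
                mean = w @ values[neigh]                           # zero global mean assumed
                var = max(1.0 - w @ c, 0.0)                        # kriging variance
                values[node] = mean + np.sqrt(var) * rng.standard_normal()
            return values

        print(sgs_1d(200)[:10])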

  6. The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)

    1997-01-01

    Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore, exploit behavioral variations among/within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.
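
    As a hedged illustration of the "event recorder" idea, and not of the AIMS instrumentor itself (which rewrites FORTRAN and C source before compilation), the following Python sketch records timestamped entry and exit events for a function so that a separate viewer could later reconstruct a timeline.

        # Hedged illustration of the "event recorder" idea: a decorator that logs
        # timestamped entry/exit events for a function, which a separate viewer
        # could later reconstruct into a timeline. Not the AIMS instrumentor itself.
        import time, functools

        trace = []                                   # in-memory event log

        def instrument(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                trace.append(("enter", fn.__name__, time.perf_counter()))
                try:
                    return fn(*args, **kwargs)
                finally:
                    trace.append(("exit", fn.__name__, time.perf_counter()))
            return wrapper

        @instrument
        def work(n):
            return sum(i * i for i in range(n))

        work(100000)
        for event in trace:
            print(event)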

  7. Reading color barcodes using visual snakes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaub, Hanspeter

    2004-05-01

    Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal if the target is right-side-up or up-side-down.
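
    A toy Python sketch of the underlying idea, reading the embedded bars as runs of darker "value" pixels along a scanline inside the hue-tracked target, is shown below on synthetic data; it is not the visual-snake tracker or the 3-of-9 decoder described in the report.

        # Toy illustration of the idea: read the embedded bars as runs of low/high
        # "value" along a scanline across the target. Synthetic data; this is not
        # the visual-snake tracker described in the report.
        import numpy as np

        # Synthetic 1-D scanline of the HSV value channel across the target:
        # bars are printed slightly darker than the target background.
        scanline = np.array([0.9]*5 + [0.7]*3 + [0.9]*2 + [0.7]*6 + [0.9]*4)

        is_bar = scanline < 0.8                       # threshold between bar and background
        # Run-length encode the scanline into (is_bar, width) pairs.
        edges = np.flatnonzero(np.diff(is_bar.astype(int))) + 1
        runs = np.split(is_bar, edges)
        widths = [(bool(r[0]), len(r)) for r in runs]
        print(widths)   # [(False, 5), (True, 3), (False, 2), (True, 6), (False, 4)]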

  8. Data and code for the exploratory data analysis of the electrical energy demand in the time domain in Greece.

    PubMed

    Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos

    2017-08-01

    We present data and code for visualizing the electrical energy data and weather-, climate-related and socioeconomic variables in the time domain in Greece. The electrical energy data include hourly demand, weekly-ahead forecasted values of the demand provided by the Greek Independent Power Transmission Operator and pricing values in Greece. We also present the daily temperature in Athens and the Gross Domestic Product of Greece. The code combines the data to a single report, which includes all visualizations with combinations of all variables in multiple time scales. The data and code were used in Tyralis et al. (2017) [1].
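
    A small, hedged example of the kind of multi-scale time-domain visualization the data and code support is sketched below; the column name, values, and layout are assumptions rather than the published dataset's actual schema.

        # Small example of multi-scale time-domain visualization of hourly demand.
        # The series values and name are toy assumptions, not the actual dataset.
        import pandas as pd
        import matplotlib.pyplot as plt

        # Toy hourly demand series standing in for the Greek load data.
        idx = pd.date_range("2016-01-01", periods=24 * 90, freq="H")
        demand = pd.Series(5000 + 800 * (idx.hour.isin(range(9, 21))).astype(int),
                           index=idx, name="demand_MW")

        fig, axes = plt.subplots(3, 1, figsize=(8, 6), sharex=True)
        demand.plot(ax=axes[0], title="hourly")
        demand.resample("D").mean().plot(ax=axes[1], title="daily mean")
        demand.resample("W").mean().plot(ax=axes[2], title="weekly mean")
        plt.tight_layout()
        plt.show()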

  9. How Object-Specific Are Object Files? Evidence for Integration by Location

    ERIC Educational Resources Information Center

    van Dam, Wessel O.; Hommel, Bernhard

    2010-01-01

    Given the distributed representation of visual features in the human brain, binding mechanisms are necessary to integrate visual information about the same perceptual event. It has been assumed that feature codes are bound into object files--pointers to the neural codes of the features of a given event. The present study investigated the…

  10. Learning about Probability from Text and Tables: Do Color Coding and Labeling through an Interactive-User Interface Help?

    ERIC Educational Resources Information Center

    Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.

    2016-01-01

    Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…

  11. Implant for in-vivo parameter monitoring, processing and transmitting

    DOEpatents

    Ericson, Milton N [Knoxville, TN; McKnight, Timothy E [Greenback, TN; Smith, Stephen F [London, TN; Hylton, James O [Clinton, TN

    2009-11-24

    The present invention relates to a completely implantable intracranial pressure monitor, which can couple to existing fluid shunting systems as well as other internal monitoring probes. The implant sensor produces an analog data signal which is then converted electronically to a digital pulse by generation of a spreading code signal and then transmitted to a location outside the patient by a radio-frequency transmitter to an external receiver. The implanted device can receive power from an internal source as well as an inductive external source. Remote control of the implant is also provided by a control receiver which passes commands from an external source to the implant system logic. Alarm parameters can be programmed into the device which are capable of producing an audible or visual alarm signal. The utility of the monitor can be greatly expanded by using multiple pressure sensors simultaneously or by combining sensors of various physiological types.

  12. Implantable device for in-vivo intracranial and cerebrospinal fluid pressure monitoring

    DOEpatents

    Ericson, Milton N.; McKnight, Timothy E.; Smith, Stephen F.; Hylton, James O.

    2003-01-01

    The present invention relates to a completely implantable intracranial pressure monitor, which can couple to existing fluid shunting systems as well as other internal monitoring probes. The implant sensor produces an analog data signal which is then converted electronically to a digital pulse by generation of a spreading code signal and then transmitted to a location outside the patient by a radio-frequency transmitter to an external receiver. The implanted device can receive power from an internal source as well as an inductive external source. Remote control of the implant is also provided by a control receiver which passes commands from an external source to the implant system logic. Alarm parameters can be programmed into the device which are capable of producing an audible or visual alarm signal. The utility of the monitor can be greatly expanded by using multiple pressure sensors simultaneously or by combining sensors of various physiological types.

  13. Documenting AUTOGEN and APGEN Model Files

    NASA Technical Reports Server (NTRS)

    Gladden, Roy E.; Khanampompan, Teerapat; Fisher, Forest W.; DelGuericio, Chris C.

    2008-01-01

    A computer program called "autogen hypertext map generator" satisfies a need for documenting and assisting in visualization of, and navigation through, model files used in the AUTOGEN and APGEN software mentioned in the two immediately preceding articles. This program parses autogen script files, autogen model files, PERL scripts, and apgen activity-definition files and produces a hypertext map of the files to aid in the navigation of the model. This program also provides a facility for adding notes and descriptions, beyond what is in the source model represented by the hypertext map. Further, this program provides access to a summary of the model through variable, function, subroutine, activity, and resource declarations, as well as providing full access to the source model and source code. The use of the tool enables easy access to the declarations and the ability to traverse routines and calls while analyzing the model.

  14. Cell-assembly coding in several memory processes.

    PubMed

    Sakurai, Y

    1998-01-01

    The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are overlaps of neurons among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.

  15. Self-Taught Low-Rank Coding for Visual Learning.

    PubMed

    Li, Sheng; Li, Kang; Fu, Yun

    2018-03-01

    The lack of labeled data presents a common challenge in many computer vision and machine learning tasks. Semisupervised learning and transfer learning methods have been developed to tackle this challenge by utilizing auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, which is a special type of transfer learning, has fewer restrictions on the choice of auxiliary data. It has shown promising performance in visual learning. However, existing self-taught learning methods usually ignore the structure information in data. In this paper, we focus on building a self-taught coding framework, which can effectively utilize the rich low-level pattern information abstracted from the auxiliary domain, in order to characterize the high-level structural information in the target domain. By leveraging a high quality dictionary learned across auxiliary and target domains, the proposed approach learns expressive codings for the samples in the target domain. Since many types of visual data have been proven to contain subspace structures, a low-rank constraint is introduced into the coding objective to better characterize the structure of the given target set. The proposed representation learning framework is called self-taught low-rank (S-Low) coding, which can be formulated as a nonconvex rank-minimization and dictionary learning problem. We devise an efficient majorization-minimization augmented Lagrange multiplier algorithm to solve it. Based on the proposed S-Low coding mechanism, both unsupervised and supervised visual learning algorithms are derived. Extensive experiments on five benchmark data sets demonstrate the effectiveness of our approach.

  16. Open-Source Python Tools for Deploying Interactive GIS Dashboards for a Billion Datapoints on a Laptop

    NASA Astrophysics Data System (ADS)

    Steinberg, P. D.; Bednar, J. A.; Rudiger, P.; Stevens, J. L. R.; Ball, C. E.; Christensen, S. D.; Pothina, D.

    2017-12-01

    The rich variety of software libraries available in the Python scientific ecosystem provides a flexible and powerful alternative to traditional integrated GIS (geographic information system) programs. Each such library focuses on doing a certain set of general-purpose tasks well, and Python makes it relatively simple to glue the libraries together to solve a wide range of complex, open-ended problems in Earth science. However, choosing an appropriate set of libraries can be challenging, and it is difficult to predict how much "glue code" will be needed for any particular combination of libraries and tasks. Here we present a set of libraries that have been designed to work well together to build interactive analyses and visualizations of large geographic datasets, in standard web browsers. The resulting workflows run on ordinary laptops even for billions of data points, and easily scale up to larger compute clusters when available. The declarative top-level interface used in these libraries means that even complex, fully interactive applications can be built and deployed as web services using only a few dozen lines of code, making it simple to create and share custom interactive applications even for datasets too large for most traditional GIS systems. The libraries we will cover include GeoViews (HoloViews extended for geographic applications) for declaring visualizable/plottable objects, Bokeh for building visual web applications from GeoViews objects, Datashader for rendering arbitrarily large datasets faithfully as fixed-size images, Param for specifying user-modifiable parameters that model your domain, Xarray for computing with n-dimensional array data, Dask for flexibly dispatching computational tasks across processors, and Numba for compiling array-based Python code down to fast machine code. We will show how to use the resulting workflow with static datasets and with simulators such as GSSHA or AdH, allowing you to deploy flexible, high-performance web-based dashboards for your GIS data or simulations without needing major investments in code development or maintenance.
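
    As a minimal sketch of the declarative workflow described above (assuming the pandas, numpy, holoviews, datashader, and bokeh packages are installed; the dataset and output file name are hypothetical), a million random points can be rasterized to a fixed-size image and saved as an interactive HTML plot:

      import numpy as np
      import pandas as pd
      import holoviews as hv
      from holoviews.operation.datashader import datashade

      hv.extension("bokeh")  # render with Bokeh in the browser

      # Synthetic stand-in for a large table of longitude/latitude points
      n = 1_000_000
      df = pd.DataFrame({
          "lon": np.random.uniform(-120, -80, n),
          "lat": np.random.uniform(25, 50, n),
      })

      points = hv.Points(df, kdims=["lon", "lat"])
      shaded = datashade(points, dynamic=False)  # Datashader rasterizes all points to a fixed-size image
      hv.save(shaded, "points.html")             # static snapshot; serve via Bokeh for full interactivity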

  17. Explorative visual analytics on interval-based genomic data and their metadata.

    PubMed

    Jalili, Vahid; Matteucci, Matteo; Masseroli, Marco; Ceri, Stefano

    2017-12-04

    With the spread of public repositories of NGS processed data, the availability of user-friendly and effective tools for data exploration, analysis and visualization is becoming very relevant. These tools enable interactive analytics, an exploratory approach for the seamless "sense-making" of data through on-the-fly integration of analysis and visualization phases, suggested not only for evaluating processing results, but also for designing and adapting NGS data analysis pipelines. This paper presents abstractions for supporting the early analysis of NGS processed data and their implementation in an associated tool, named GenoMetric Space Explorer (GeMSE). This tool serves the needs of the GenoMetric Query Language, an innovative cloud-based system for computing complex queries over heterogeneous processed data. It can also be used starting from any text files in standard BED, BroadPeak, NarrowPeak, GTF, or general tab-delimited format, containing numerical features of genomic regions; metadata can be provided as text files in tab-delimited attribute-value format. GeMSE allows interactive analytics, consisting of on-the-fly cycling among steps of data exploration, analysis and visualization that help biologists and bioinformaticians in making sense of heterogeneous genomic datasets. By means of an explorative interaction support, users can trace past activities and quickly recover their results, seamlessly going backward and forward in the analysis steps and comparative visualizations of heatmaps. GeMSE's effective application and practical usefulness are demonstrated through significant use cases of biological interest. GeMSE is available at http://www.bioinformatics.deib.polimi.it/GeMSE/ , and its source code is available at https://github.com/Genometric/GeMSE under the GPLv3 open-source license.

  18. Meaning and Identities: A Visual Performative Pedagogy for Socio-Cultural Learning

    ERIC Educational Resources Information Center

    Grushka, Kathryn

    2009-01-01

    In this article I present personalised socio-cultural inquiry in visual art education as a critical and expressive material praxis. The model of "Visual Performative Pedagogy and Communicative Proficiency for the Visual Art Classroom" is presented as a legitimate means of manipulating visual codes, communicating meaning and mediating…

  19. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
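
    The kind of discrete allocation such a framework performs can be illustrated with a toy exhaustive search (this is not the authors' algorithm; all rate and distortion numbers below are invented, whereas the paper derives them from experimentally measured universal rate-distortion characteristics): pick one (source rate, channel code rate) pair per scalable layer so that the total transmitted rate fits a budget and the summed expected distortion is minimized.

      from itertools import product

      # (source_kbps, channel_code_rate, expected_distortion) candidates per layer -- made-up values
      OPTIONS = {
          "base":        [(64, 1 / 2, 40.0), (96, 2 / 3, 35.0), (128, 4 / 5, 33.0)],
          "enhancement": [(0, 1.0, 12.0), (64, 1 / 2, 8.0), (96, 2 / 3, 6.5)],
      }
      BUDGET_KBPS = 192  # total source-plus-channel bit rate available

      def transmitted_rate(source_kbps, code_rate):
          return source_kbps / code_rate  # channel coding expands the source bits

      best = None
      for combo in product(*OPTIONS.values()):
          rate = sum(transmitted_rate(s, r) for s, r, _ in combo)
          distortion = sum(d for *_, d in combo)
          if rate <= BUDGET_KBPS and (best is None or distortion < best[0]):
              best = (distortion, combo)

      print(best)  # lowest-distortion allocation that respects the rate budget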

  20. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    PubMed

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
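
    A generic, monocular flavor of this idea can be sketched with off-the-shelf dictionary learning (this is not one of the three models used in the study, and it ignores the binocular rearing conditions): fitting a sparse dictionary to whitened natural-image patches yields localized, oriented basis functions reminiscent of V1 simple-cell receptive fields. The sketch assumes scikit-learn and uses its bundled sample image.

      import numpy as np
      from sklearn.datasets import load_sample_image
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import extract_patches_2d

      # Grayscale natural image -> small patches with the mean (DC) removed
      img = load_sample_image("china.jpg").mean(axis=2) / 255.0
      patches = extract_patches_2d(img, (12, 12), max_patches=10000, random_state=0)
      X = patches.reshape(len(patches), -1)
      X -= X.mean(axis=1, keepdims=True)

      # Sparse dictionary learning; alpha controls the sparsity penalty
      dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=256, random_state=0)
      dico.fit(X)

      # Each row is a learned basis function ("receptive field") as a 12x12 image
      fields = dico.components_.reshape(-1, 12, 12)
      print(fields.shape)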

  1. Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input

    PubMed Central

    Hunt, Jonathan J.; Dayan, Peter; Goodhill, Geoffrey J.

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields. PMID:23675290

  2. Exploring Modality Compatibility in the Response-Effect Compatibility Paradigm.

    PubMed

    Földes, Noémi; Philipp, Andrea M; Badets, Arnaud; Koch, Iring

    2017-01-01

    According to ideomotor theory, action planning is based on anticipatory perceptual representations of action-effects. This aspect of action control has been investigated in studies using the response-effect compatibility (REC) paradigm, in which responses have been shown to be facilitated if ensuing perceptual effects share codes with the response based on dimensional overlap (i.e., REC). Additionally, according to the notion of ideomotor compatibility, certain response-effect (R-E) mappings will be stronger than others because some response features resemble the anticipated sensory response effects more strongly than others (e.g., since vocal responses usually produce auditory effects, an auditory stimulus should be anticipated in a stronger manner following vocal responses rather than following manual responses). Yet, systematic research on this matter is lacking. In the present study, two REC experiments aimed to explore the influence of R-E modality mappings. In Experiment 1, vocal number word responses produced visual effects on the screen (digits vs. number words; i.e., visual-symbolic vs. visual-verbal effect codes). The REC effect was only marginally larger for visual-verbal than for visual-symbolic effects. Using verbal effect codes in Experiment 2, we found that the REC effect was larger with auditory-verbal R-E mapping than with visual-verbal R-E mapping. Overall, the findings support the hypothesis of a role of R-E modality mappings in REC effects, suggesting both further evidence for ideomotor accounts as well as code-specific and modality-specific contributions to effect anticipation.

  3. Functional interplay of top-down attention with affective codes during visual short-term memory maintenance.

    PubMed

    Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu

    2018-06-01

    Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. ePlant: Visualizing and Exploring Multiple Levels of Data for Hypothesis Generation in Plant Biology

    PubMed Central

    Waese, Jamie; Fan, Jim; Yu, Hans; Fucile, Geoffrey; Shi, Ruian; Cumming, Matthew; Town, Chris; Stuerzlinger, Wolfgang

    2017-01-01

    A big challenge in current systems biology research arises when different types of data must be accessed from separate sources and visualized using separate tools. The high cognitive load required to navigate such a workflow is detrimental to hypothesis generation. Accordingly, there is a need for a robust research platform that incorporates all data and provides integrated search, analysis, and visualization features through a single portal. Here, we present ePlant (http://bar.utoronto.ca/eplant), a visual analytic tool for exploring multiple levels of Arabidopsis thaliana data through a zoomable user interface. ePlant connects to several publicly available web services to download genome, proteome, interactome, transcriptome, and 3D molecular structure data for one or more genes or gene products of interest. Data are displayed with a set of visualization tools that are presented using a conceptual hierarchy from big to small, and many of the tools combine information from more than one data type. We describe the development of ePlant in this article and present several examples illustrating its integrative features for hypothesis generation. We also describe the process of deploying ePlant as an “app” on Araport. Building on readily available web services, the code for ePlant is freely available and can be adapted for research on other biological species. PMID:28808136

  5. Perceiving groups: The people perception of diversity and hierarchy.

    PubMed

    Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L

    2018-05-01

    The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. SIGNUM: A Matlab, TIN-based landscape evolution model

    NASA Astrophysics Data System (ADS)

    Refice, A.; Giachetta, E.; Capolongo, D.

    2012-08-01

    Several numerical landscape evolution models (LEMs) have been developed to date, and many are available as open source codes. Most are written in efficient programming languages such as Fortran or C, but often require additional code efforts to plug in to more user-friendly data analysis and/or visualization tools to ease interpretation and scientific insight. In this paper, we present an effort to port a common core of accepted physical principles governing landscape evolution directly into a high-level language and data analysis environment such as Matlab. SIGNUM (acronym for Simple Integrated Geomorphological Numerical Model) is an independent and self-contained Matlab, TIN-based landscape evolution model, built to simulate topography development at various space and time scales. SIGNUM is presently capable of simulating hillslope processes such as linear and nonlinear diffusion, fluvial incision into bedrock, spatially varying surface uplift which can be used to simulate changes in base level, thrust and faulting, as well as effects of climate changes. Although based on accepted and well-known processes and algorithms in its present version, it is built with a modular structure, which allows users to easily modify and upgrade the simulated physical processes to suit virtually any need. The code is conceived as an open-source project, and is thus an ideal tool for both research and didactic purposes, thanks to the high-level nature of the Matlab environment and its popularity among the scientific community. In this paper the simulation code is presented together with some simple examples of surface evolution, and guidelines for development of new modules and algorithms are proposed.
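
    For illustration only, the hillslope linear-diffusion process mentioned above can be sketched in a few lines on a regular grid (SIGNUM itself is Matlab- and TIN-based and couples diffusion with fluvial incision, uplift, and other processes); the sketch below assumes numpy and uses an explicit finite-difference update of dz/dt = kappa * laplacian(z):

      import numpy as np

      def diffuse(z, kappa=0.01, dx=1.0, dt=0.1, steps=1000):
          """Explicit linear hillslope diffusion on a periodic regular grid."""
          z = z.copy()
          for _ in range(steps):
              lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                     np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z) / dx ** 2
              z += dt * kappa * lap  # dt respects the explicit stability limit dt <= dx**2 / (4 * kappa)
          return z

      z0 = np.random.rand(100, 100)  # rough initial topography
      z1 = diffuse(z0)               # smoothed hillslopes after diffusion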

  7. Network analysis for the visualization and analysis of qualitative data.

    PubMed

    Pokorny, Jennifer J; Norman, Alex; Zanesco, Anthony P; Bauer-Wu, Susan; Sahdra, Baljinder K; Saron, Clifford D

    2018-03-01

    We present a novel manner in which to visualize the coding of qualitative data that enables representation and analysis of connections between codes using graph theory and network analysis. Network graphs are created from codes applied to a transcript or audio file using the code names and their chronological location. The resulting network is a representation of the coding data that characterizes the interrelations of codes. This approach enables quantification of qualitative codes using network analysis and facilitates examination of associations of network indices with other quantitative variables using common statistical procedures. Here, as a proof of concept, we applied this method to a set of interview transcripts that had been coded in 2 different ways and the resultant network graphs were examined. The creation of network graphs allows researchers an opportunity to view and share their qualitative data in an innovative way that may provide new insights and enhance transparency of the analytical process by which they reach their conclusions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
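
    A minimal sketch of the approach, assuming the networkx package and a hypothetical chronological list of code names (consecutive co-occurrence is used as the linking rule purely for illustration; the published method may link codes differently):

      import networkx as nx

      # Hypothetical chronological sequence of qualitative codes applied to one transcript
      codes = ["stress", "coping", "family", "coping", "stress", "work", "family"]

      # Link codes that occur consecutively; edge weights count repeated transitions
      G = nx.Graph()
      for a, b in zip(codes, codes[1:]):
          if G.has_edge(a, b):
              G[a][b]["weight"] += 1
          else:
              G.add_edge(a, b, weight=1)

      # Network indices that can then be related to other quantitative variables
      print(nx.degree_centrality(G))
      print(nx.density(G))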

  8. GeoNotebook: Browser based Interactive analysis and visualization workflow for very large climate and geospatial datasets

    NASA Astrophysics Data System (ADS)

    Ozturk, D.; Chaudhary, A.; Votava, P.; Kotfila, C.

    2016-12-01

    Jointly developed by Kitware and NASA Ames, GeoNotebook is an open source tool designed to give the maximum amount of flexibility to analysts, while dramatically simplifying the process of exploring geospatially indexed datasets. Packages like Fiona (backed by GDAL), Shapely, Descartes, Geopandas, and PySAL provide a stack of technologies for reading, transforming, and analyzing geospatial data. Combined with the Jupyter notebook and libraries like matplotlib/Basemap it is possible to generate detailed geospatial visualizations. Unfortunately, the visualizations generated are either static or do not perform well for very large datasets. Also, this setup requires a great deal of boilerplate code to create and maintain. Other extensions exist to remedy these problems, but they provide a separate map for each input cell and do not support map interactions that feed back into the Python environment. To support interactive data exploration and visualization on large datasets we have developed an extension to the Jupyter notebook that provides a single dynamic map that can be managed from the Python environment, and that can communicate back with a server which can perform operations like data subsetting on a cloud-based cluster.

  9. Phylo.io: Interactive Viewing and Comparison of Large Phylogenetic Trees on the Web.

    PubMed

    Robinson, Oscar; Dylus, David; Dessimoz, Christophe

    2016-08-01

    Phylogenetic trees are pervasively used to depict evolutionary relationships. Increasingly, researchers need to visualize large trees and compare multiple large trees inferred for the same set of taxa (reflecting uncertainty in the tree inference or genuine discordance among the loci analyzed). Existing tree visualization tools are however not well suited to these tasks. In particular, side-by-side comparison of trees can prove challenging beyond a few dozen taxa. Here, we introduce Phylo.io, a web application to visualize and compare phylogenetic trees side-by-side. Its distinctive features are: highlighting of similarities and differences between two trees, automatic identification of the best matching rooting and leaf order, scalability to large trees, high usability, multiplatform support via standard HTML5 implementation, and possibility to store and share visualizations. The tool can be freely accessed at http://phylo.io and can easily be embedded in other web servers. The code for the associated JavaScript library is available at https://github.com/DessimozLab/phylo-io under an MIT open source license. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  10. The implementation of thermal image visualization by HDL based on pseudo-color

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Zhang, JiangLing

    2004-11-01

    The pseudo-color method, which maps sampled data to intuitively perceived colors, is a powerful visualization technique. This paper describes a complete pseudo-color visualization system for thermal images, covering the underlying principle, the model, and its HDL (Hardware Description Language) implementation. Because the thermal image signal is modulated as video and reflects the temperature distribution of the measured object, the data are voluminous and must be processed in real time. The solution is as follows: First, a suitable system architecture is adopted that combines global pseudo-color visualization with accurate measurement of local areas of interest. Then, HDL pseudo-color algorithms implemented in an SoC (System on Chip) realize the system and ensure real-time performance. Finally, the key HDL algorithms for direct gray-level connection coding, proportional gray-level map coding, and enhanced gray-level map coding are presented, together with their simulation results. The HDL-based pseudo-color visualization of thermal images described in the paper has effective applications in electric power equipment testing and medical diagnosis.
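
    The gray-level-to-color mapping at the heart of such pseudo-color coding can be sketched in software with a simple lookup table (illustrative only; the paper implements the coding in HDL for real-time video, and the ramp functions below are an assumption, not the paper's exact map):

      import numpy as np

      def build_lut():
          """256-entry lookup table mapping 8-bit gray levels to RGB pseudo-colors."""
          g = np.arange(256)
          lut = np.zeros((256, 3), dtype=np.uint8)
          lut[:, 0] = np.clip(2 * g - 255, 0, 255)   # red ramps up over the hot half
          lut[:, 1] = 255 - np.abs(2 * g - 255)      # green peaks at mid-scale
          lut[:, 2] = np.clip(255 - 2 * g, 0, 255)   # blue dominates the cold half
          return lut

      gray = (np.random.rand(120, 160) * 255).astype(np.uint8)  # stand-in thermal frame
      rgb = build_lut()[gray]                                    # pseudo-colored image, shape (120, 160, 3)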

  11. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy from the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  12. The Astrophysics Source Code Library: An Update

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, R. J.; Shamir, L.; Teuben, P. J.

    2012-01-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL moved to a new location in 2010; it now holds over 300 codes and continues to grow. In 2011, the ASCL (http://asterisk.apod.com/viewforum.php?f=35) has added an average of 19 new codes per month; we encourage scientists to submit their codes for inclusion. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This presentation covers the history of the ASCL, examines its current state and benefits, describes the means of and requirements for including codes, and outlines its future plans.

  13. Authorship Attribution of Source Code

    ERIC Educational Resources Information Center

    Tennyson, Matthew F.

    2013-01-01

    Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…

  14. Influence of Interpretation Aids on Attentional Capture, Visual Processing, and Understanding of Front-of-Package Nutrition Labels.

    PubMed

    Antúnez, Lucía; Giménez, Ana; Maiche, Alejandro; Ares, Gastón

    2015-01-01

    To study the influence of 2 interpretational aids of front-of-package (FOP) nutrition labels (color code and text descriptors) on attentional capture and consumers' understanding of nutritional information. A full factorial design was used to assess the influence of color code and text descriptors using visual search and eye tracking. Ten trained assessors participated in the visual search study and 54 consumers completed the eye-tracking study. In the visual search study, assessors were asked to indicate whether there was a label high in fat within sets of mayonnaise labels with different FOP labels. In the eye-tracking study, assessors answered a set of questions about the nutritional content of labels. The researchers used logistic regression to evaluate the influence of interpretational aids of FOP nutrition labels on the percentage of correct answers. Analyses of variance were used to evaluate the influence of the studied variables on attentional measures and participants' response times. Response times were significantly higher for monochromatic FOP labels compared with color-coded ones (3,225 vs 964 ms; P < .001), which suggests that color codes increase attentional capture. The highest number and duration of fixations and visits were recorded on labels that did not include color codes or text descriptors (P < .05). The lowest percentage of incorrect answers was observed when the nutrient level was indicated using color code and text descriptors (P < .05). The combination of color codes and text descriptors seems to be the most effective alternative to increase attentional capture and understanding of nutritional information. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  15. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.

  16. SensA: web-based sensitivity analysis of SBML models.

    PubMed

    Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W

    2014-10-01

    SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license). © The Author 2014. Published by Oxford University Press.

  17. KEGGParser: parsing and editing KEGG pathway maps in Matlab.

    PubMed

    Arakelyan, Arsen; Nersisyan, Lilit

    2013-02-15

    KEGG pathway database is a collection of manually drawn pathway maps accompanied with KGML format files intended for use in automatic analysis. KGML files, however, do not contain the required information for complete reproduction of all the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser-a MATLAB based tool for KEGG pathway parsing, semiautomatic fixing, editing, visualization and analysis in MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.

  18. Visual communication with retinex coding.

    PubMed

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
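
    The difference-of-Gaussian bandpass stage of the small-signal model described above can be sketched as follows (assuming numpy and scipy; the full retinex-coding chain also includes the locally adaptive gain control and the Wiener restoration filter, which are omitted here):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog_bandpass(img, sigma_center=1.0, sigma_surround=3.0):
          """Center-surround (difference-of-Gaussian) filtering of an image."""
          center = gaussian_filter(img, sigma_center)
          surround = gaussian_filter(img, sigma_surround)
          return center - surround  # edge/contrast signal with slowly varying irradiance suppressed

      scene = np.random.rand(256, 256)  # stand-in for a natural scene with shading
      edges = dog_bandpass(scene)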

  19. Visual Communication with Retinex Coding

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  20. Schroedinger’s code: Source code availability and transparency in astrophysics

    NASA Astrophysics Data System (ADS)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October, 2017.

  1. Visual attention mitigates information loss in small- and large-scale neural codes

    PubMed Central

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  2. FastDart: a fast, accurate and friendly version of DART code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Taboada, H.

    2000-11-08

    A new enhanced, visual version of the DART code is presented. DART is a mechanistic-model-based code developed for the performance calculation and assessment of aluminum dispersion fuel. The major features of this new version are a new, time-saving calculation routine able to be run on a PC, a friendly visual input interface, and a plotting facility. This version, available for silicide and U-Mo fuels, adds faster execution and visual interfaces to the classical accuracy of DART models for fuel performance prediction. It is part of a collaboration agreement between ANL and CNEA in the area of Low Enriched Uranium Advanced Fuels, held by the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy.

  3. Federated Giovanni: A Distributed Web Service for Analysis and Visualization of Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2014-01-01

    The Geospatial Interactive Online Visualization and Analysis Interface (Giovanni) is a popular tool for users of the Goddard Earth Sciences Data and Information Services Center (GES DISC) and has been in use for over a decade. It provides a wide variety of algorithms and visualizations to explore large remote sensing datasets without having to download the data and without having to write readers and visualizers for it. Giovanni is now being extended to enable its capabilities at other data centers within the Earth Observing System Data and Information System (EOSDIS). This Federated Giovanni will allow four other data centers to add and maintain their data within Giovanni on behalf of their user community. Those data centers are the Physical Oceanography Distributed Active Archive Center (PO.DAAC), MODIS Adaptive Processing System (MODAPS), Ocean Biology Processing Group (OBPG), and Land Processes Distributed Active Archive Center (LP DAAC). Three tiers are supported: Tier 1 (GES DISC-hosted) gives the remote data center a data management interface to add and maintain data, which are provided through the Giovanni instance at the GES DISC. Tier 2 packages Giovanni up as a virtual machine for distribution to and deployment by the other data centers. Data variables are shared among data centers by sharing documents from the Solr database that underpins Giovanni's data management capabilities. However, each data center maintains their own instance of Giovanni, exposing the variables of most interest to their user community. Tier 3 is a Shared Source model, in which the data centers cooperate to extend the infrastructure by contributing source code.

  4. Modernized Approach for Generating Reproducible Heterogeneity Using Transmitted-Light for Flow Visualization Experiments

    NASA Astrophysics Data System (ADS)

    Jones, A. A.; Holt, R. M.

    2017-12-01

    Image capturing in flow experiments has been used for fluid mechanics research since the early 1970s. Interactions of fluid flow between the vadose zone and permanent water table are of great interest because this zone is responsible for all recharge waters, pollutant transport and irrigation efficiency for agriculture. Griffith, et al. (2011) developed an approach in which reproducible "geologically realistic" sand configurations are deposited in sand-filled experimental chambers for light-transmitted flow visualization experiments. This method creates reproducible, reverse-graded, layered (stratified) thin-slab sand chambers for point-source experiments visualizing multiphase flow through porous media. Reverse-graded stratification of sand chambers mimics many naturally occurring sedimentary deposits. Sand-filled chambers use transmitted light as a nonintrusive tool for measuring water saturation in two dimensions (2-D). Homogeneous and heterogeneous sand configurations can be produced to visualize the complex physics of the unsaturated zone. The experimental procedure developed by Griffith, et al. (2011) was designed using now outdated and obsolete equipment. We have modernized this approach with a new Parker Deadel linear actuator and programmed projects/code for multiple configurations. We have also updated the Roper CCD software and image processing software with the latest industry standards. Modernization of the transmitted-light source, robotic equipment, redesigned experimental chambers, and newly developed analytical procedures has greatly reduced the time and cost per experiment. We have verified the ability of the new equipment to generate reproducible heterogeneous sand-filled chambers and demonstrated the functionality of the new equipment and procedures by reproducing several gravity-driven fingering experiments conducted by Griffith (2008).

  5. JCoDA: a tool for detecting evolutionary selection.

    PubMed

    Steinway, Steven N; Dannenfelser, Ruth; Laucius, Christopher D; Hayes, James E; Nayak, Sudhir

    2010-05-27

    The incorporation of annotated sequence information from multiple related species in commonly used databases (Ensembl, Flybase, Saccharomyces Genome Database, Wormbase, etc.) has increased dramatically over the last few years. This influx of information has provided a considerable amount of raw material for evaluation of evolutionary relationships. To aid in the process, we have developed JCoDA (Java Codon Delimited Alignment) as a simple-to-use visualization tool for the detection of site specific and regional positive/negative evolutionary selection amongst homologous coding sequences. JCoDA accepts user-inputted unaligned or pre-aligned coding sequences, performs a codon-delimited alignment using ClustalW, and determines the dN/dS calculations using PAML (Phylogenetic Analysis Using Maximum Likelihood, yn00 and codeml) in order to identify regions and sites under evolutionary selection. The JCoDA package includes a graphical interface for Phylip (Phylogeny Inference Package) to generate phylogenetic trees, manages formatting of all required file types, and streamlines passage of information between underlying programs. The raw data are output to user configurable graphs with sliding window options for straightforward visualization of pairwise or gene family comparisons. Additionally, codon-delimited alignments are output in a variety of common formats and all dN/dS calculations can be output in comma-separated value (CSV) format for downstream analysis. To illustrate the types of analyses that are facilitated by JCoDA, we have taken advantage of the well studied sex determination pathway in nematodes as well as the extensive sequence information available to identify genes under positive selection, examples of regional positive selection, and differences in selection based on the role of genes in the sex determination pathway. JCoDA is a configurable, open source, user-friendly visualization tool for performing evolutionary analysis on homologous coding sequences. JCoDA can be used to rapidly screen for genes and regions of genes under selection using PAML. It can be freely downloaded at http://www.tcnj.edu/~nayaklab/jcoda.
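
    A sliding-window view of per-codon dN/dS values, similar in spirit to JCoDA's windowed plots, can be sketched as follows (assuming numpy and matplotlib; the values here are synthetic, whereas JCoDA obtains them from PAML):

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      dnds = rng.gamma(shape=2.0, scale=0.4, size=300)  # synthetic dN/dS ratio per codon

      window = 20
      smoothed = np.convolve(dnds, np.ones(window) / window, mode="valid")

      plt.plot(range(window // 2, window // 2 + len(smoothed)), smoothed)
      plt.axhline(1.0, linestyle="--")  # dN/dS > 1 suggests positive selection
      plt.xlabel("codon position")
      plt.ylabel("windowed dN/dS")
      plt.savefig("dnds_window.png")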

  6. JCoDA: a tool for detecting evolutionary selection

    PubMed Central

    2010-01-01

    Background The incorporation of annotated sequence information from multiple related species in commonly used databases (Ensembl, Flybase, Saccharomyces Genome Database, Wormbase, etc.) has increased dramatically over the last few years. This influx of information has provided a considerable amount of raw material for evaluation of evolutionary relationships. To aid in the process, we have developed JCoDA (Java Codon Delimited Alignment) as a simple-to-use visualization tool for the detection of site specific and regional positive/negative evolutionary selection amongst homologous coding sequences. Results JCoDA accepts user-inputted unaligned or pre-aligned coding sequences, performs a codon-delimited alignment using ClustalW, and determines the dN/dS calculations using PAML (Phylogenetic Analysis Using Maximum Likelihood, yn00 and codeml) in order to identify regions and sites under evolutionary selection. The JCoDA package includes a graphical interface for Phylip (Phylogeny Inference Package) to generate phylogenetic trees, manages formatting of all required file types, and streamlines passage of information between underlying programs. The raw data are output to user configurable graphs with sliding window options for straightforward visualization of pairwise or gene family comparisons. Additionally, codon-delimited alignments are output in a variety of common formats and all dN/dS calculations can be output in comma-separated value (CSV) format for downstream analysis. To illustrate the types of analyses that are facilitated by JCoDA, we have taken advantage of the well studied sex determination pathway in nematodes as well as the extensive sequence information available to identify genes under positive selection, examples of regional positive selection, and differences in selection based on the role of genes in the sex determination pathway. Conclusions JCoDA is a configurable, open source, user-friendly visualization tool for performing evolutionary analysis on homologous coding sequences. JCoDA can be used to rapidly screen for genes and regions of genes under selection using PAML. It can be freely downloaded at http://www.tcnj.edu/~nayaklab/jcoda. PMID:20507581

  7. Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization

    DTIC Science & Technology

    2009-01-01

    [Indexing excerpt, garbled during extraction.] The report applies Rate-Compatible Punctured Convolutional (RCPC) codes for channel coding: the source coding rate for H.264/AVC video compression is determined, and at the data link layer the RCPC channel coding rate is selected. The excerpt also cites J. Hagenauer's "Rate-compatible punctured convolutional codes (RCPC codes) and their applications."


  8. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  9. Representations of temporal information in short-term memory: Are they modality-specific?

    PubMed

    Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M

    2016-10-01

    Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression not only impairs short-term memory for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. ShelXle: a Qt graphical user interface for SHELXL.

    PubMed

    Hübschle, Christian B; Sheldrick, George M; Dittrich, Birger

    2011-12-01

    ShelXle is a graphical user interface for SHELXL [Sheldrick, G. M. (2008). Acta Cryst. A64, 112-122], currently the most widely used program for small-molecule structure refinement. It combines an editor with syntax highlighting for the SHELXL-associated .ins (input) and .res (output) files with an interactive graphical display for visualization of a three-dimensional structure including the electron density (F(o)) and difference density (F(o)-F(c)) maps. Special features of ShelXle include intuitive atom (re-)naming, a strongly coupled editor, structure visualization in various mono and stereo modes, and a novel way of displaying disorder extending over special positions. ShelXle is completely compatible with all features of SHELXL and is written entirely in C++ using the Qt4 and FFTW libraries. It is available at no cost for Windows, Linux and Mac-OS X and as source code.

  11. Geoinformatics in the public service: building a cyberinfrastructure across the geological surveys

    USGS Publications Warehouse

    Allison, M. Lee; Gundersen, Linda C.; Richard, Stephen M.; Keller, G. Randy; Baru, Chaitanya

    2011-01-01

    Advanced information technology infrastructure is increasingly being employed in the Earth sciences to provide researchers with efficient access to massive central databases and to integrate diversely formatted information from a variety of sources. These geoinformatics initiatives enable manipulation, modeling and visualization of data in a consistent way, and are helping to develop integrated Earth models at various scales, and from the near surface to the deep interior. This book uses a series of case studies to demonstrate computer and database use across the geosciences. Chapters are thematically grouped into sections that cover data collection and management; modeling and community computational codes; visualization and data representation; knowledge management and data integration; and web services and scientific workflows. Geoinformatics is a fascinating and accessible introduction to this emerging field for readers across the solid Earth sciences and an invaluable reference for researchers interested in initiating new cyberinfrastructure projects of their own.

  12. A conflict-based model of color categorical perception: evidence from a priming study.

    PubMed

    Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi

    2014-10-01

    Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.

  13. Neurovascular Network Explorer 1.0: a database of 2-photon single-vessel diameter measurements with MATLAB(®) graphical user interface.

    PubMed

    Sridhar, Vishnu B; Tian, Peifang; Dale, Anders M; Devor, Anna; Saisan, Payam A

    2014-01-01

    We present a database client software-Neurovascular Network Explorer 1.0 (NNE 1.0)-that uses MATLAB(®) based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication (Tian et al., 2010). These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak-normalization), and export of the results. Further, we provide NNE 1.0 source code. With this source code, the user can database their own experimental results, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of seamless and low-cost solution for sharing of experimental data by a regular size neuroscience laboratory and may serve as a general template, facilitating dissemination of biological results and accelerating data-driven modeling approaches.
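
    The record manipulations mentioned above (averaging and peak-normalization) are simple array operations. A minimal NumPy sketch of what such operations might look like on retrieved diameter time courses is shown below; the data and function names are illustrative and are not taken from the NNE 1.0 source code.

      # Sketch of the kind of record manipulation NNE 1.0 exposes in its GUI:
      # peak-normalizing single-vessel diameter time courses and averaging them.
      # Array shapes and names are assumptions for illustration.
      import numpy as np

      def peak_normalize(trace):
          """Scale a diameter time course so its peak (absolute) response is 1."""
          peak = np.max(np.abs(trace))
          return trace / peak if peak > 0 else trace

      def average_records(records):
          """Average a list of equal-length, peak-normalized time courses."""
          normalized = np.vstack([peak_normalize(r) for r in records])
          return normalized.mean(axis=0)

      if __name__ == "__main__":
          t = np.linspace(0, 10, 200)
          # Toy "vessel dilation" responses with different amplitudes.
          records = [a * np.exp(-((t - 3.0) ** 2) / 2.0) for a in (2.0, 5.0, 8.0)]
          mean_response = average_records(records)
          print("peak of averaged, normalized response:", mean_response.max())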

  14. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  15. Statistical regularities in art: Relations with visual coding and perception.

    PubMed

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  16. deepTools2: a next generation web server for deep-sequencing data analysis.

    PubMed

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continue to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Computer programming for generating visual stimuli.

    PubMed

    Bukhari, Farhan; Kurylo, Daniel D

    2008-02-01

    Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
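
    The generic program framework described above organizes stimulus generation around trial events, with response monitoring and reaction-time logging. The following stripped-down Python skeleton illustrates that control flow only; it uses console input in place of a calibrated display and response device, and all names are ours rather than the authors'.

      # Skeleton of the trial-based structure the record describes: each trial
      # shows a stimulus, waits for a response, and logs the reaction time.
      # Console input stands in for a real display/response device, so this is
      # only an illustration of the control flow, not a vision-grade stimulus engine.
      import time
      import random

      def run_trial(stimulus):
          t0 = time.perf_counter()
          response = input(f"Stimulus: {stimulus}  (press Enter when detected) ")
          rt = time.perf_counter() - t0
          return {"stimulus": stimulus, "response": response, "rt_s": rt}

      def run_session(n_trials=3):
          stimuli = ["left grating", "right grating", "blank"]
          return [run_trial(random.choice(stimuli)) for _ in range(n_trials)]

      if __name__ == "__main__":
          for trial in run_session():
              print(f"{trial['stimulus']}: RT = {trial['rt_s'] * 1000:.0f} ms")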

  18. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
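
    Evaluating code accuracy for a particular application ultimately means comparing assigned codes against a reference standard such as chart review. The short Python sketch below illustrates one such check (sensitivity and positive predictive value for a single code) on invented records; it is an illustration of the idea, not the evaluation tool described in the article.

      # Illustrative check of ICD code accuracy against a chart-review "gold
      # standard", in the spirit of the evaluation tool the record describes.
      # The patient records and code assignments below are invented.

      def code_accuracy(assigned, gold, target_code):
          """Sensitivity and positive predictive value for one ICD code."""
          tp = sum(1 for pid in gold if target_code in gold[pid] and target_code in assigned.get(pid, set()))
          fn = sum(1 for pid in gold if target_code in gold[pid] and target_code not in assigned.get(pid, set()))
          fp = sum(1 for pid in assigned if target_code in assigned[pid] and target_code not in gold.get(pid, set()))
          sensitivity = tp / (tp + fn) if tp + fn else float("nan")
          ppv = tp / (tp + fp) if tp + fp else float("nan")
          return sensitivity, ppv

      if __name__ == "__main__":
          assigned = {"p1": {"I50.9"}, "p2": {"I50.9", "E11.9"}, "p3": {"J18.9"}}
          gold = {"p1": {"I50.9"}, "p2": {"E11.9"}, "p3": {"I50.9"}}
          sens, ppv = code_accuracy(assigned, gold, "I50.9")
          print(f"I50.9: sensitivity={sens:.2f}, PPV={ppv:.2f}")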

  19. A sublethal dose of a neonicotinoid insecticide disrupts visual processing and collision avoidance behaviour in Locusta migratoria.

    PubMed

    Parkinson, Rachel H; Little, Jacelyn M; Gray, John R

    2017-04-20

    Neonicotinoids are known to affect insect navigation and vision, however the mechanisms of these effects are not fully understood. A visual motion sensitive neuron in the locust, the Descending Contralateral Movement Detector (DCMD), integrates visual information and is involved in eliciting escape behaviours. The DCMD receives coded input from the compound eyes and monosynaptically excites motorneurons involved in flight and jumping. We show that imidacloprid (IMD) impairs neural responses to visual stimuli at sublethal concentrations, and these effects are sustained two and twenty-four hours after treatment. Most significantly, IMD disrupted bursting, a coding property important for motion detection. Specifically, IMD reduced the DCMD peak firing rate within bursts at ecologically relevant doses of 10 ng/g (ng IMD per g locust body weight). Effects on DCMD firing translate to deficits in collision avoidance behaviours: exposure to 10 ng/g IMD attenuates escape manoeuvers while 100 ng/g IMD prevents the ability to fly and walk. We show that, at ecologically-relevant doses, IMD causes significant and lasting impairment of an important pathway involved with visual sensory coding and escape behaviours. These results show, for the first time, that a neonicotinoid pesticide directly impairs an important, taxonomically conserved, motion-sensitive visual network.

  20. Advantages and Disadvantages in Image Processing with Free Software in Radiology.

    PubMed

    Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan

    2018-01-15

    Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and the source code is freely available, and therefore it can be easily obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, this last group of free applications have limitations in its use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices. These radiological devices are basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost which always depends on the software provider and is always subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We will compare the features of the VITREA2® and AW VolumeShare 5® radiology workstation with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.

  1. Semantic and phonological coding in poor and normal readers.

    PubMed

    Vellutino, F R; Scanlon, D M; Spearing, D

    1995-02-01

    Three studies were conducted evaluating semantic and phonological coding deficits as alternative explanations of reading disability. In the first study, poor and normal readers in second and sixth grade were compared on various tests evaluating semantic development as well as on tests evaluating rapid naming and pseudoword decoding as independent measures of phonological coding ability. In a second study, the same subjects were given verbal memory and visual-verbal learning tasks using high and low meaning words as verbal stimuli and Chinese ideographs as visual stimuli. On the semantic tasks, poor readers performed below the level of the normal readers only at the sixth grade level, but, on the rapid naming and pseudoword learning tasks, they performed below the normal readers at the second as well as at the sixth grade level. On both the verbal memory and visual-verbal learning tasks, performance in poor readers approximated that of normal readers when the word stimuli were high in meaning but not when they were low in meaning. These patterns were essentially replicated in a third study that used some of the same semantic and phonological measures used in the first experiment, and verbal memory and visual-verbal learning tasks that employed word lists and visual stimuli (novel alphabetic characters) that more closely approximated those used in learning to read. It was concluded that semantic coding deficits are an unlikely cause of reading difficulties in most poor readers at the beginning stages of reading skills acquisition, but accrue as a consequence of prolonged reading difficulties in older readers. It was also concluded that phonological coding deficits are a probable cause of reading difficulties in most poor readers.

  2. Pulse sequence programming in a dynamic visual environment: SequenceTree.

    PubMed

    Magland, Jeremy F; Li, Cheng; Langham, Michael C; Wehrli, Felix W

    2016-01-01

    To describe SequenceTree, an open source, integrated software environment for implementing MRI pulse sequences and, ideally, exporting them to actual MRI scanners. The software is a user-friendly alternative to vendor-supplied pulse sequence design and editing tools and is suited for programmers and nonprogrammers alike. The integrated user interface was programmed using the Qt4/C++ toolkit. As parameters and code are modified, the pulse sequence diagram is automatically updated within the user interface. Several aspects of pulse programming are handled automatically, allowing users to focus on higher-level aspects of sequence design. Sequences can be simulated using a built-in Bloch equation solver and then exported for use on a Siemens MRI scanner. Ideally, other types of scanners will be supported in the future. SequenceTree has been used for 8 years in our laboratory and elsewhere and has contributed to more than 50 peer-reviewed publications in areas such as cardiovascular imaging, solid state and nonproton NMR, MR elastography, and high-resolution structural imaging. SequenceTree is an innovative, open source, visual pulse sequence environment for MRI combining simplicity with flexibility and is ideal both for advanced users and users with limited programming experience. © 2015 Wiley Periodicals, Inc.

  3. QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks.

    PubMed

    Thibodeau, Asa; Márquez, Eladio J; Luo, Oscar; Ruan, Yijun; Menghi, Francesca; Shin, Dong-Guk; Stitzel, Michael L; Vera-Licona, Paola; Ucar, Duygu

    2016-06-01

    Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. QuIN's web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.
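
    Treating chromatin interaction data as a network makes standard graph measures available for prioritizing loci. The sketch below uses the networkx Python library to build a tiny interaction graph and rank anchors by betweenness centrality; QuIN itself is implemented in Java and JavaScript, and the coordinates and measure chosen here are illustrative only.

      # Sketch of the network view QuIN takes of chromatin interaction data:
      # each interaction anchor is a node, each ChIA-PET/Hi-C interaction an edge,
      # and centrality measures flag potentially important regulatory regions.
      # The anchor coordinates below are made up for the example.
      import networkx as nx

      interactions = [
          ("chr1:1000-2000", "chr1:50000-51000"),
          ("chr1:1000-2000", "chr1:90000-91000"),
          ("chr1:50000-51000", "chr1:90000-91000"),
          ("chr2:500-1500", "chr2:70000-71000"),
      ]

      g = nx.Graph()
      g.add_edges_from(interactions)

      # Rank anchors by betweenness centrality as a simple prioritization measure.
      for anchor, score in sorted(nx.betweenness_centrality(g).items(),
                                  key=lambda kv: kv[1], reverse=True):
          print(f"{anchor}\t{score:.3f}")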

  4. Four Year-Olds Use Norm-Based Coding for Face Identity

    ERIC Educational Resources Information Center

    Jeffery, Linda; Read, Ainsley; Rhodes, Gillian

    2013-01-01

    Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged…

  5. A Fuzzy-Based Approach for Sensing, Coding and Transmission Configuration of Visual Sensors in Smart City Applications

    PubMed Central

    Costa, Daniel G.; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian

    2017-01-01

    The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field. PMID:28067777

  6. A Fuzzy-Based Approach for Sensing, Coding and Transmission Configuration of Visual Sensors in Smart City Applications.

    PubMed

    Costa, Daniel G; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian

    2017-01-05

    The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field.
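
    A fuzzy configuration scheme of this kind maps reference parameters onto sensing, coding, and transmission settings through membership functions and rules. The toy Python sketch below maps two invented reference parameters (remaining energy and scene activity) to a frame rate with triangular memberships and a weighted-average defuzzification; the rule base is ours and is not the one proposed in the paper.

      # Toy fuzzy-style controller in the spirit of the records above: map two
      # reference parameters (remaining energy, scene activity) to a sensing
      # frame rate. The membership functions and rules are invented for
      # illustration and are not the rule base proposed by the authors.

      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def frame_rate(energy, activity):
          """energy and activity in [0, 1]; returns frames per second."""
          low_e, high_e = tri(energy, -0.1, 0.0, 0.6), tri(energy, 0.4, 1.0, 1.1)
          low_a, high_a = tri(activity, -0.1, 0.0, 0.6), tri(activity, 0.4, 1.0, 1.1)
          # Rules: high activity + high energy -> fast; low energy or low activity -> slow.
          rules = [
              (min(high_a, high_e), 30.0),   # sense aggressively
              (min(high_a, low_e), 10.0),    # important scene, but save energy
              (low_a, 2.0),                  # quiet scene
          ]
          num = sum(w * fps for w, fps in rules)
          den = sum(w for w, _ in rules)
          return num / den if den else 2.0

      if __name__ == "__main__":
          print(f"{frame_rate(0.9, 0.8):.1f} fps for a busy scene on a full battery")
          print(f"{frame_rate(0.2, 0.8):.1f} fps for a busy scene on a low battery")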

  7. PRISM: An open source framework for the interactive design of GPU volume rendering shaders.

    PubMed

    Drouin, Simon; Collins, D Louis

    2018-01-01

    Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have none or little experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.

  8. PRISM: An open source framework for the interactive design of GPU volume rendering shaders

    PubMed Central

    Collins, D. Louis

    2018-01-01

    Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have none or little experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel. PMID:29534069
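
    The replaceable core of a ray-casting volume renderer is the per-ray integration loop that composites samples front to back. The NumPy sketch below shows that loop on the CPU with an invented transfer function; in PRISM the corresponding step runs on the GPU inside user-editable shader blocks, so this is only a conceptual illustration.

      # CPU sketch of the front-to-back ray integration step that a PRISM-style
      # shader block replaces on the GPU. Transfer function and samples here are
      # invented; the real framework performs this per ray in shader code.
      import numpy as np

      def integrate_ray(samples, transfer_function, step_opacity_scale=1.0):
          """Composite scalar samples along one ray, front to back."""
          color = np.zeros(3)
          alpha = 0.0
          for s in samples:
              rgb, a = transfer_function(s)
              a *= step_opacity_scale
              color += (1.0 - alpha) * a * np.asarray(rgb)
              alpha += (1.0 - alpha) * a
              if alpha >= 0.99:          # early ray termination
                  break
          return color, alpha

      def simple_tf(scalar):
          """Map a scalar in [0, 1] to (rgb, opacity); a stand-in transfer function."""
          return (scalar, 0.2, 1.0 - scalar), 0.05 + 0.3 * scalar

      if __name__ == "__main__":
          ray_samples = np.clip(np.sin(np.linspace(0, np.pi, 64)), 0, 1)
          rgb, a = integrate_ray(ray_samples, simple_tf)
          print("composited colour:", np.round(rgb, 3), "opacity:", round(a, 3))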

  9. Open source tracking and analysis of adult Drosophila locomotion in Buridan's paradigm with and without visual targets.

    PubMed

    Colomb, Julien; Reiter, Lutz; Blaszkiewicz, Jedrzej; Wessnitzer, Jan; Brembs, Bjoern

    2012-01-01

    Insects have been among the most widely used model systems for studying the control of locomotion by nervous systems. In Drosophila, we implemented a simple test for locomotion: in Buridan's paradigm, flies walk back and forth between two inaccessible visual targets [1]. Until today, the lack of easily accessible tools for tracking the fly position and analyzing its trajectory has probably contributed to the slow acceptance of Buridan's paradigm. We present here a package of open source software designed to track a single animal walking in a homogenous environment (Buritrack) and to analyze its trajectory. The Centroid Trajectory Analysis (CeTrAn) software is coded in the open source statistics project R. It extracts eleven metrics and includes correlation analyses and a Principal Components Analysis (PCA). It was designed to be easily customized to personal requirements. In combination with inexpensive hardware, these tools can readily be used for teaching and research purposes. We demonstrate the capabilities of our package by measuring the locomotor behavior of adult Drosophila melanogaster (whose wings were clipped), either in the presence or in the absence of visual targets, and comparing the latter to different computer-generated data. The analysis of the trajectories confirms that flies are centrophobic and shows that inaccessible visual targets can alter the orientation of the flies without changing their overall patterns of activity. Using computer generated data, the analysis software was tested, and chance values for some metrics (as well as chance value for their correlation) were set. Our results prompt the hypothesis that fixation behavior is observed only if negative phototaxis can overcome the propensity of the flies to avoid the center of the platform. Together with our companion paper, we provide new tools to promote Open Science as well as the collection and analysis of digital behavioral data.
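
    Trajectory analysis of this kind boils down to computing scalar metrics from a sequence of (x, y) positions. The Python sketch below computes two such metrics, distance walked and a simple centrophobism index, on an invented path; CeTrAn itself is written in R and uses its own published metric definitions, which this sketch does not reproduce.

      # Two trajectory metrics in the spirit of CeTrAn (which is written in R):
      # total distance walked and a simple "centrophobism" index, i.e. how much
      # time the fly spends near the rim rather than the centre of the platform.
      # Metric definitions here are simplified stand-ins, not the published ones.
      import math

      def total_distance(xy):
          return sum(math.dist(a, b) for a, b in zip(xy, xy[1:]))

      def centrophobism(xy, platform_radius):
          """Fraction of samples in the outer half of the arena, minus the inner half."""
          outer = sum(1 for x, y in xy if math.hypot(x, y) > platform_radius / 2)
          return (2 * outer - len(xy)) / len(xy)   # +1 = always at rim, -1 = always central

      if __name__ == "__main__":
          # Toy walk between two targets placed on opposite sides of the arena.
          path = [(math.cos(t) * 40, 5 * math.sin(3 * t)) for t in
                  [i * 0.05 for i in range(200)]]
          print(f"distance walked: {total_distance(path):.1f} mm")
          print(f"centrophobism index: {centrophobism(path, 60):.2f}")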

  10. Operational rate-distortion performance for joint source and channel coding of images.

    PubMed

    Ruf, M J; Modestino, J W

    1999-01-01

    This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
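
    Operationally, finding the optimum tradeoff amounts to searching the discrete set of admissible (source rate, channel code rate) pairs under the bandwidth constraint and keeping the pair with the lowest expected end-to-end distortion. The toy Python search below illustrates that procedure with placeholder distortion and residual-loss models, which are not the measured curves from the paper.

      # Toy version of the operational rate-distortion search described above:
      # for a fixed channel bit budget, try each (source rate, channel code rate)
      # pair and keep the one with the lowest expected distortion. The distortion
      # and loss models below are placeholders, not the paper's measurements.
      import math

      TOTAL_BUDGET = 1.0                            # channel bits per pixel available
      CHANNEL_RATES = [1/3, 1/2, 2/3, 3/4, 1.0]     # RCPC-style code rates

      def source_distortion(bits_per_pixel):
          # Simple exponential rate-distortion placeholder D(R) = e^{-2R}.
          return math.exp(-2.0 * bits_per_pixel)

      def residual_loss(code_rate):
          # Stronger (lower-rate) codes leave less residual channel distortion.
          return 0.5 * code_rate ** 4

      best = None
      for r_c in CHANNEL_RATES:
          source_bpp = TOTAL_BUDGET * r_c           # bits left for the source coder
          d = source_distortion(source_bpp) + residual_loss(r_c)
          if best is None or d < best[0]:
              best = (d, source_bpp, r_c)

      d, bpp, r_c = best
      print(f"best split: source {bpp:.2f} bpp with rate-{r_c:.2f} channel code "
            f"(expected distortion {d:.3f})")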

  11. Visual attention mitigates information loss in small- and large-scale neural codes.

    PubMed

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Simulator platform for fast reactor operation and safety technology demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, R. B.; Park, Y. S.; Grandy, C.

    2012-07-30

    A simulator platform for visualization and demonstration of innovative concepts in fast reactor technology is described. The objective is to make more accessible the workings of fast reactor technology innovations and to do so in a human factors environment that uses state-of-the-art visualization technologies. In this work the computer codes in use at Argonne National Laboratory (ANL) for the design of fast reactor systems are being integrated to run on this platform. This includes linking reactor systems codes with mechanical structures codes and using advanced graphics to depict the thermo-hydraulic-structure interactions that give rise to an inherently safe response to upsets. It also includes visualization of mechanical systems operation including advanced concepts that make use of robotics for operations, in-service inspection, and maintenance.

  13. 47 CFR 11.61 - Tests of EAS procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... EAS header codes, Attention Signal, Test Script and EOM code. (i) Tests in odd numbered months shall... substitute for a monthly test, activation must include transmission of the EAS header codes, Attention Signal, emergency message and EOM code and comply with the visual message requirements in § 11.51. To substitute for...

  14. Short-term retention of pictures and words: evidence for dual coding systems.

    PubMed

    Pellegrino, J W; Siegel, A W; Dhawan, M

    1975-03-01

    The recall of picture and word triads was examined in three experiments that manipulated the type of distraction in a Brown-Peterson short-term retention task. In all three experiments recall of pictures was superior to words under auditory distraction conditions. Visual distraction produced high performance levels with both types of stimuli, whereas combined auditory and visual distraction significantly reduced picture recall without further affecting word recall. The results were interpreted in terms of the dual coding hypothesis and indicated that pictures are encoded into separate visual and acoustic processing systems while words are primarily acoustically encoded.

  15. Experimental Validation of an Ion Beam Optics Code with a Visualized Ion Thruster

    NASA Astrophysics Data System (ADS)

    Nakayama, Yoshinori; Nakano, Masakatsu

    For validation of an ion beam optics code, the behavior of ion beam optics was experimentally observed and evaluated with a two-dimensional visualized ion thruster (VIT). Since the observed beam focus positions, sheath positions and measured ion beam currents were in good agreement with the numerical results, it was confirmed that the numerical model of this code was appropriate. In addition, it was also confirmed that the beam focus position moved along the center axis of the grid hole according to the applied grid potentials, which differs from the conventional understanding/assumption. The VIT operations may be useful not only for the validation of ion beam optics codes but also for the fundamental and intuitive understanding of the Child Law Sheath theory.

  16. Effects of isoflurane anesthesia on ensemble patterns of Ca2+ activity in mouse v1: reduced direction selectivity independent of increased correlations in cellular activity.

    PubMed

    Goltstein, Pieter M; Montijn, Jorrit S; Pennartz, Cyriel M A

    2015-01-01

    Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to 'break' the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity.

  17. Effects of Isoflurane Anesthesia on Ensemble Patterns of Ca2+ Activity in Mouse V1: Reduced Direction Selectivity Independent of Increased Correlations in Cellular Activity

    PubMed Central

    Goltstein, Pieter M.; Montijn, Jorrit S.; Pennartz, Cyriel M. A.

    2015-01-01

    Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to ‘break’ the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity. PMID:25706867

  18. Image gathering and coding for digital restoration: Information efficiency and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar

    1989-01-01

    Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.

  19. Continuation of research into language concepts for the mission support environment: Source code

    NASA Technical Reports Server (NTRS)

    Barton, Timothy J.; Ratner, Jeremiah M.

    1991-01-01

    Research into language concepts for the Mission Control Center is presented. A computer code for source codes is presented. The file contains the routines which allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory. The build process places as much code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.

  20. Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells

    PubMed Central

    Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz

    2014-01-01

    The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world. PMID:24810948
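
    Linear SFA itself is compact: whiten the input signal, then take the directions along which the whitened signal changes most slowly, i.e. the eigenvectors of the temporal-difference covariance with the smallest eigenvalues. The NumPy sketch below implements that core on toy data; the published model additionally uses nonlinear expansions and simulated retinal waves, which are not reproduced here.

      # Minimal linear Slow Feature Analysis: whiten the signal, then take the
      # directions in which the whitened signal changes most slowly over time
      # (smallest eigenvalues of the covariance of temporal differences).
      # Toy data only; not the retinal-wave model of the record above.
      import numpy as np

      def linear_sfa(x, n_components=1):
          x = x - x.mean(axis=0)
          # Whitening via eigendecomposition of the covariance.
          evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
          whitener = evecs / np.sqrt(evals)
          z = x @ whitener
          # Slowness: smallest eigenvalues of the temporal-difference covariance.
          dz = np.diff(z, axis=0)
          _, d_evecs = np.linalg.eigh(np.cov(dz, rowvar=False))
          w = whitener @ d_evecs[:, :n_components]
          return x @ w                     # slow feature time course(s)

      if __name__ == "__main__":
          t = np.linspace(0, 20, 2000)
          slow, fast = np.sin(0.5 * t), np.sin(11.0 * t)
          rng = np.random.default_rng(0)
          mix = np.column_stack([slow + fast, slow - 0.5 * fast])
          mix += 0.01 * rng.standard_normal(mix.shape)
          y = linear_sfa(mix)[:, 0]
          corr = np.corrcoef(y, slow)[0, 1]
          print(f"|correlation| between slowest feature and the slow source: {abs(corr):.2f}")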

  1. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that is just being addressed in recent literature since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity in optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24 bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (for each color plane or monochrome 0.16 bpp, CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ and addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good quality, reconstructed images. Our recent study also reveals that for similar visual quality, the RGB color space requires fewer bits/pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reduction.
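
    The vector quantization stage underlying such coders replaces each small image block with the index of its nearest codebook vector. The NumPy sketch below runs a plain k-means codebook over 2x2 blocks of an invented image and reports the resulting bits per pixel; it illustrates ordinary VQ only and is not the adaptive fuzzy AFLC-VQ of the paper.

      # Plain vector quantization of 2x2 image blocks with a small k-means
      # codebook, to illustrate the VQ stage the record builds on. AFLC-VQ itself
      # is an adaptive fuzzy-clustering quantizer; this toy example is not it.
      import numpy as np

      def kmeans_codebook(vectors, k, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
          for _ in range(iters):
              d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              for j in range(k):
                  members = vectors[labels == j]
                  if len(members):
                      codebook[j] = members.mean(axis=0)
          return codebook, labels

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          image = rng.integers(0, 256, size=(64, 64)).astype(float)
          blocks = image.reshape(32, 2, 32, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
          codebook, labels = kmeans_codebook(blocks, k=16)
          bits = len(labels) * 4                    # 16 codewords -> 4 bits per block
          print(f"{bits / image.size:.2f} bits per pixel after VQ (vs. 8 bpp raw)")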

  2. Video transmission on ATM networks. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
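
    A dual leaky bucket polices traffic with two buckets checked in tandem, typically one for the peak rate with a small tolerance and one for the sustainable rate with a larger burst allowance; a cell conforms only if both accept it. The Python sketch below illustrates that mechanism with invented parameters, not those derived in the thesis.

      # Sketch of the dual leaky bucket policing idea from the record: one bucket
      # checks the peak cell rate, the second the sustainable rate with a burst
      # allowance, and a cell conforms only if it passes both. Parameters are
      # illustrative, not the thesis' values.

      class LeakyBucket:
          def __init__(self, drain_rate, depth):
              self.drain_rate = drain_rate      # units drained per unit time
              self.depth = depth                # maximum bucket content
              self.level = 0.0
              self.last_t = 0.0

          def conforms(self, t, size=1.0):
              """Offer a cell of `size` at time t; return True if it conforms."""
              self.level = max(0.0, self.level - (t - self.last_t) * self.drain_rate)
              self.last_t = t
              if self.level + size > self.depth:
                  return False                  # non-conforming: tag or drop the cell
              self.level += size
              return True

      def police(cell_times, peak_bucket, sustained_bucket):
          return [peak_bucket.conforms(t) and sustained_bucket.conforms(t)
                  for t in cell_times]

      if __name__ == "__main__":
          # A burst of back-to-back cells followed by a well-spaced tail.
          times = [0.0, 0.1, 0.2, 0.3, 0.4, 2.0, 4.0, 6.0]
          verdicts = police(times, LeakyBucket(drain_rate=2.0, depth=1.5),
                            LeakyBucket(drain_rate=0.5, depth=3.0))
          for t, ok in zip(times, verdicts):
              print(f"t={t:.1f}s cell {'conforms' if ok else 'violates contract'}")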

  3. The role of visual imagery in the retention of information from sentences.

    PubMed

    Drose, G S; Allen, G L

    1994-01-01

    We conducted two experiments to evaluate a multiple-code model for sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task to have had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.

  4. Visual communication - Information and fidelity. [of images

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1993-01-01

    This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.

  5. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  6. A Theory of False Cognitive Expectancies in Airline Pilots

    NASA Astrophysics Data System (ADS)

    Cortes, Antonio I.

    The Theory of False Cognitive Expectancies was developed by studying high reliability flight operations. Airline pilots depend extensively on cognitive expectancies to perceive, understand, and predict actions and events. Out of 1,363 incident reports submitted by airline pilots to the National Aeronautics and Space Administration Aviation Safety Reporting System over a year's time, 110 reports were found to contain evidence of 127 false cognitive expectancies in pilots. A comprehensive taxonomy was developed with six categories of interest. The dataset of 127 false expectancies was used to initially code tentative taxon values for each category. Intermediate coding through constant comparative analysis completed the taxonomy. The taxonomy was used for the advanced coding of chronological context-dependent visualizations of expectancy factors, known as strands, which depict the major factors in the creation and propagation of each expectancy. Strands were mapped into common networks to detect highly represented expectancy processes. Theoretical integration established 11 sources of false expectancies, the most common expectancy errors, and those conspicuous factors worthy of future study. The most prevalent source of false cognitive expectancies within the dataset was determined to be unconscious individual modeling based on past events. Integrative analyses also revealed relationships between expectancies and flight deck automation, unresolved discrepancies, and levels of situation awareness. Particularly noteworthy were the findings that false expectancies can combine in three possible permutations to diminish situation awareness and examples of how false expectancies can be unwittingly transmitted from one person to another. The theory resulting from this research can enhance the error coding process used during aircraft line oriented safety audits, lays the foundation for developing expectancy management training programs, and will allow researchers to proffer hypotheses for human testing using flight simulators.

  7. MEG and EEG data analysis with MNE-Python.

    PubMed

    Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti

    2013-12-26

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne.
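
    As a concrete illustration of the script-based pipelines mentioned above, the following is a minimal, hypothetical sketch using MNE-Python's bundled sample dataset; the filter band, trigger channel name, and event id reflect that dataset and are assumptions rather than a canonical recipe.

    ```python
    import os
    import mne

    # MNE's bundled sample dataset; downloaded on first use.
    sample_path = str(mne.datasets.sample.data_path())
    raw_fname = os.path.join(sample_path, "MEG", "sample", "sample_audvis_raw.fif")

    raw = mne.io.read_raw_fif(raw_fname, preload=True)     # load raw MEG/EEG data
    raw.filter(1.0, 40.0)                                   # band-pass preprocessing
    events = mne.find_events(raw, stim_channel="STI 014")   # trigger channel in the sample data

    # Epoch around one event type (event id 1 is an assumption) and average to an evoked response.
    epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5, baseline=(None, 0))
    evoked = epochs.average()
    evoked.plot()   # visualize the evoked response with matplotlib
    ```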

  8. MEG and EEG data analysis with MNE-Python

    PubMed Central

    Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A.; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne. PMID:24431986

  9. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there is cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.

  10. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.
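
    Since the system builds on LAMMPS, a minimal sketch of driving LAMMPS from Python is shown below; it assumes a LAMMPS build with the Python module enabled, and the input script name is a placeholder rather than part of the authors' environment.

    ```python
    # Hedged sketch: driving LAMMPS from Python via its ctypes wrapper.
    from lammps import lammps

    lmp = lammps()                 # create a LAMMPS instance in this process
    lmp.file("in.nanospring")      # run an existing input script (placeholder name)
    lmp.command("run 1000")        # or issue further commands one at a time
    natoms = lmp.get_natoms()
    print(f"simulated {natoms} atoms")
    lmp.close()
    ```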

  11. ePlant: Visualizing and Exploring Multiple Levels of Data for Hypothesis Generation in Plant Biology.

    PubMed

    Waese, Jamie; Fan, Jim; Pasha, Asher; Yu, Hans; Fucile, Geoffrey; Shi, Ruian; Cumming, Matthew; Kelley, Lawrence A; Sternberg, Michael J; Krishnakumar, Vivek; Ferlanti, Erik; Miller, Jason; Town, Chris; Stuerzlinger, Wolfgang; Provart, Nicholas J

    2017-08-01

    A big challenge in current systems biology research arises when different types of data must be accessed from separate sources and visualized using separate tools. The high cognitive load required to navigate such a workflow is detrimental to hypothesis generation. Accordingly, there is a need for a robust research platform that incorporates all data and provides integrated search, analysis, and visualization features through a single portal. Here, we present ePlant (http://bar.utoronto.ca/eplant), a visual analytic tool for exploring multiple levels of Arabidopsis thaliana data through a zoomable user interface. ePlant connects to several publicly available web services to download genome, proteome, interactome, transcriptome, and 3D molecular structure data for one or more genes or gene products of interest. Data are displayed with a set of visualization tools that are presented using a conceptual hierarchy from big to small, and many of the tools combine information from more than one data type. We describe the development of ePlant in this article and present several examples illustrating its integrative features for hypothesis generation. We also describe the process of deploying ePlant as an "app" on Araport. Building on readily available web services, the code for ePlant is freely available and can be adapted for research on other biological species. © 2017 American Society of Plant Biologists. All rights reserved.

  12. Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis

    PubMed Central

    Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.

    2014-01-01

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300

  13. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components including specific visualization techniques and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
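
    As a toy illustration of the basic visualization step described above, transforming source data into images, the following sketch renders a synthetic 2D velocity field as streamlines with matplotlib; it is not drawn from the paper.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic vortex-like velocity field standing in for CFD source data.
    y, x = np.mgrid[-2:2:80j, -2:2:80j]
    u = -y / (x**2 + y**2 + 0.1)
    v = x / (x**2 + y**2 + 0.1)
    speed = np.sqrt(u**2 + v**2)

    # The visualization step: turn the field into an image via streamlines.
    strm = plt.streamplot(x, y, u, v, color=speed, cmap="viridis", density=1.2)
    plt.colorbar(strm.lines, label="speed")
    plt.title("Streamlines of a synthetic 2D flow")
    plt.savefig("flow_streamlines.png", dpi=150)
    ```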

  14. Neural Substrates of Visual Spatial Coding and Visual Feedback Control for Hand Movements in Allocentric and Target-Directed Tasks

    PubMed Central

    Thaler, Lore; Goodale, Melvyn A.

    2011-01-01

    Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. PMID:21941474

  15. Remembering Plurals: Unit of Coding and Form of Coding during Serial Recall.

    ERIC Educational Resources Information Center

    Van Der Molen, Hugo; Morton, John

    1979-01-01

    Adult females recalled lists of six words, including some plural nouns, presented visually in sequence. A frequent error was to detach the plural from its root. This supports a morpheme-based as opposed to a unitary word code. Evidence for a primarily phonological coding of the plural morpheme was obtained. (Author/RD)

  16. Orthographic Coding: Brain Activation for Letters, Symbols, and Digits.

    PubMed

    Carreiras, Manuel; Quiñones, Ileana; Hernández-Cabrera, Juan Andrés; Duñabeitia, Jon Andoni

    2015-12-01

    The present experiment investigates the input coding mechanisms of 3 common printed characters: letters, numbers, and symbols. Despite research in this area, it is yet unclear whether the identity of these 3 elements is processed through the same or different brain pathways. In addition, some computational models propose that the position-in-string coding of these elements responds to general flexible mechanisms of the visual system that are not character-specific, whereas others suggest that the position coding of letters responds to specific processes that are different from those that guide the position-in-string assignment of other types of visual objects. Here, in an fMRI study, we manipulated character position and character identity through the transposition or substitution of 2 internal elements within strings of 4 elements. Participants were presented with 2 consecutive visual strings and asked to decide whether they were the same or different. The results showed: 1) that some brain areas responded more to letters than to numbers and vice versa, suggesting that processing may follow different brain pathways; 2) that the left parietal cortex is involved in letter identity, and critically in letter position coding, specifically contributing to the early stages of the reading process; and that 3) a stimulus-specific mechanism for letter position coding is operating during orthographic processing. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. A distributed code for color in natural scenes derived from center-surround filtered cone signals

    PubMed Central

    Kellner, Christian J.; Wachtler, Thomas

    2013-01-01

    In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289
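
    A hedged sketch of the described pipeline on synthetic data is given below: difference-of-Gaussians (center-surround) filtering, half-wave rectification into ON and OFF channels, and ICA. Patch sizes, filter scales, and the number of components are arbitrary choices, not the authors' parameters.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    images = rng.normal(size=(200, 32, 32))           # stand-in for cone-signal patches

    # Center-surround (difference-of-Gaussians) filtering per patch.
    center = gaussian_filter(images, sigma=(0, 1, 1))
    surround = gaussian_filter(images, sigma=(0, 3, 3))
    dog = center - surround

    # Half-wave rectification into parallel ON and OFF channels.
    on = np.clip(dog, 0, None)
    off = np.clip(-dog, 0, None)
    signals = np.concatenate([on, off], axis=2).reshape(len(images), -1)

    # ICA yields a sparse code; the mixing matrix holds the basis functions.
    ica = FastICA(n_components=20, random_state=0, max_iter=1000)
    sources = ica.fit_transform(signals)
    basis = ica.mixing_
    print(basis.shape)
    ```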

  18. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends

    PubMed Central

    Weisberg, Jill; McCullough, Stephen; Emmorey, Karen

    2018-01-01

    Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161

  19. Letter position coding across modalities: the case of Braille readers.

    PubMed

    Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo

    2012-01-01

    The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may provide more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.
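
    For illustration only (not the authors' materials), a small helper of the kind that could generate transposed-letter pseudowords such as jugde from judge:

    ```python
    import random

    def transpose_internal(word, i=None):
        """Swap two adjacent internal letters, e.g. 'judge' -> 'jugde'."""
        if len(word) < 4:
            return word
        i = i if i is not None else random.randrange(1, len(word) - 2)
        letters = list(word)
        letters[i], letters[i + 1] = letters[i + 1], letters[i]
        return "".join(letters)

    print(transpose_internal("judge", 2))   # jugde
    ```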

  20. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.
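
    A hedged sketch of the kind of link-persistence check described above is given below; the time-out, redirect handling, and fallback from HEAD to GET are arbitrary choices rather than the authors' protocol, and the second URL is deliberately invalid.

    ```python
    import requests

    def check_url(url, timeout=10):
        """Return True if the URL answers with a non-error status code."""
        try:
            r = requests.head(url, timeout=timeout, allow_redirects=True)
            if r.status_code >= 400:                      # some servers reject HEAD
                r = requests.get(url, timeout=timeout, allow_redirects=True)
            return r.status_code < 400
        except requests.RequestException:
            return False

    urls = ["https://ascl.net", "http://example.invalid/code.tar.gz"]
    for u in urls:
        print(u, "accessible" if check_url(u) else "broken")
    ```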

  1. Automated Concurrent Blackboard System Generation in C++

    NASA Technical Reports Server (NTRS)

    Kaplan, J. A.; McManus, J. W.; Bynum, W. L.

    1999-01-01

    In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
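
    To make the architecture concrete, here is an illustrative sketch of the blackboard pattern itself in Python (not the generated C++/PVM code): knowledge sources watch a shared blackboard and contribute when applicable, and only their functionality must be supplied by the user, mirroring the division of labor described above.

    ```python
    class Blackboard:
        """Shared data store inspected and updated by knowledge sources."""
        def __init__(self):
            self.data = {}

    class KnowledgeSource:
        def is_applicable(self, bb):      # user-supplied trigger condition
            raise NotImplementedError
        def execute(self, bb):            # user-supplied functionality
            raise NotImplementedError

    class Doubler(KnowledgeSource):
        def is_applicable(self, bb):
            return "x" in bb.data and "x2" not in bb.data
        def execute(self, bb):
            bb.data["x2"] = 2 * bb.data["x"]

    def control_loop(bb, sources):
        """Keep invoking applicable knowledge sources until nothing changes."""
        progress = True
        while progress:
            progress = False
            for ks in sources:
                if ks.is_applicable(bb):
                    ks.execute(bb)
                    progress = True

    bb = Blackboard()
    bb.data["x"] = 21
    control_loop(bb, [Doubler()])
    print(bb.data)   # {'x': 21, 'x2': 42}
    ```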

  2. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still image coding techniques such as JPEG have traditionally been applied to individual image planes, and coding fidelity is typically used to measure the performance of such intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes to further compress the spectral planes. Moreover, a mechanism for incorporating the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and compression.
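
    As a hedged illustration of inter-plane decorrelation in general (not the paper's specific transform), the sketch below converts correlated RGB planes to a luma/chroma representation, concentrating most of the variance in one plane, which is what makes further compression of the remaining spectral planes effective.

    ```python
    import numpy as np

    # Synthetic correlated planes: a smooth luminance gradient plus small per-plane noise.
    base = np.linspace(0, 255, 64 * 64).reshape(64, 64)
    noise = np.random.default_rng(0).normal(0, 5, size=(64, 64, 3))
    rgb = np.stack([base, 0.9 * base, 0.8 * base], axis=-1) + noise

    # ITU-R BT.601 full-range RGB -> YCbCr, a standard inter-plane decorrelation.
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]

    for name, plane in (("Y", y), ("Cb", cb), ("Cr", cr)):
        print(name, "variance:", round(float(plane.var()), 1))   # Y carries most energy
    ```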

  3. Reactome diagram viewer: data structures and strategies to boost performance.

    PubMed

    Fabregat, Antonio; Sidiropoulos, Konstantinos; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-04-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. For web-based pathway visualization, Reactome uses a custom pathway diagram viewer that has evolved over the past several years. Here, we present comprehensive enhancements in usability and performance based on extensive usability testing sessions and technology developments, aiming to optimize the viewer towards the needs of the community. The pathway diagram viewer version 3 achieves consistently better performance, loading and rendering 97% of the diagrams in Reactome in less than 1 s. Combining the multi-layer html5 canvas strategy with a space partitioning data structure minimizes CPU workload, enabling the introduction of new features that further enhance user experience. Through the use of highly optimized data structures and algorithms, Reactome has boosted the performance and usability of the new pathway diagram viewer, providing a robust, scalable and easy-to-integrate solution to pathway visualization. As graph-based visualization of complex data is a frequent challenge in bioinformatics, many of the individual strategies presented here are applicable to a wide range of web-based bioinformatics resources. Reactome is available online at: https://reactome.org. The diagram viewer is part of the Reactome pathway browser (https://reactome.org/PathwayBrowser/) and also available as a stand-alone widget at: https://reactome.org/dev/diagram/. The source code is freely available at: https://github.com/reactome-pwp/diagram. Contact: fabregat@ebi.ac.uk or hhe@ebi.ac.uk. Supplementary data are available at Bioinformatics online.
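
    To illustrate the general idea of space partitioning for interaction (a simple uniform-grid index, not Reactome's actual data structure), consider the sketch below: only the diagram nodes registered in the bucket under the cursor are hit-tested, so picking cost stays roughly constant as diagrams grow.

    ```python
    from collections import defaultdict

    class SpatialIndex:
        """Uniform-grid spatial index for cheap hit-testing of diagram nodes."""
        def __init__(self, cell=100):
            self.cell = cell
            self.buckets = defaultdict(list)

        def insert(self, node_id, x, y, w, h):
            for cx in range(int(x) // self.cell, int(x + w) // self.cell + 1):
                for cy in range(int(y) // self.cell, int(y + h) // self.cell + 1):
                    self.buckets[(cx, cy)].append((node_id, x, y, w, h))

        def pick(self, px, py):
            bucket = self.buckets[(int(px) // self.cell, int(py) // self.cell)]
            for node_id, x, y, w, h in bucket:
                if x <= px <= x + w and y <= py <= y + h:
                    return node_id
            return None

    idx = SpatialIndex()
    idx.insert("EGFR", 120, 80, 60, 30)
    print(idx.pick(150, 95))    # EGFR
    print(idx.pick(400, 400))   # None
    ```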

  4. Alcohol Messages in Prime-Time Television Series

    PubMed Central

    RUSSELL, CRISTEL ANTONIA; RUSSELL, DALE W.

    2010-01-01

    Alcohol messages contained in television programming serve as sources of information about drinking. To better understand the ways embedded messages about alcohol are communicated, it is crucial to objectively monitor and analyze television alcohol depictions. This article presents a content analysis of an eight-week sample of eighteen prime-time programs. Alcohol messages were coded based on modalities of presentation, level of plot connection, and valence. The analysis reveals that mixed messages about alcohol often coexist but the ways in which they are presented differ: whereas negative messages are tied to the plot and communicated verbally, positive messages are associated with subtle visual portrayals. PMID:21188281

  5. Gobe: an interactive, web-based tool for comparative genomic visualization.

    PubMed

    Pedersen, Brent S; Tang, Haibao; Freeling, Michael

    2011-04-01

    Gobe is a web-based tool for viewing comparative genomic data. It supports viewing multiple genomic regions simultaneously. Its simple text format and flash-based rendering make it an interactive, exploratory research tool. Gobe can be used without installation through our web service, or downloaded and customized with stylesheets and javascript callback functions. Gobe is a flash application that runs in all modern web-browsers. The full source-code, including that for the online web application is available under the MIT license at: http://github.com/brentp/gobe. Sample applications are hosted at http://try-gobe.appspot.com/ and http://synteny.cnr.berkeley.edu/gobe-app/.

  6. Thermo-msf-parser: an open source Java library to parse and visualize Thermo Proteome Discoverer msf files.

    PubMed

    Colaert, Niklaas; Barsnes, Harald; Vaudel, Marc; Helsens, Kenny; Timmerman, Evy; Sickmann, Albert; Gevaert, Kris; Martens, Lennart

    2011-08-05

    The Thermo Proteome Discoverer program integrates both peptide identification and quantification into a single workflow for peptide-centric proteomics. Furthermore, its close integration with Thermo mass spectrometers has made it increasingly popular in the field. Here, we present a Java library to parse the msf files that constitute the output of Proteome Discoverer. The parser is also implemented as a graphical user interface allowing convenient access to the information found in the msf files, and in Rover, a program to analyze and validate quantitative proteomics information. All code, binaries, and documentation are freely available at http://thermo-msf-parser.googlecode.com.

  7. MetaABC--an integrated metagenomics platform for data adjustment, binning and clustering.

    PubMed

    Su, Chien-Hao; Hsu, Ming-Tsung; Wang, Tse-Yi; Chiang, Sufeng; Cheng, Jen-Hao; Weng, Francis C; Kao, Cheng-Yan; Wang, Daryi; Tsai, Huai-Kuang

    2011-08-15

    MetaABC is a metagenomic platform that integrates several binning tools coupled with methods for removing artifacts, analyzing unassigned reads and controlling sampling biases. It allows users to arrive at a better interpretation via a series of distinct combinations of analysis tools. After execution, MetaABC provides outputs in various visual formats such as tables, pie and bar charts as well as clustering result diagrams. MetaABC source code and documentation are available at http://bits2.iis.sinica.edu.tw/MetaABC/. Contact: dywang@gate.sinica.edu.tw; hktsai@iis.sinica.edu.tw. Supplementary data are available at Bioinformatics online.

  8. Integer sequence discovery from small graphs

    PubMed Central

    Hoppe, Travis; Petrone, Anna

    2015-01-01

    We have exhaustively enumerated all simple, connected graphs of a finite order and have computed a selection of invariants over this set. Integer sequences were constructed from these invariants and checked against the Online Encyclopedia of Integer Sequences (OEIS). 141 new sequences were added and six sequences were extended. From the graph database, we were able to programmatically suggest relationships among the invariants. It will be shown that we can readily visualize any sequence of graphs with a given criteria. The code has been released as an open-source framework for further analysis and the database was constructed to be extensible to invariants not considered in this work. PMID:27034526
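
    In the spirit of the paper (but not its released framework), the sketch below enumerates all simple connected graphs of small order with NetworkX's graph atlas and turns one invariant, the diameter, into candidate integer sequences; the choice of invariant is arbitrary.

    ```python
    from collections import defaultdict
    import networkx as nx
    from networkx.generators.atlas import graph_atlas_g

    counts = defaultdict(lambda: defaultdict(int))
    for g in graph_atlas_g():                      # all simple graphs on up to 7 nodes
        if g.number_of_nodes() >= 2 and nx.is_connected(g):
            counts[g.number_of_nodes()][nx.diameter(g)] += 1

    # e.g. number of connected graphs of order n with diameter 2, for n = 2..7
    print([counts[n][2] for n in range(2, 8)])
    ```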

  9. clusterProfiler: an R package for comparing biological themes among gene clusters.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Han, Yanyan; He, Qing-Yu

    2012-05-01

    Increasing quantitative data generated from transcriptomics and proteomics require integrative strategies for analysis. Here, we present an R package, clusterProfiler that automates the process of biological-term classification and the enrichment analysis of gene clusters. The analysis module and visualization module were combined into a reusable workflow. Currently, clusterProfiler supports three species, including humans, mice, and yeast. Methods provided in this package can be easily extended to other species and ontologies. The clusterProfiler package is released under Artistic-2.0 License within Bioconductor project. The source code and vignette are freely available at http://bioconductor.org/packages/release/bioc/html/clusterProfiler.html.

  10. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, number-theoretic coding for iconic systems.

  11. Stimulus information contaminates summation tests of independent neural representations of features

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2002-01-01

    Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli were chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.

  12. FUn: a framework for interactive visualizations of large, high-dimensional datasets on the web.

    PubMed

    Probst, Daniel; Reymond, Jean-Louis

    2018-04-15

    During the past decade, big data have become a major tool in scientific endeavors. Although statistical methods and algorithms are well-suited for analyzing and summarizing enormous amounts of data, the results do not allow for a visual inspection of the entire data. Current scientific software, including R packages and Python libraries such as ggplot2, matplotlib and plot.ly, does not support interactive visualizations of datasets exceeding 100 000 data points on the web. Other solutions enable the web-based visualization of big data only through data reduction or statistical representations. However, recent hardware developments, especially advancements in graphical processing units, allow for the rendering of millions of data points on a wide range of consumer hardware such as laptops, tablets and mobile phones. Similar to the challenges and opportunities brought to virtually every scientific field by big data, both the visualization of and interaction with copious amounts of data are demanding and hold great promise. Here we present FUn, a framework consisting of a client (Faerun) and server (Underdark) module, facilitating the creation of web-based, interactive 3D visualizations of large datasets, enabling record level visual inspection. We also introduce a reference implementation providing access to SureChEMBL, a database containing patent information on more than 17 million chemical compounds. The source code and the most recent builds of Faerun and Underdark, Lore.js and the data preprocessing toolchain used in the reference implementation, are available on the project website (http://doc.gdb.tools/fun/). Contact: daniel.probst@dcb.unibe.ch or jean-louis.reymond@dcb.unibe.ch.

  13. Epiviz: a view inside the design of an integrated visual analysis software for genomics

    PubMed Central

    2015-01-01

    Background Computational and visual data analysis for genomics has traditionally involved a combination of tools and resources, of which the most ubiquitous consist of genome browsers, focused mainly on integrative visualization of large numbers of big datasets, and computational environments, focused on data modeling of a small number of moderately sized datasets. Workflows that involve the integration and exploration of multiple heterogeneous data sources, small and large, public and user specific have been poorly addressed by these tools. In our previous work, we introduced Epiviz, which bridges the gap between the two types of tools, simplifying these workflows. Results In this paper we expand on the design decisions behind Epiviz, and introduce a series of new advanced features that further support the type of interactive exploratory workflow we have targeted. We discuss three ways in which Epiviz advances the field of genomic data analysis: 1) it brings code to interactive visualizations at various different levels; 2) takes the first steps in the direction of collaborative data analysis by incorporating user plugins from source control providers, as well as by allowing analysis states to be shared among the scientific community; 3) combines established analysis features that have never before been available simultaneously in a genome browser. In our discussion section, we present security implications of the current design, as well as a series of limitations and future research steps. Conclusions Since many of the design choices of Epiviz are novel in genomics data analysis, this paper serves both as a document of our own approaches with lessons learned, as well as a start point for future efforts in the same direction for the genomics community. PMID:26328750

  14. You've Written a Cool Astronomy Code! Now What Do You Do with It?

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.

    2014-01-01

    Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.

  15. Color-coded duplex sonography for diagnosis of testicular torsion.

    PubMed

    Zoeller, G; Ringert, R H

    1991-11-01

    By color-coded duplex sonography moving structures are visualized as red or blue colors within a normal gray-scale B-mode ultrasound image. Thus, blood flow even within small vessels can be visualized clearly. Color-coded duplex sonographic examination was performed in 11 patients who presented with scrotal pain. This method proved to be reliable to differentiate between testicular torsion and testicular inflammation. By clearly demonstrating a lack of intratesticular blood flow in testicular torsion, while avoiding flow in scrotal skin vessels being misinterpreted as intratesticular blood flow, this method significantly decreases the number of patients in whom surgical evaluation is necessary to exclude testicular torsion.

  16. Visual Computing Environment Workshop

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles (Compiler)

    1998-01-01

    The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.

  17. Pre-calculated protein structure alignments at the RCSB PDB website.

    PubMed

    Prlic, Andreas; Bliven, Spencer; Rose, Peter W; Bluhm, Wolfgang F; Bizon, Chris; Godzik, Adam; Bourne, Philip E

    2010-12-01

    With the continuous growth of the RCSB Protein Data Bank (PDB), providing an up-to-date systematic structure comparison of all protein structures poses an ever growing challenge. Here, we present a comparison tool for calculating both 1D protein sequence and 3D protein structure alignments. This tool supports various applications at the RCSB PDB website. First, a structure alignment web service calculates pairwise alignments. Second, a stand-alone application runs alignments locally and visualizes the results. Third, pre-calculated 3D structure comparisons for the whole PDB are provided and updated on a weekly basis. These three applications allow users to discover novel relationships between proteins available either at the RCSB PDB or provided by the user. A web user interface is available at http://www.rcsb.org/pdb/workbench/workbench.do. The source code is available under the LGPL license from http://www.biojava.org. A source bundle, prepared for local execution, is available from http://source.rcsb.org. Contact: andreas@sdsc.edu; pbourne@ucsd.edu.

  18. DATA-MEAns: an open source tool for the classification and management of neural ensemble recordings.

    PubMed

    Bonomini, María P; Ferrandez, José M; Bolea, Jose Angel; Fernandez, Eduardo

    2005-10-30

    The number of laboratories using techniques that allow the simultaneous recording of as many units as possible is increasing considerably. However, the development of tools used to analyse this multi-neuronal activity generally lags behind the development of the tools used to acquire the data. Moreover, data exchange between research groups using different multielectrode acquisition systems is hindered by commercial constraints such as exclusive file structures, high-priced licenses and strict policies on intellectual property rights. This paper presents free open-source software for the classification and management of neural ensemble data. The main goal is to provide a graphical user interface that links the experimental data to a basic set of routines for analysis, visualization and classification in a consistent framework. To facilitate adaptation and extension, as well as the addition of new routines, tools and algorithms for data analysis, the source code and documentation are freely available.

  19. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.

  20. GIS-MODFLOW: A small open-source tool for linking GIS data to MODFLOW

    NASA Astrophysics Data System (ADS)

    Gossel, Wolfgang

    2013-06-01

    The numerical model MODFLOW (Harbaugh 2005) is an efficient and up-to-date tool for groundwater flow modelling. Geographic information systems (GIS), in turn, provide useful tools for data preparation and visualization that can also be incorporated in numerical groundwater modelling. An interface between the two would therefore be useful for many hydrogeological investigations. To date, several integrated stand-alone tools have been developed that rely on MODFLOW, MODPATH and transport modelling tools. Simultaneously, several open-source GIS codes were developed to improve functionality and ease of use. These GIS tools can be used as pre- and post-processors of the numerical model MODFLOW via a suitable interface. Here we present GIS-MODFLOW as an open-source tool that provides a new universal interface by using the ESRI ASCII GRID data format, which can be converted into MODFLOW input data. This tool can also treat MODFLOW results. Such a combination of MODFLOW and open-source GIS opens new possibilities for making groundwater flow modelling and simulation results available to a wider circle of hydrogeologists.
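
    A hedged sketch of the interface idea (not the released tool) is shown below: it reads an ESRI ASCII GRID exported from a GIS, assuming the common six-line header, and produces a 2D array that could be written out as a MODFLOW input array; the file names are placeholders.

    ```python
    import numpy as np

    def read_esri_ascii(path):
        """Read an ESRI ASCII GRID (ncols, nrows, xllcorner, yllcorner, cellsize, NODATA_value)."""
        header = {}
        with open(path) as fh:
            for _ in range(6):                        # assumes the common six-line header
                key, value = fh.readline().split()
                header[key.lower()] = float(value)
            data = np.loadtxt(fh)
        data[data == header["nodata_value"]] = np.nan  # mark no-data cells
        return header, data

    header, top = read_esri_ascii("dem.asc")           # "dem.asc" is a placeholder name
    np.savetxt("top.dat", top, fmt="%10.3f")           # free-format array for a MODFLOW package
    ```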

  1. Location-coding account versus affordance-activation account in handle-to-hand correspondence effects: Evidence of Simon-like effects based on the coding of action direction.

    PubMed

    Pellicano, Antonello; Koch, Iring; Binkofski, Ferdinand

    2017-09-01

    An increasing number of studies have shown a close link between perception and action, which is supposed to be responsible for the automatic activation of actions compatible with objects' properties, such as the orientation of their graspable parts. It has been observed that left and right hand responses to objects (e.g., cups) are faster and more accurate if the handle orientation corresponds to the response location than when it does not. Two alternative explanations have been proposed for this handle-to-hand correspondence effect: location coding and affordance activation. The aim of the present study was to provide disambiguating evidence on the origin of this effect by employing object sets for which the visually salient portion was separated from, and opposite to, the graspable one, and vice versa. Seven experiments were conducted employing both single objects and object pairs as visual stimuli to enhance the contextual information about objects' graspability and usability. Notwithstanding these manipulations intended to favor affordance activation, results fully supported the location-coding account, displaying significant Simon-like effects that involved the orientation of the visually salient portion of the object stimulus and the location of the response. Crucially, we provided evidence of Simon-like effects based on higher-level cognitive, iconic representations of action directions rather than on lower-level spatial coding of the pure position of protruding portions of the visual stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
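
    For illustration of pyramid coding in general, the following sketch builds an ordinary rectangular Gaussian/Laplacian pyramid (not the hexagonal HOP transform): each layer keeps detail at one scale, and fewer coefficients are devoted to larger features.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def laplacian_pyramid(img, levels=4):
        """Decompose an image into band-pass layers plus a low-pass residual."""
        layers = []
        current = img.astype(float)
        for _ in range(levels - 1):
            low = gaussian_filter(current, sigma=1.0)
            down = low[::2, ::2]                       # decimate by 2 in each dimension
            up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
            layers.append(current - up)                # band-pass detail at this scale
            current = down
        layers.append(current)                         # residual low-pass (blob-like) layer
        return layers

    img = np.random.default_rng(0).random((64, 64))
    for i, layer in enumerate(laplacian_pyramid(img)):
        print(f"layer {i}: shape {layer.shape}")
    ```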

  3. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739

  4. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    PubMed

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  5. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
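
    As background only, and not the magnetized model or the coding exponents derived in the paper, the standard zero-field REM result (assuming 2^N configurations with i.i.d. Gaussian energies of variance N J^2 / 2) for the typical free energy per spin is:

    ```latex
    % Standard zero-field REM background (assumed notation), showing the two phases
    % whose magnetized analogues the paper relates to joint source-channel coding.
    \[
      -\beta f(\beta) \;=\;
      \begin{cases}
        \ln 2 + \dfrac{\beta^{2} J^{2}}{4}, & \beta \le \beta_c = \dfrac{2\sqrt{\ln 2}}{J} \quad \text{(paramagnetic phase)},\\[2ex]
        \beta J \sqrt{\ln 2}, & \beta > \beta_c \quad \text{(frozen phase)}.
      \end{cases}
    \]
    ```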

  6. Letter position coding across modalities: braille and sighted reading of sentences with jumbled words.

    PubMed

    Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo

    2015-04-01

    This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.

  7. Publicly Releasing a Large Simulation Dataset with NDS Labs

    NASA Astrophysics Data System (ADS)

    Goldbaum, Nathan

    2016-03-01

    Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code and was analyzed using a wide range of community-developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure allows a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.
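
    As a rough illustration of the web-based analysis and visualization mentioned above, yt can open and plot such a dataset in a few lines (a minimal sketch; the dataset path below is a hypothetical Enzo output, not the released dataset itself):

    ```python
    import yt

    # Hypothetical path to an Enzo output; substitute the released dataset location.
    ds = yt.load("DD0046/DD0046")

    # Quick-look slice of gas density through the domain.
    slc = yt.SlicePlot(ds, "z", ("gas", "density"))
    slc.save("density_slice.png")
    ```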

  8. Real simulation tools in introductory courses: packaging and repurposing our research code.

    NASA Astrophysics Data System (ADS)

    Heagy, L. J.; Cockett, R.; Kang, S.; Oldenburg, D.

    2015-12-01

    Numerical simulations are an important tool for scientific research and applications in industry. They provide a means to experiment with physics in a tangible, visual way, often providing insights into the problem. Over the last two years, we have been developing course and laboratory materials for an undergraduate geophysics course primarily taken by non-geophysics majors, including engineers and geologists. Our aim is to provide the students with resources to build intuition about geophysical techniques, promote curiosity-driven exploration, and help them develop the skills necessary to communicate across disciplines. Using open-source resources and our existing research code, we have built modules around simulations, with supporting content to give students interactive tools for exploration into the impacts of input parameters and visualization of the resulting fields, fluxes and data for a variety of problems in applied geophysics, including magnetics, seismic, electromagnetics, and direct current resistivity. The content provides context for the problems, along with exercises that are aimed at getting students to experiment and ask 'what if...?' questions. In this presentation, we will discuss our approach for designing the structure of the simulation-based modules, the resources we have used, challenges we have encountered, general feedback from students and instructors, as well as our goals and roadmap for future improvement. We hope that our experiences and approach will be beneficial to other instructors who aim to put simulation tools in the hands of students.

  9. User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E)

    DTIC Science & Technology

    2014-06-01

    Report ARL-SR-290, Army Research Laboratory, Aberdeen Proving Ground, MD 21005-5069, June 2014, by James P. Larentzos … (dates covered: September 2013–February 2014). The report provides a user manual and source code for a LAMMPS implementation of constant energy dissipative particle dynamics (DPD-E).

  10. ObsPy: Establishing and maintaining an open-source community package

    NASA Astrophysics Data System (ADS)

    Krischer, L.; Megies, T.; Barsch, R.

    2017-12-01

    Python's ecosystem has evolved into one of the most powerful and productive research environments across disciplines. ObsPy (https://obspy.org) is a fully community-driven, open-source project dedicated to providing a bridge for seismology into that ecosystem. It does so by offering read and write support for essentially every commonly used data format in seismology; integrated access to the largest data centers, web services, and real-time data streams; a powerful signal processing toolbox tuned to the specific needs of seismologists; and utility functionality such as travel time calculations, geodetic functions, and data visualization. ObsPy has been in constant unfunded development for more than eight years and is developed and used by scientists around the world, with successful applications in all branches of seismology. By now around 70 people have directly contributed code to ObsPy, and we aim to make it a self-sustaining community project. This contribution focuses on several meta-aspects of open-source software in science, in particular how we have experienced them. During the panel we would like to discuss obvious questions like long-term sustainability with very limited to no funding, insufficient computer science training in many sciences, and gaining hard scientific credit for software development, but also the following questions: How best to deal with the fact that a lot of scientific software is very specialized, and thus usually solves a complex problem but can only ever reach a limited pool of developers and users by virtue of being so specialized? As a consequence, the "many eyes on the code" approach to developing and improving open-source software applies only in a limited fashion. An initial publication for a significant new scientific software package is fairly straightforward; but how does one on-board and motivate potential new contributors when they can no longer be lured by a potential co-authorship? When is spending significant time and effort on reusable scientific open-source development a reasonable choice for young researchers? The effort required to produce purpose-tailored code for a single application resulting in a scientific publication is significantly less than that needed to generalize and engineer it well enough to be used by others.
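
    A minimal sketch of the kind of workflow ObsPy supports, combining data-center access with signal processing (the FDSN client, station codes and filter band below are illustrative placeholders, not part of the abstract):

    ```python
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    # Fetch one hour of broadband data from a public FDSN data center.
    client = Client("IRIS")
    t0 = UTCDateTime("2017-01-01T00:00:00")
    st = client.get_waveforms(network="IU", station="ANMO", location="00",
                              channel="BHZ", starttime=t0, endtime=t0 + 3600)

    # Basic signal processing and a quick-look plot.
    st.detrend("demean")
    st.filter("bandpass", freqmin=0.05, freqmax=1.0)
    st.plot(outfile="anmo_bhz.png")
    ```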

  11. Astronomy education and the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, Robert J.

    2016-01-01

    The Astrophysics Source Code Library (ASCL) is an online registry of source codes used in refereed astrophysics research. It currently lists nearly 1,200 codes and covers all aspects of computational astrophysics. How can this resource be of use to educators and to the graduate students they mentor? The ASCL serves as a discovery tool for codes that can be used for one's own research. Graduate students can also investigate existing codes to see how common astronomical problems are approached numerically in practice, and use these codes as benchmarks for their own solutions to these problems. Further, they can deepen their knowledge of software practices and techniques through examination of others' codes.

  12. QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks

    PubMed Central

    Thibodeau, Asa; Márquez, Eladio J.; Luo, Oscar; Ruan, Yijun; Shin, Dong-Guk; Stitzel, Michael L.; Ucar, Duygu

    2016-01-01

    Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN's web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and a MySQL database, and the source code is available under the GPLv3 license on GitHub: https://github.com/UcarLab/QuIN/. PMID:27336171

  13. Perceptual learning through optimization of attentional weighting: human versus optimal Bayesian learner

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Abbey, Craig K.; Pham, Binh T.; Shimozaki, Steven S.

    2004-01-01

    Human performance in visual detection, discrimination, identification, and search tasks typically improves with practice. Psychophysical studies suggest that perceptual learning is mediated by an enhancement in the coding of the signal, and physiological studies suggest that it might be related to the plasticity in the weighting or selection of sensory units coding task relevant information (learning through attention optimization). We propose an experimental paradigm (optimal perceptual learning paradigm) to systematically study the dynamics of perceptual learning in humans by allowing comparisons to that of an optimal Bayesian algorithm and a number of suboptimal learning models. We measured improvement in human localization (eight-alternative forced-choice with feedback) performance of a target randomly sampled from four elongated Gaussian targets with different orientations and polarities and kept as a target for a block of four trials. The results suggest that the human perceptual learning can occur within a lapse of four trials (<1 min) but that human learning is slower and incomplete with respect to the optimal algorithm (23.3% reduction in human efficiency from the 1st-to-4th learning trials). The greatest improvement in human performance, occurring from the 1st-to-2nd learning trial, was also present in the optimal observer, and, thus reflects a property inherent to the visual task and not a property particular to the human perceptual learning mechanism. One notable source of human inefficiency is that, unlike the ideal observer, human learning relies more heavily on previous decisions than on the provided feedback, resulting in no human learning on trials following a previous incorrect localization decision. Finally, the proposed theory and paradigm provide a flexible framework for future studies to evaluate the optimality of human learning of other visual cues and/or sensory modalities.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi

    Many simulation codes produce data in the form of a set of field values or a set of particle positions (one such example is that of particle-in-cell codes, which produce data on the electromagnetic fields that they simulate). However, each particular code uses its own format and layout for the output data. This makes it difficult to compare the results of different simulation codes, or to have a common visualization tool for their results. However, a standardized layout for fields and particles has recently been developed: the openPMD format (http://www.openpmd.org). This format is open source and specifies a standard way in which field data and particle data should be written. The openPMD format is already implemented in the particle-in-cell code Warp (developed at LBL) and in PIConGPU (developed at HZDR, Germany). In this context, the proposed software (openPMD-viewer) is a Python package that makes it possible to access and visualize any data formatted according to the openPMD standard. This package contains two main components: a Python API, which allows one to read and extract the data from an openPMD file so as to be able to work with it within the Python environment (e.g., plot the data and reprocess it with particular Python functions), and a graphical interface, which works with the IPython notebook and allows one to quickly visualize the data and browse through a set of openPMD files. The proposed software will typically be used when analyzing the results of numerical simulations. It will be useful to quickly extract scientific meaning from a set of numerical data.
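
    A minimal sketch of the Python API described above (assuming a recent openpmd_viewer release; the import path, diagnostics directory, field names and species name are illustrative and may differ between versions):

    ```python
    from openpmd_viewer import OpenPMDTimeSeries

    # Point the time series at a directory of openPMD-formatted files (hypothetical path).
    ts = OpenPMDTimeSeries("./diags/hdf5/")

    # Read the longitudinal electric field at the last available iteration.
    Ez, info = ts.get_field(field="E", coord="z", iteration=ts.iterations[-1])
    print("Ez shape:", Ez.shape)

    # Read particle positions for a given species (species name is an assumption).
    x, z = ts.get_particle(["x", "z"], species="electrons", iteration=ts.iterations[-1])
    ```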

  15. It takes two-coincidence coding within the dual olfactory pathway of the honeybee.

    PubMed

    Brill, Martin F; Meyer, Anneke; Rössler, Wolfgang

    2015-01-01

    To rapidly process biologically relevant stimuli, sensory systems have developed a broad variety of coding mechanisms like parallel processing and coincidence detection. Parallel processing (e.g., in the visual system) increases both computational capacity and processing speed by simultaneously coding different aspects of the same stimulus. Coincidence detection is an efficient way to integrate information from different sources. Coincidence has been shown to promote associative learning and memory or stimulus feature detection (e.g., in auditory delay lines). Within the dual olfactory pathway of the honeybee, both of these mechanisms might be implemented by uniglomerular projection neurons (PNs) that transfer information from the primary olfactory centers, the antennal lobe (AL), to a multimodal integration center, the mushroom body (MB). PNs from anatomically distinct tracts respond to the same stimulus space, but have different physiological properties, characteristics that are prerequisites for parallel processing of different stimulus aspects. However, the PN pathways also display mirror-image-like anatomical trajectories that resemble neuronal coincidence detectors as known from auditory delay lines. To investigate temporal processing of olfactory information, we recorded PN odor responses simultaneously from both tracts and measured coincident activity of PNs within and between tracts. Our results show that coincidence levels are different within each of the two tracts. Coincidence also occurs between tracts, but to a minor extent compared to coincidence within tracts. Taken together, our findings support the relevance of spike timing in coding of olfactory information (temporal code).

  16. Fostering Team Awareness in Earth System Modeling Communities

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.; Lawson, A.; Strong, S.

    2009-12-01

    Existing Global Climate Models are typically managed and controlled at a single site, with varied levels of participation by scientists outside the core lab. As these models evolve to encompass a wider set of earth systems, this central control of the modeling effort becomes a bottleneck. But such models cannot evolve to become fully distributed open source projects unless they address the imbalance in the availability of communication channels: scientists at the core site have access to regular face-to-face communication with one another, while those at remote sites have access to only a subset of these conversations - e.g. formally scheduled teleconferences and user meetings. Because of this imbalance, critical decision making can be hidden from many participants, their code contributions can interact in unanticipated ways, and the community loses awareness of who knows what. We have documented some of these problems in a field study at one climate modeling centre, and started to develop tools to overcome these problems. We report on one such tool, TracSNAP, which analyzes the social network of the scientists contributing code to the model by extracting the data in an existing project code repository. The tool presents the results of this analysis to modelers and model users in a number of ways: recommendation for who has expertise on particular code modules, suggestions for code sections that are related to files being worked on, and visualizations of team communication patterns. The tool is currently available as a plugin for the Trac bug tracking system.

  17. A review of uncertainty visualization within the IPCC reports

    NASA Astrophysics Data System (ADS)

    Nocke, Thomas; Reusser, Dominik; Wrobel, Markus

    2015-04-01

    Results derived from climate model simulations confront non-expert users with a variety of uncertainties. This gives rise to the challenge that the scientific information must be communicated such that it can be easily understood, while the complexity of the science behind it is still conveyed. With respect to the assessment reports of the IPCC, the situation is even more complicated, because heterogeneous sources and multiple types of uncertainties need to be compiled together. Within this work, we systematically (1) analyzed the visual representation of uncertainties in the IPCC AR4 and AR5 reports, and (2) conducted a questionnaire to evaluate how different user groups such as decision-makers and teachers understand these uncertainty visualizations. Within the first step, we classified visual uncertainty metaphors for spatial, temporal and abstract representations. As a result, we clearly identified a high complexity of the IPCC visualizations compared to standard presentation graphics, sometimes even integrating two or more uncertainty classes / measures together with the "certain" (mean) information. Further, we identified complex written uncertainty explanations within image captions, even within the "summary reports for policy makers". In the second step, based on these observations, we designed a questionnaire to investigate how non-climate experts understand these visual representations of uncertainties, how visual uncertainty coding might hinder the perception of the "non-uncertain" data, and whether alternatives for certain IPCC visualizations exist. Within the talk/poster, we will present first results from this questionnaire. Summarizing, we identified a clear trend towards complex images within the latest IPCC reports, with a tendency to incorporate as much information as possible into the visual representations, resulting in proprietary, non-standard graphic representations that are not necessarily easy to comprehend at a glance. We conclude that further translation is required to (visually) present the IPCC results to non-experts, providing tailored static and interactive visualization solutions for different user groups.

  18. Automated UMLS-Based Comparison of Medical Forms

    PubMed Central

    Dugas, Martin; Fritz, Fleur; Krumm, Rainer; Breil, Bernhard

    2013-01-01

    Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far – to our knowledge – no automated method for comparison of medical forms has been available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and, especially, items annotated with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical, if item name, concept code and value domain are the same. Two items are called matching, if only concept code and value domain are the same. Two items are called similar, if their concept codes are the same, but the value domains are different. Based on these definitions, an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms allow a view on clusters of similar forms. The approach is scalable to a large set of real medical forms. PMID:23861827
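
    The item-level comparison rules quoted above can be sketched directly in code (an illustrative re-implementation of the stated definitions, not the compareODM package itself; the field names and example concept codes are hypothetical):

    ```python
    from typing import NamedTuple

    class Item(NamedTuple):
        name: str          # item name as shown on the form
        concept: str       # UMLS concept unique identifier (CUI)
        value_domain: str  # e.g. data type / permitted values

    def compare_items(a: Item, b: Item) -> str:
        """Classify a pair of form items using the definitions in the abstract."""
        if a.concept != b.concept:
            return "different"
        if a.value_domain != b.value_domain:
            return "similar"    # same concept code, different value domains
        if a.name != b.name:
            return "matching"   # same concept code and value domain, different names
        return "identical"      # item name, concept code and value domain all agree

    print(compare_items(Item("Body weight", "C0005910", "float"),
                        Item("Weight", "C0005910", "float")))   # -> matching
    ```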

  19. Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.

    PubMed

    Ibbotson, M R

    2017-01-23

    The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Assessment of visual communication by information theory

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.

    1994-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  1. Sandia Engineering Analysis Code Access System v. 2.0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjaardema, Gregory D.

    The Sandia Engineering Analysis Code Access System (SEACAS) is a suite of preprocessing, post processing, translation, visualization, and utility applications supporting finite element analysis software using the Exodus database file format.

  2. Rank Order Coding: a Retinal Information Decoding Strategy Revealed by Large-Scale Multielectrode Array Retinal Recordings.

    PubMed

    Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne

    2016-01-01

    How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of an RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike count- or latency-based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
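
    The rank-based population readout can be illustrated with a small sketch (synthetic first-spike latencies stand in for the recorded RGC responses; this is an illustration of the idea, not the authors' analysis pipeline):

    ```python
    import numpy as np
    from scipy.stats import rankdata

    # Synthetic first-spike latencies (ms) for a small RGC population on two trials.
    latencies_a = np.array([12.0, 35.0, 18.0, 50.0, 22.0])
    latencies_b = np.array([14.0, 37.0, 19.0, 55.0, 25.0])   # same stimulus, jittered

    # Rank order code: which cell fired 1st, 2nd, ... across the population.
    ranks_a = rankdata(latencies_a)
    ranks_b = rankdata(latencies_b)

    # The rank pattern is identical despite absolute-latency jitter, illustrating why
    # relative activity can be a robust and fast population code.
    print(ranks_a, ranks_b, np.array_equal(ranks_a, ranks_b))
    ```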

  3. Detection of counterfeit electronic components through ambient mass spectrometry and chemometrics.

    PubMed

    Pfeuffer, Kevin P; Caldwell, Jack; Shelley, Jake T; Ray, Steven J; Hieftje, Gary M

    2014-09-21

    In the last several years, illicit electronic components have been discovered in the inventories of several distributors and even installed in commercial and military products. Illicit or counterfeit electronic components include a broad category of devices that can range from the correct unit with a more recent date code to lower-specification or non-working systems with altered names, manufacturers and date codes. Current methodologies for identification of counterfeit electronics rely on visual microscopy by expert users and, while effective, are very time-consuming. Here, a plasma-based ambient desorption/ionization source, the flowing atmospheric pressure afterglow (FAPA) is used to generate a mass-spectral fingerprint from the surface of a variety of discrete electronic integrated circuits (ICs). Chemometric methods, specifically principal component analysis (PCA) and the bootstrapped error-adjusted single-sample technique (BEAST), are used successfully to differentiate between genuine and counterfeit ICs. In addition, chemical and physical surface-removal techniques are explored and suggest which surface-altering techniques were utilized by counterfeiters.
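
    A minimal sketch of the chemometric step (PCA applied to mass-spectral fingerprints) using scikit-learn; the spectra below are synthetic stand-ins rather than FAPA data, and the BEAST step is not shown:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)

    # Synthetic "mass-spectral fingerprints": 20 genuine and 20 counterfeit parts,
    # each a 500-channel intensity vector with a small class-dependent offset.
    genuine = rng.normal(0.0, 1.0, size=(20, 500))
    counterfeit = rng.normal(0.0, 1.0, size=(20, 500)) + np.linspace(0.0, 0.5, 500)

    spectra = np.vstack([genuine, counterfeit])
    scores = PCA(n_components=2).fit_transform(spectra)

    # With a systematic spectral difference, the two groups separate along the
    # leading components, which is what allows genuine and counterfeit parts
    # to be differentiated.
    print(scores[:3])
    ```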

  4. APORT: a program for the area-based apportionment of county variables to cells of a polar grid. [Airborne pollutant transport models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, D.E.; Little, C.A.

    1978-11-01

    The APORT computer code was developed to apportion variables tabulated for polygon-structured civil districts onto cells of a polar grid. The apportionment is based on fractional overlap between the polygon and the grid cells. Centering the origin of the polar system at a pollutant source site yields results that are very useful for assessing and interpreting the effects of airborne pollutant dissemination. The APOPLT graphics code, which uses the same data set as APORT, provides a convenient visual display of the polygon structure and the extent of the polar grid. The APORT/APOPLT methodology was verified by application to county summaries of cattle population for counties surrounding the Oyster Creek, New Jersey, nuclear power plant. These numerical results, which were obtained using approximately 2-min computer time on an IBM System 360/91 computer, compare favorably to results of manual computations in both speed and accuracy.
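
    The fractional-overlap rule can be sketched with modern geometry tools (an illustration of the apportionment idea using shapely, not the original APORT implementation; the polygons and cattle count below are hypothetical):

    ```python
    from shapely.geometry import Polygon

    # Hypothetical county polygon and one polar-grid cell, in arbitrary map units.
    county = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
    grid_cell = Polygon([(2, 1), (6, 1), (6, 5), (2, 5)])

    county_cattle = 10_000  # variable tabulated for the whole county

    # Fraction of the county falling inside the grid cell drives the apportionment.
    overlap_fraction = county.intersection(grid_cell).area / county.area
    apportioned = county_cattle * overlap_fraction
    print(f"{overlap_fraction:.1%} of the county -> {apportioned:.0f} cattle in this cell")
    ```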

  5. Juicebox.js Provides a Cloud-Based Visualization System for Hi-C Data.

    PubMed

    Robinson, James T; Turner, Douglass; Durand, Neva C; Thorvaldsdóttir, Helga; Mesirov, Jill P; Aiden, Erez Lieberman

    2018-02-28

    Contact mapping experiments such as Hi-C explore how genomes fold in 3D. Here, we introduce Juicebox.js, a cloud-based web application for exploring the resulting datasets. Like the original Juicebox application, Juicebox.js allows users to zoom in and out of such datasets using an interface similar to Google Earth. Juicebox.js also has many features designed to facilitate data reproducibility and sharing. Furthermore, Juicebox.js encodes the exact state of the browser in a shareable URL. Creating a public browser for a new Hi-C dataset does not require coding and can be accomplished in under a minute. The web app also makes it possible to create interactive figures online that can complement or replace ordinary journal figures. When combined with Juicer, this makes the entire process of data analysis transparent, insofar as every step from raw reads to published figure is publicly available as open source code. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Software support for SBGN maps: SBGN-ML and LibSBGN.

    PubMed

    van Iersel, Martijn P; Villéger, Alice C; Czauderna, Tobias; Boyd, Sarah E; Bergmann, Frank T; Luna, Augustin; Demir, Emek; Sorokin, Anatoly; Dogrusoz, Ugur; Matsuoka, Yukiko; Funahashi, Akira; Aladjem, Mirit I; Mi, Huaiyu; Moodie, Stuart L; Kitano, Hiroaki; Le Novère, Nicolas; Schreiber, Falk

    2012-08-01

    LibSBGN is a software library for reading, writing and manipulating Systems Biology Graphical Notation (SBGN) maps stored using the recently developed SBGN-ML file format. The library (available in C++ and Java) makes it easy for developers to add SBGN support to their tools, whereas the file format facilitates the exchange of maps between compatible software applications. The library also supports validation of maps, which simplifies the task of ensuring compliance with the detailed SBGN specifications. With this effort we hope to increase the adoption of SBGN in bioinformatics tools, ultimately enabling more researchers to visualize biological knowledge in a precise and unambiguous manner. Milestone 2 was released in December 2011. Source code, example files and binaries are freely available under the terms of either the LGPL v2.1+ or Apache v2.0 open source licenses from http://libsbgn.sourceforge.net. sbgn-libsbgn@lists.sourceforge.net.

  7. Flowfield computer graphics

    NASA Technical Reports Server (NTRS)

    Desautel, Richard

    1993-01-01

    The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).

  8. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  9. Dynamic assessment of microbial ecology (DAME): a web app for interactive analysis and visualization of microbial sequencing data.

    PubMed

    Piccolo, Brian D; Wankhade, Umesh D; Chintapalli, Sree V; Bhattacharyya, Sudeepa; Chunqiao, Luo; Shankar, Kartik

    2018-03-15

    Dynamic assessment of microbial ecology (DAME) is a Shiny-based web application for interactive analysis and visualization of microbial sequencing data. DAME provides researchers not familiar with R programming the ability to access the most current R functions utilized for ecology and gene sequencing data analyses. Currently, DAME supports group comparisons of several ecological estimates of α-diversity and β-diversity, along with differential abundance analysis of individual taxa. Using the Shiny framework, the user has complete control of all aspects of the data analysis, including sample/experimental group selection and filtering, estimate selection, statistical methods and visualization parameters. Furthermore, graphical and tabular outputs are supported by R packages using D3.js and are fully interactive. DAME was implemented in R but can be modified by Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript. It is freely available on the web at https://acnc-shinyapps.shinyapps.io/DAME/. Local installation and source code are available through Github (https://github.com/bdpiccolo/ACNC-DAME). Any system with R can launch DAME locally provided the shiny package is installed. bdpiccolo@uams.edu.

  10. CodonLogo: a sequence logo-based viewer for codon patterns.

    PubMed

    Sharma, Virag; Murphy, David P; Provan, Gregory; Baranov, Pavel V

    2012-07-15

    Conserved patterns across a multiple sequence alignment can be visualized by generating sequence logos. Sequence logos show each column in the alignment as stacks of symbol(s) where the height of a stack is proportional to its informational content, whereas the height of each symbol within the stack is proportional to its frequency in the column. Sequence logos use symbols of either nucleotide or amino acid alphabets. However, certain regulatory signals in messenger RNA (mRNA) act as combinations of codons. Yet no tool is available for visualization of conserved codon patterns. We present the first application which allows visualization of conserved regions in a multiple sequence alignment in the context of codons. CodonLogo is based on WebLogo3 and uses the same heuristics but treats codons as inseparable units of a 64-letter alphabet. CodonLogo can discriminate patterns of codon conservation from patterns of nucleotide conservation that appear indistinguishable in standard sequence logos. The CodonLogo source code and its implementation (in a local version of the Galaxy Browser) are available at http://recode.ucc.ie/CodonLogo and through the Galaxy Tool Shed at http://toolshed.g2.bx.psu.edu/.
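
    The stack-height arithmetic behind such logos can be sketched as follows (standard sequence-logo math over a 64-letter codon alphabet, without WebLogo's small-sample correction; the toy alignment column is hypothetical):

    ```python
    import math
    from collections import Counter

    def codon_logo_column(codons):
        """Return the column height (bits) and per-codon symbol heights."""
        counts = Counter(codons)
        n = len(codons)
        freqs = {c: k / n for c, k in counts.items()}
        entropy = -sum(f * math.log2(f) for f in freqs.values())
        info_content = math.log2(64) - entropy            # at most 6 bits for 64 letters
        heights = {c: f * info_content for c, f in freqs.items()}
        return info_content, heights

    # Toy alignment column: a start-codon-like position dominated by AUG.
    column = ["AUG"] * 8 + ["GUG"] * 2
    print(codon_logo_column(column))
    ```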

  11. SieveSifter: a web-based tool for visualizing the sieve analyses of HIV-1 vaccine efficacy trials.

    PubMed

    Fiore-Gartland, Andrew; Kullman, Nicholas; deCamp, Allan C; Clenaghan, Graham; Yang, Wayne; Magaret, Craig A; Edlefsen, Paul T; Gilbert, Peter B

    2017-08-01

    Analysis of HIV-1 virions from participants infected in a randomized controlled preventive HIV-1 vaccine efficacy trial can help elucidate mechanisms of partial protection. By comparing the genetic sequence of viruses from vaccine and placebo recipients to the sequence of the vaccine itself, a technique called 'sieve analysis', one can identify functional specificities of vaccine-induced immune responses. We have created an interactive web-based visualization and data access tool for exploring the results of sieve analyses performed on four major preventive HIV-1 vaccine efficacy trials: (i) the HIV Vaccine Trial Network (HVTN) 502/Step trial, (ii) the RV144/Thai trial, (iii) the HVTN 503/Phambili trial and (iv) the HVTN 505 trial. The tool acts simultaneously as a platform for rapid reinterpretation of sieve effects and as a portal for organizing and sharing the viral sequence data. Access to these valuable datasets also enables the development of novel methodology for future sieve analyses. Visualization: http://sieve.fredhutch.org/viz . Source code: https://github.com/nkullman/SIEVE . Data API: http://sieve.fredhutch.org/data . agartlan@fredhutch.org. © The Author(s) 2017. Published by Oxford University Press.

  12. Data processing with microcode designed with source coding

    DOEpatents

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  13. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region from various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw4.5) and found that the 3D structure significantly improves the simulated waveforms compared to a 1D layered model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016) as well as to provide synthetic seismograms for the blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated into broadband ground-motion simulations, tsunami generation, and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet and visualizations of the results are disseminated on the project's homepage. All these simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refinement and validation of the model parameters and simulations remain a continuing research task relying on ongoing observations. The numerical code used, the models and the simulations are available on demand.

  14. Visual Dysfunction Following Blast-Related Traumatic Brain Injury from the Battlefield

    DTIC Science & Technology

    2011-01-01

    … and visual disorders is varied, depending on the diagnostic criteria, condition and patient population, but has primarily been studied in civilian … diagnostic codes for 'disorders of the eye and adnexa' (360.0–379.9) obtained from electronic outpatient medical records (Standard Ambulatory Data Record) and … disorder diagnostic category by TBI status. [Table excerpt: ICD-9-CM eye-disorder codes and categories tabulated for the TBI group (n = 837) versus the other-injury group (n = 1417).]

  15. Digital visual communications using a Perceptual Components Architecture

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1991-01-01

    The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.
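
    A difference-of-Gaussians split gives a rough feel for decomposing an image into spatial-frequency bands, one ingredient of such a pyramid (a generic illustration only, not the Perceptual Components Architecture itself; the image is random noise):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(2)
    image = rng.random((128, 128))            # stand-in for a video frame

    # Split the image into fine-to-coarse spatial-frequency bands (Laplacian-pyramid style).
    bands, previous = [], image
    for sigma in (1, 2, 4, 8):
        smoothed = gaussian_filter(previous, sigma)
        bands.append(previous - smoothed)     # band-pass detail at this scale
        previous = smoothed
    bands.append(previous)                    # residual low-pass band

    # Reconstruction is simply the sum of all bands.
    print(np.allclose(sum(bands), image))
    ```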

  16. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
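
    For orientation, the classical Slepian-Wolf admissible-rate region for two correlated sources X and Y, encoded separately and decoded jointly, is reproduced below; the paper derives the corresponding bound for its specific block-candidate model with hidden dependence variables, which is not restated here:

    ```latex
    R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y)
    ```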

  17. Phase II evaluation of clinical coding schemes: completeness, taxonomy, mapping, definitions, and clarity. CPRI Work Group on Codes and Structures.

    PubMed

    Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J

    1997-01-01

    To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mappings from the published schemes were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56, UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from reduced clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record.

  18. Pulse Sequence Programming in a Dynamic Visual Environment: SequenceTree

    PubMed Central

    Magland, Jeremy F.; Li, Cheng; Langham, Michael C.; Wehrli, Felix W.

    2015-01-01

    Purpose To describe SequenceTree (ST), an open-source, integrated software environment for implementing MRI pulse sequences and exporting them to actual MRI scanners. The software is a user-friendly alternative to vendor-supplied pulse sequence design and editing tools and is suited for non-programmers and programmers alike. Methods The integrated user interface was programmed using the Qt4/C++ toolkit. As parameters and code are modified, the pulse sequence diagram is automatically updated within the user interface. Several aspects of pulse programming are handled automatically, allowing users to focus on higher-level aspects of sequence design. Sequences can be simulated using a built-in Bloch equation solver and then exported for use on a Siemens MRI scanner; other types of scanners will ideally be supported in the future. Results The software has been used for eight years in the authors' laboratory and elsewhere and has been utilized in more than fifty peer-reviewed publications in areas such as cardiovascular imaging, solid state and non-proton NMR, MR elastography, and high resolution structural imaging. Conclusion ST is an innovative, open source, visual pulse sequence environment for MRI combining simplicity with flexibility and is ideal for both advanced users and those with limited programming experience. PMID:25754837

  19. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data.

    PubMed

    Muir, Dylan R; Kampa, Björn M

    2014-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories.

  1. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when using Hamming distance ranking. Moreover, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single- and multiple-table search over state-of-the-art methods.
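
    The bitwise-weighting idea can be illustrated with a small sketch (generic query-adaptive weighted Hamming ranking over a single table; the codes and weights are synthetic, and the authors' multi-table rank fusion is not shown):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic database of 8-bit hash codes and one query code.
    database = rng.integers(0, 2, size=(1000, 8))
    query = rng.integers(0, 2, size=8)

    # Query-adaptive bit weights (e.g. reflecting hash-function quality for this query).
    weights = rng.random(8)

    # Plain Hamming distance treats every disagreeing bit equally ...
    hamming = (database != query).sum(axis=1)
    # ... whereas weighted Hamming distance lets reliable bits count for more,
    # yielding a finer-grained ranking than the coarse integer distances.
    weighted = ((database != query) * weights).sum(axis=1)

    print(np.argsort(hamming)[:10])
    print(np.argsort(weighted)[:10])
    ```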

  2. Hazards of Colour Coding in Visual Approach Slope Indicators,

    DTIC Science & Technology

    1981-12-01

    … the glideslope. The central spot (the 'meatball') is displaced above or below the datum lights when the pilot views from above or below the … undershoot is increasing or decreasing, the step changes in intensity may also be evident as a form of flash coding. Colour coding of the 'meatball' in …

  3. Pour une pedagogie integree du code oral et du code ecrit (Toward a Pedagogy Integrating Oral and Written Codes).

    ERIC Educational Resources Information Center

    Guillen-Diaz, Carmen

    1990-01-01

    A classroom approach that brings oral and written language learning closer together is outlined. The strategy focuses on proper pronunciation using minimal pairs and uses exercises designed for listening and visualization, production, discrimination, re-use and reinforcement, and computer-assisted instruction. (MSE)

  4. Computer-Based Learning of Spelling Skills in Children with and without Dyslexia

    ERIC Educational Resources Information Center

    Kast, Monika; Baschera, Gian-Marco; Gross, Markus; Jancke, Lutz; Meyer, Martin

    2011-01-01

    Our spelling training software recodes words into multisensory representations comprising visual and auditory codes. These codes represent information about letters and syllables of a word. An enhanced version, developed for this study, contains an additional phonological code and an improved word selection controller relying on a phoneme-based…

  5. Effects of Action Relations on the Configural Coding between Objects

    ERIC Educational Resources Information Center

    Riddoch, M. J.; Pippard, B.; Booth, L.; Rickell, J.; Summers, J.; Brownson, A.; Humphreys, G. W.

    2011-01-01

    Configural coding is known to take place between the parts of individual objects but has never been shown between separate objects. We provide novel evidence here for configural coding between separate objects through a study of the effects of action relations between objects on extinction. Patients showing visual extinction were presented with…

  6. A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding

    ERIC Educational Resources Information Center

    Cuevas, Joshua; Dawson, Bryan L.

    2018-01-01

    This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…

  7. BigWig and BigBed: enabling browsing of large distributed datasets.

    PubMed

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and Linux and UNIX operating systems files, R trees and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.

  8. Excel2Genie: A Microsoft Excel application to improve the flexibility of the Genie-2000 Spectroscopic software.

    PubMed

    Forgács, Attila; Balkay, László; Trón, Lajos; Raics, Péter

    2014-12-01

    Excel2Genie, a simple and user-friendly Microsoft Excel interface, has been developed for the Genie-2000 Spectroscopic Software of Canberra Industries. This Excel application can directly control a Canberra Multichannel Analyzer (MCA), process the acquired data and visualize them. Combining Genie-2000 with Excel2Genie results in remarkably increased flexibility and the possibility to carry out repetitive data acquisitions, even with changing parameters, and more sophisticated analysis. The developed software package comprises three worksheets: display of parameters and results of data acquisition, data analysis, and mathematical operations carried out on the measured gamma spectra. At the same time, it also allows control of these processes. Excel2Genie is freely available to assist gamma spectrum measurements and data evaluation by interested Canberra users. With access to the Visual Basic for Applications (VBA) source code, users can modify the developed interface according to their needs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - to not scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers much of the basic visualization and analysis functionality commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open up several images in their web browser, adjust the intensity min/max cutoff, its scaling function, and the zoom level, apply color-maps, view position and FITS header information, execute typically used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.

  10. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1994-01-01

    Envision is an interactive environment that provides researchers in the earth sciences convenient ways to manage, browse, and visualize large observed or model data sets. Its main features are support for the netCDF and HDF file formats, an easy to use X/Motif user interface, a client-server configuration, and portability to many UNIX workstations. The Envision package also provides new ways to view and change metadata in a set of data files. It permits a scientist to conveniently and efficiently manage large data sets consisting of many data files. It also provides links to popular visualization tools so that data can be quickly browsed. Envision is a public domain package, freely available to the scientific community. Envision software (binaries and source code) and documentation can be obtained from either of these servers: ftp://vista.atmos.uiuc.edu/pub/envision/ and ftp://csrp.tamu.edu/pub/envision/. Detailed descriptions of Envision capabilities and operations can be found in the User's Guide and Reference Manuals distributed with Envision software.

  11. BIDDSAT: visualizing the content of biodiversity data publishers in the Global Biodiversity Information Facility network.

    PubMed

    Otegui, Javier; Ariño, Arturo H

    2012-08-15

    In any data quality workflow, data publishers must become aware of issues in their data so these can be corrected. User feedback mechanisms provide one avenue, while global assessments of datasets provide another. To date, there is no publicly available tool that allows both biodiversity data institutions sharing their data through the Global Biodiversity Information Facility network and its potential users to assess datasets as a whole. To help bridge this gap for both publishers and users, we introduce the BIoDiversity DataSets Assessment Tool (BIDDSAT), an online tool that enables selected diagnostic visualizations of the content of data publishers and/or their individual collections. The online application is accessible at http://www.unav.es/unzyec/mzna/biddsat/ and is supported by all major browsers. The source code is licensed under the GNU GPLv3 license (http://www.gnu.org/licenses/gpl-3.0.txt) and is available at https://github.com/jotegui/BIDDSAT.

  12. Dance expertise modulates visual sensitivity to complex biological movements.

    PubMed

    Orlandi, Andrea; Zani, Alberto; Proverbio, Alice Mado

    2017-09-01

    Motor resonance processes that occur when observing an individual perform an action may be modulated by acquired visuomotor expertise. We used the event-related potential (EEG/ERP) technique to investigate the ability to automatically recognize a subtle difference between very similar novel contemporary dance movements. Twelve professional dancers and twelve non-dancers were shown 212 pairs of videos of complex whole-body movements lasting 3 s each. The second video of each pair was either a repetition of the previous movement or a slight variation of it (deviance). The participants were engaged in a secondary attentional task. A larger centro-parietal N400 effect and a reduced Late Positivity amplitude (repetition suppression effect) were identified in response to deviant stimuli only in the dancers. Source reconstruction (swLORETA) showed activations in biological motion, body and face processing related areas, and fronto-parietal and limbic systems. The current findings provide evidence that acquired dance expertise modifies the ability to visually code whole-body complex movements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. SVGMap: configurable image browser for experimental data.

    PubMed

    Rafael-Palou, Xavier; Schroeder, Michael P; Lopez-Bigas, Nuria

    2012-01-01

    Spatial data visualization is very useful for representing biological data and quickly interpreting results. For instance, to show the expression pattern of a gene in different tissues of a fly, an intuitive approach is to draw the fly with the corresponding tissues and color the expression of the gene in each of them. However, the creation of these visual representations can be a burdensome task. Here we present SVGMap, a Java application that automates the generation of high-quality graphics for singular data items (e.g. genes) and biological conditions. SVGMap contains a browser that allows the user to navigate the different images created and can be used as a web-based results publishing tool. SVGMap is freely available as a precompiled Java package as well as source code at http://bg.upf.edu/svgmap. It requires Java 6 and any recent web browser with JavaScript enabled. The software can be run on Linux, Mac OS X and Windows systems. nuria.lopez@upf.edu
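
    The core idea of coloring a named region of an SVG template according to a data value can be sketched in a few lines of Python; the element id, file names, and red color scale below are illustrative assumptions and do not reproduce SVGMap's Java implementation.

```python
# Hedged sketch of the core idea behind an SVGMap-like tool: color one named
# tissue region of an SVG template according to an expression value. The
# element id, file names, and color scale are illustrative assumptions.
import xml.etree.ElementTree as ET

def color_region(svg_in, svg_out, region_id, value, vmax):
    """Set the fill of the SVG element with id=region_id to a red shade."""
    ET.register_namespace("", "http://www.w3.org/2000/svg")
    tree = ET.parse(svg_in)
    for elem in tree.iter():
        if elem.get("id") == region_id:
            shade = int(255 * (1 - min(value / vmax, 1.0)))   # white -> red ramp
            elem.set("style", f"fill:rgb(255,{shade},{shade})")
    tree.write(svg_out)

# Example call with hypothetical file and region names:
# color_region("fly_template.svg", "fly_eye.svg", "eye", value=7.5, vmax=10.0)
```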

  14. Letter Position Coding Across Modalities: The Case of Braille Readers

    PubMed Central

    Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo

    2012-01-01

    Background The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Methodology Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may provide more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. Principal Findings We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. Conclusions The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus. PMID:23071522

  15. Vector and Raster Data Storage Based on Morton Code

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.

    2018-05-01

    Despite the maturity of modern geomatics, the integration of spatial data stored in vector and raster formats remains a difficult problem in geographic information system (GIS) environments, and no fully satisfactory solution exists. This article proposes a method for jointly interpreting vector data and raster data. In this study, we stored the image data and building vector data of Guilin University of Technology in an Oracle database. We then used the ADO interface to connect the database to Visual C++ and, in the Visual C++ environment, converted the row and column numbers of the raster data and the X, Y coordinates of the vector data to Morton codes. This method stores vector and raster data in the Oracle database and uses Morton codes, instead of row/column numbers and X, Y coordinates, to mark the position information of both data types. Using Morton codes to mark geographic information lets the stored data make fuller use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes their joint visualization more intuitive. This method is very helpful in situations where vector data and raster data need to be analysed or displayed at the same time.
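
    For illustration, the following Python sketch interleaves the bits of a row/column pair (or a quantized X, Y coordinate) into a single Morton key and decodes it back; the 16-bit index width and function names are assumptions for the example, not the authors' Visual C++ implementation.

```python
# Minimal sketch of 2D Morton (Z-order) encoding, assuming 16-bit row/column
# indices; it illustrates the bit-interleaving idea described above, not the
# exact implementation used by the authors.

def part1by1(n: int) -> int:
    """Spread the lower 16 bits of n so a zero bit separates each bit."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton_encode(row: int, col: int) -> int:
    """Interleave row and column bits into a single Morton code."""
    return (part1by1(row) << 1) | part1by1(col)

def morton_decode(code: int) -> tuple:
    """Recover (row, col) by de-interleaving the Morton code."""
    def compact1by1(n: int) -> int:
        n &= 0x55555555
        n = (n | (n >> 1)) & 0x33333333
        n = (n | (n >> 2)) & 0x0F0F0F0F
        n = (n | (n >> 4)) & 0x00FF00FF
        n = (n | (n >> 8)) & 0x0000FFFF
        return n
    return compact1by1(code >> 1), compact1by1(code)

assert morton_decode(morton_encode(123, 456)) == (123, 456)
```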

  16. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open source code accessible to the public through the Google Open Source Projects Hosting service.

  17. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    NASA Astrophysics Data System (ADS)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
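
    As a hedged illustration of the class of problems such a tool solves, the NumPy sketch below marches a 1D advection plus first-order decay problem forward with an explicit upwind scheme; the grid, velocity, and rate constants are invented for the example and are not parameters of the RT1D VBA code.

```python
# Minimal sketch (NumPy) of the kind of 1D advective-reactive transport
# problem a tool like RT1D solves: explicit upwind advection of a solute
# undergoing first-order decay. Grid size, velocity, and rate constants
# are illustrative assumptions, not parameters of the actual VBA code.
import numpy as np

nx, dx, dt = 100, 0.01, 0.001      # cells, cell size [m], time step [d]
v, k = 0.5, 0.2                    # pore velocity [m/d], decay rate [1/d]
c = np.zeros(nx)                   # initial concentration profile
c_in = 1.0                         # inflow boundary concentration

for _ in range(2000):              # march forward for 2 days
    c_up = np.concatenate(([c_in], c[:-1]))          # upstream neighbours
    c = c - v * dt / dx * (c - c_up) - k * dt * c    # advection + decay

print(f"peak concentration after 2 days: {c.max():.3f}")
```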

  18. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal's statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical model. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme can achieve better rate-distortion performance and better visual rendering quality.
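
    The KDE building block the scheme relies on can be sketched as a plain Gaussian kernel density estimate with an explicit bandwidth, the quantity the proposed BE method selects; the sample data, bandwidth value, and variable names below are assumptions, and the kernel-trick bandwidth estimator itself is not reproduced.

```python
# Illustrative sketch of kernel density estimation with an explicit bandwidth;
# this is the generic Gaussian-KDE formula, not the authors' kernel-trick
# bandwidth estimator. Sample data and the bandwidth value are assumptions.
import numpy as np

def gaussian_kde(samples: np.ndarray, grid: np.ndarray, bandwidth: float) -> np.ndarray:
    """Estimate the pdf on `grid` from 1D `samples` with a Gaussian kernel."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

residuals = np.random.default_rng(0).normal(0.0, 2.0, size=500)  # stand-in prediction residuals
grid = np.linspace(-8, 8, 161)
pdf = gaussian_kde(residuals, grid, bandwidth=0.8)                # density used by the estimator
print(f"estimated density at 0: {pdf[80]:.3f}")
```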

  19. Spatial Tuning Shifts Increase the Discriminability and Fidelity of Population Codes in Visual Cortex

    PubMed Central

    2017-01-01

    Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794

  20. A Cloud Based Framework For Monitoring And Predicting Subsurface System Behaviour

    NASA Astrophysics Data System (ADS)

    Versteeg, R. J.; Rodzianko, A.; Johnson, D. V.; Soltanian, M. R.; Dwivedi, D.; Dafflon, B.; Tran, A. P.; Versteeg, O. J.

    2015-12-01

    Subsurface system behavior is driven and controlled by the interplay of physical, chemical, and biological processes which occur at multiple temporal and spatial scales. Capabilities to monitor, understand and predict this behavior in an effective and timely manner are needed both for scientific purposes and for effective subsurface system management. Such capabilities require three elements: models, data, and an enabling cyberinfrastructure which allows users to use these models and data in an effective manner. Under a DOE Office of Science funded STTR award, Subsurface Insights and LBNL have designed and implemented a cloud based predictive assimilation framework (PAF) which automatically ingests, quality controls and stores heterogeneous physical and chemical subsurface data and processes these data using different inversion and modeling codes to provide information on the current state and evolution of subsurface systems. PAF is implemented as a modular cloud based software application with five components: (1) data acquisition, (2) data management, (3) data assimilation and processing, (4) visualization and result delivery and (5) orchestration. Server-side, PAF uses ZF2 (a PHP web application framework) and Python, together with both open-source (ODM2) and in-house developed data models. Client-side, PAF uses CSS and JavaScript to allow for interactive data visualization and analysis. Client-side modularity of the system, which allows for a responsive interface, is achieved by implementing each core capability of PAF (such as data visualization, user configuration and control, electrical geophysical monitoring and email/SMS alerts on data streams) as a Single Page Application (SPA). One of the recent enhancements is the full integration of a number of flow, mass transport and parameter estimation codes (e.g., MODFLOW, MT3DMS, PHT3D, TOUGH, PFLOTRAN) in this framework. This integration allows for autonomous and user controlled modeling of hydrological and geochemical processes. In our presentation we will discuss our software architecture and present results from using these codes, as well as the overall performance of our framework, using hydrological, geochemical and geophysical data from the LBNL SFA2 Rifle field site.

  1. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimizing key sections of code. When moving to parallel computation, not only does code execution need to be considered but also the communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
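
    The scaling problem with process-centric views can be seen in a toy matplotlib sketch that draws one Gantt-style row per process from synthetic trace data; the rank count and timings are invented, and this is the conventional view the paper argues against rather than the proposed visualization.

```python
# Toy sketch of the process-centric Gantt view discussed above: one row per
# MPI rank, bars for compute spans. The trace data are synthetic; with
# 16,384 ranks this representation would exceed screen resolution.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
n_ranks = 32                                    # readable here; real runs are far larger
fig, ax = plt.subplots()
for rank in range(n_ranks):
    starts = np.cumsum(rng.uniform(0.1, 1.0, 5))   # synthetic event start times
    spans = [(s, 0.4) for s in starts]             # (start, duration) pairs
    ax.broken_barh(spans, (rank, 0.8))             # one horizontal row per rank
ax.set_xlabel("time (s)")
ax.set_ylabel("MPI rank")
plt.show()
```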

  2. Adaptation and perceptual norms

    NASA Astrophysics Data System (ADS)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  3. Practical Formal Verification of MPI and Thread Programs

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, Ganesh; Kirby, Robert M.

    Large-scale simulation codes in science and engineering are written using the Message Passing Interface (MPI). Shared memory threads are widely used directly, or to implement higher level programming abstractions. Traditional debugging methods for MPI or thread programs are incapable of providing useful formal guarantees about coverage. They get bogged down in the sheer number of interleavings (schedules), often missing shallow bugs. In this tutorial we will introduce two practical formal verification tools: ISP (for MPI C programs) and Inspect (for Pthread C programs). Unlike other formal verification tools, ISP and Inspect run directly on user source codes (much like a debugger). They pursue only the relevant set of process interleavings, using our own customized Dynamic Partial Order Reduction algorithms. For a given test harness, DPOR allows these tools to guarantee the absence of deadlocks, instrumented MPI object leaks and communication races (using ISP), and shared memory races (using Inspect). ISP and Inspect have been used to verify large pieces of code: in excess of 10,000 lines of MPI/C for ISP in under 5 seconds, and about 5,000 lines of Pthread/C code in a few hours (and much faster with the use of a cluster or by exploiting special cases such as symmetry) for Inspect. We will also demonstrate the Microsoft Visual Studio and Eclipse Parallel Tools Platform integrations of ISP (these will be available on the LiveCD).

  4. Impacts of DNAPL Source Treatment: Experimental and Modeling Assessment of the Benefits of Partial DNAPL Source Removal

    DTIC Science & Technology

    2009-09-01

    nuclear industry for conducting performance assessment calculations. The analytical FORTRAN code for the DNAPL source function, REMChlor, was...project. The first was to apply existing deterministic codes, such as T2VOC and UTCHEM, to the DNAPL source zone to simulate the remediation processes...but describe the spatial variability of source zones unlike one-dimensional flow and transport codes that assume homogeneity. The Lagrangian models

  5. Dynamics of normalization underlying masking in human visual cortex.

    PubMed

    Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M

    2012-02-22

    Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
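
    The static form of such a divisive gain control model can be sketched in a few lines; the exponent and semi-saturation constant below are illustrative assumptions rather than the fitted parameters, and the temporal integration of the gain pool studied in the paper is omitted.

```python
# Hedged sketch of a static divisive gain-control (normalization) response:
# the test drive is divided by a gain pool containing test and mask contrast.
# The exponent and semi-saturation constant are illustrative assumptions,
# not the parameters fitted in the study, and the gain-pool dynamics are omitted.
import numpy as np

def normalization_response(c_test, c_mask, n=2.0, sigma=0.1):
    """Response to a test stimulus in the presence of a superimposed mask."""
    return c_test**n / (c_test**n + c_mask**n + sigma**n)

mask_contrasts = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8])
# Response to a fixed 20% test as mask contrast grows: masking reduces it.
print([round(normalization_response(0.2, m), 3) for m in mask_contrasts])
```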

  6. Interactive metagenomic visualization in a Web browser.

    PubMed

    Ondov, Brian D; Bergman, Nicholas H; Phillippy, Adam M

    2011-09-30

    A critical output of metagenomic studies is the estimation of abundances of taxonomical or functional groups. The inherent uncertainty in assignments to these groups makes it important to consider both their hierarchical contexts and their prediction confidence. The current tools for visualizing metagenomic data, however, omit or distort quantitative hierarchical relationships and lack the facility for displaying secondary variables. Here we present Krona, a new visualization tool that allows intuitive exploration of relative abundances and confidences within the complex hierarchies of metagenomic classifications. Krona combines a variant of radial, space-filling displays with parametric coloring and interactive polar-coordinate zooming. The HTML5 and JavaScript implementation enables fully interactive charts that can be explored with any modern Web browser, without the need for installed software or plug-ins. This Web-based architecture also allows each chart to be an independent document, making them easy to share via e-mail or post to a standard Web server. To illustrate Krona's utility, we describe its application to various metagenomic data sets and its compatibility with popular metagenomic analysis tools. Krona is both a powerful metagenomic visualization tool and a demonstration of the potential of HTML5 for highly accessible bioinformatic visualizations. Its rich and interactive displays facilitate more informed interpretations of metagenomic analyses, while its implementation as a browser-based application makes it extremely portable and easily adopted into existing analysis packages. Both the Krona rendering code and conversion tools are freely available under a BSD open-source license, and available from: http://krona.sourceforge.net.

  7. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    PubMed Central

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in the early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We found that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit an elevated non-linear response when the bar stimulated both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All of these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
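
    A minimal single-layer sketch in the same predictive coding spirit is shown below: an estimate of hidden causes is updated to reduce both the bottom-up prediction error and its mismatch with a top-down prediction. The layer sizes, learning rate, and random weights are assumptions for illustration; this is not the three-level network simulated in the study.

```python
# Minimal sketch of one layer of hierarchical predictive coding: the layer's
# estimate r is adjusted to reduce the bottom-up prediction error (input - U r)
# while being pulled toward the top-down prediction r_td. Sizes, the learning
# rate, and the random weights are assumptions; this is not the study's network.
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(16, 4)) * 0.1   # generative weights: hidden causes -> input
x = rng.normal(size=16)              # bottom-up input (e.g. a bar-stimulus patch)
r_td = np.zeros(4)                   # top-down prediction from the level above
r = np.zeros(4)                      # current estimate of hidden causes

for _ in range(200):
    err_bu = x - U @ r               # bottom-up prediction error
    err_td = r_td - r                # mismatch with the top-down prediction
    r += 0.05 * (U.T @ err_bu + err_td)   # gradient step on both error terms

print("remaining prediction-error energy:", float(err_bu @ err_bu))
```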

  8. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    PubMed

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information through different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inference for a single cause (source) and for two causes (for two senses, such as the visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic multisensory integration in neural centers of the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of training in both modalities, i.e., visual and proprioceptive, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioceptive training and eight subjects with visual training. The results show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for the proprioceptive error and non-significant for the visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to attend to two senses. The results of the experiments in this paper are in agreement with the results of the neural model simulation.
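
    For reference, the reliability-weighted (maximum-likelihood) combination rule that such Bayesian accounts reduce to under a single assumed cause can be written in a few lines; the variances below are illustrative and are not parameters of the recurrent population-coding model.

```python
# Hedged sketch of standard reliability-weighted (maximum-likelihood) cue
# combination, the rule causal-Bayesian models reduce to when a single common
# source is assumed; the variances are illustrative assumptions, not the
# parameters of the recurrent population-coding model described above.
def integrate(x_vis, var_vis, x_prop, var_prop):
    """Combine visual and proprioceptive position estimates by reliability."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)   # visual weight
    x_hat = w_vis * x_vis + (1 - w_vis) * x_prop           # fused estimate
    var_hat = 1 / (1 / var_vis + 1 / var_prop)             # fused uncertainty
    return x_hat, var_hat

# Example: precise vision (variance 1 cm^2) dominates noisier proprioception (4 cm^2).
print(integrate(x_vis=10.0, var_vis=1.0, x_prop=14.0, var_prop=4.0))  # -> (10.8, 0.8)
```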

  9. Semantic and visual memory codes in learning disabled readers.

    PubMed

    Swanson, H L

    1984-02-01

    Two experiments investigated whether learning disabled readers' impaired recall is due to multiple coding deficiencies. In Experiment 1, learning disabled and skilled readers viewed nonsense pictures without names or with either relevant or irrelevant names with respect to the distinctive characteristics of the picture. Both types of names improved recall of nondisabled readers, while learning disabled readers exhibited better recall for unnamed pictures. No significant difference in recall was found between name training (relevant, irrelevant) conditions within reading groups. In Experiment 2, both reading groups participated in recall training for complex visual forms labeled with unrelated words, hierarchically related words, or without labels. A subsequent reproduction transfer task showed a facilitation in performance in skilled readers due to labeling, with learning disabled readers exhibiting better reproduction for unnamed pictures. Measures of output organization (clustering) indicated that recall is related to the development of superordinate categories. The results suggest that learning disabled children's reading difficulties are due to an inability to activate a semantic representation that interconnects visual and verbal codes.

  10. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  11. Large-scale two-photon imaging revealed super-sparse population codes in the V1 superficial layer of awake monkeys.

    PubMed

    Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing

    2018-04-26

    One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.

  12. Working memory: a developmental study of phonological recoding.

    PubMed

    Palmer, S

    2000-05-01

    A cross-sectional study using children aged 3 to 7 years and a cross-sequential study using children aged between 5 and 8 years showed that the development of phonological recoding in working memory was more complex than the simple dichotomous picture portrayed in the current literature. It appears that initially children use no strategy in recall, which is proposed to represent the level of automatic activation of representations in long-term memory and the storage capacity of the central executive. This is followed by a period in which a visual strategy prevails, followed by a period of dual visual-verbal coding before the adult-like strategy of verbal coding finally emerges. The results are discussed in terms of three working memory models (Baddeley, 1990; Engle, 1996; Logie, 1996) where strategy use is seen as the development of attentional processes and phonological recoding as the development of inhibitory mechanisms in the central executive to suppress the habitual response set of visual coding.

  13. Effects of visual cue and response assignment on spatial stimulus coding in stimulus-response compatibility.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2012-01-01

    Tlauka and McKenna ( 2000 ) reported a reversal of the traditional stimulus-response compatibility (SRC) effect (faster responding to a stimulus presented on the same side than to one on the opposite side) when the stimulus appearing on one side of a display is a member of a superordinate unit that is largely on the opposite side. We investigated the effects of a visual cue that explicitly shows a superordinate unit, and of assignment of multiple stimuli within each superordinate unit to one response, on the SRC effect based on superordinate unit position. Three experiments revealed that stimulus-response assignment is critical, while the visual cue plays a minor role, in eliciting the SRC effect based on the superordinate unit position. Findings suggest bidirectional interaction between perception and action and simultaneous spatial stimulus coding according to multiple frames of reference, with contribution of each coding to the SRC effect flexibly varying with task situations.

  14. Phase II Evaluation of Clinical Coding Schemes

    PubMed Central

    Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith

    1997-01-01

    Abstract Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p <.00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p <.005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p <. 00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p <. 004) associated with a loss of clarity. Conclusion: No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record. PMID:9147343

  15. Toward a New Theory for Selecting Instructional Visuals.

    ERIC Educational Resources Information Center

    Croft, Richard S.; Burton, John K.

    This paper provides a rationale for the selection of illustrations and visual aids for the classroom. The theories that describe the processing of visuals are dual coding theory and cue summation theory. Concept attainment theory offers a basis for selecting which cues are relevant for any learning task which includes a component of identification…

  16. Visual Information Literacy: Reading a Documentary Photograph

    ERIC Educational Resources Information Center

    Abilock, Debbie

    2008-01-01

    Like a printed text, an architectural blueprint, a mathematical equation, or a musical score, a visual image is its own language. Visual literacy has three components: (1) learning; (2) thinking; and (3) communicating. A "literate" person is able to decipher the basic code and syntax, interpret the signs and symbols, correctly apply terms from an…

  17. A Visual Analysis of the 1980 Houston Republican Presidential Primary Debate.

    ERIC Educational Resources Information Center

    Hellweg, Susan A.; Phillips, Steven L.

    In partial replication of an analysis of the 1976 presidential campaign debates, two researchers analyzed the debate between Republican presidential candidates Ronald Reagan and George Bush (Houston, April 23, 1980) for its visual features (amount and type of camera shots). The visual categories by which camera shots were coded included…

  18. The Use of Final-Letter Braille Contractions: A Case Study

    ERIC Educational Resources Information Center

    Tallon, Emily M.; Herzberg, Tina S.

    2013-01-01

    Louis Braille developed a six-dot braille code in the early 1800s, thus creating an effective way for persons who are visually impaired to communicate through reading and writing (Holbrook, D'Andrea, & Sanford, 2011). Students with visual impairments require braille instruction from teachers of students with visual impairments, who are responsible…

  19. Comprehending News Videotexts: The Influence of the Visual Content

    ERIC Educational Resources Information Center

    Cross, Jeremy

    2011-01-01

    Informed by dual coding theory, this study explores the role of the visual content in L2 listeners' comprehension of news videotexts. L1 research into the visual characteristics and comprehension of news videotexts is outlined, subsequently informing the quantitative analysis of audiovisual correspondence in the news videotexts used. In each of…

  20. On the assessment of visual communication by information theory

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1993-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  1. Doclet To Synthesize UML

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet uses the doclet API to create a doclet that is passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation is leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.

  2. shiftNMFk 1.1: Robust Nonnegative matrix factorization with kmeans clustering and signal shift, for allocation of unknown physical sources, toy version for open sourcing with publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.

    This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient and high performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of the mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process in each sensor. The code is highly customizable and can be efficiently used for fast macro-analysis of data. It is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown source/signal allocation, and EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
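
    A hedged sketch of the core de-mixing step in an NMFk-style workflow is given below, using scikit-learn's NMF and k-means as stand-ins: repeated nonnegative factorizations are clustered to judge the stability of extracted sources. The synthetic data, component count, and run count are assumptions, and the signal-shift handling of the actual shiftNMFk code is not reproduced.

```python
# Hedged sketch of the de-mixing step in an NMFk-style workflow: repeated
# nonnegative matrix factorizations of sensor records, followed by k-means
# clustering of the extracted sources to judge their stability. scikit-learn
# is used as a stand-in; signal-shift handling and automatic selection of the
# number of sources in the real shiftNMFk code are not reproduced here.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
true_sources = np.abs(rng.normal(size=(3, 200)))   # 3 unknown nonnegative signals
mixing = np.abs(rng.normal(size=(10, 3)))          # 10 uncorrelated sensors
X = mixing @ true_sources                          # observed sensor records

runs = [
    NMF(n_components=3, init="random", random_state=s, max_iter=500).fit(X).components_
    for s in range(5)                              # repeated factorizations
]
stacked = np.vstack(runs)                          # all extracted source estimates
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(stacked)
print("cluster sizes (stable sources cluster tightly):", np.bincount(labels))
```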

  3. 3D topology of orientation columns in visual cortex revealed by functional optical coherence tomography.

    PubMed

    Nakamichi, Yu; Kalatsky, Valery A; Watanabe, Hideyuki; Sato, Takayuki; Rajagopalan, Uma Maheswari; Tanifuji, Manabu

    2018-04-01

    Orientation tuning is a canonical neuronal response property of six-layer visual cortex that is encoded in pinwheel structures with center orientation singularities. Optical imaging of intrinsic signals enables us to map these surface two-dimensional (2D) structures, whereas lack of appropriate techniques has not allowed us to visualize depth structures of orientation coding. In the present study, we performed functional optical coherence tomography (fOCT), a technique capable of acquiring a 3D map of the intrinsic signals, to study the topology of orientation coding inside the cat visual cortex. With this technique, for the first time, we visualized columnar assemblies in orientation coding that had been predicted from electrophysiological recordings. In addition, we found that the columnar structures were largely distorted around pinwheel centers: center singularities were not rigid straight lines running perpendicularly to the cortical surface but formed twisted string-like structures inside the cortex that turned and extended horizontally through the cortex. Looping singularities were observed with their respective termini accessing the same cortical surface via clockwise and counterclockwise orientation pinwheels. These results suggest that a 3D topology of orientation coding cannot be fully anticipated from 2D surface measurements. Moreover, the findings demonstrate the utility of fOCT as an in vivo mesoscale imaging method for mapping functional response properties of cortex in the depth axis. NEW & NOTEWORTHY We used functional optical coherence tomography (fOCT) to visualize three-dimensional structure of the orientation columns with millimeter range and micrometer spatial resolution. We validated vertically elongated columnar structure in iso-orientation domains. The columnar structure was distorted around pinwheel centers. An orientation singularity formed a string with tortuous trajectories inside the cortex and connected clockwise and counterclockwise pinwheel centers in the surface orientation map. The results were confirmed by comparisons with conventional optical imaging and electrophysiological recordings.

  4. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts, based on "image walls" presenting the local, regional and/or national surveillance patterns, with specialized field epidemiologists assigned to validate the signals detected. PMID:19515246
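
    The white-to-red color coding of cells can be illustrated with a tiny helper that maps an observed sign frequency, normalized against a reference maximum, to an RGB value; the normalization and the linear ramp are illustrative assumptions rather than the exact scale used in the paper.

```python
# Toy sketch of the white-to-red color coding described above: a sign's
# observed frequency, normalized to [0, 1], is mapped to an RGB cell color.
# The normalization and linear ramp are illustrative assumptions.
def frequency_to_rgb(freq, max_freq):
    """Return an (R, G, B) tuple ranging from white (freq=0) to pure red (freq=max)."""
    t = min(max(freq / max_freq, 0.0), 1.0)
    return (255, int(255 * (1 - t)), int(255 * (1 - t)))

print(frequency_to_rgb(3, 12))    # pale red cell
print(frequency_to_rgb(12, 12))   # saturated red cell (possible alert)
```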

  5. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Explaining neural signals in human visual cortex with an associative learning model.

    PubMed

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  7. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    NASA Astrophysics Data System (ADS)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM), capturing dependencies between the source and channel coding components, sets the foundation for optimal design of techniques for joint decoding of source and channel codes. The problem has been widely addressed in the research community by considering both fixed-length codes (FLC) and the variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper surveys recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations using real image and video decoding systems.

  8. Web-based network analysis and visualization using CellMaps

    PubMed Central

    Salavert, Francisco; García-Alonso, Luz; Sánchez, Rubén; Alonso, Roberto; Bleda, Marta; Medina, Ignacio; Dopazo, Joaquín

    2016-01-01

    Summary: CellMaps is an HTML5 open-source web tool that allows displaying, editing, exploring and analyzing biological networks as well as integrating metadata into them. Computations and analyses are remotely executed in high-end servers, and all the functionalities are available through RESTful web services. CellMaps can easily be integrated in any web page by using an available JavaScript API. Availability and Implementation: The application is available at: http://cellmaps.babelomics.org/ and the code can be found in: https://github.com/opencb/cell-maps. The client is implemented in JavaScript and the server in C and Java. Contact: jdopazo@cipf.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27296979

  9. Web-based network analysis and visualization using CellMaps.

    PubMed

    Salavert, Francisco; García-Alonso, Luz; Sánchez, Rubén; Alonso, Roberto; Bleda, Marta; Medina, Ignacio; Dopazo, Joaquín

    2016-10-01

    CellMaps is an HTML5 open-source web tool that allows displaying, editing, exploring and analyzing biological networks as well as integrating metadata into them. Computations and analyses are remotely executed in high-end servers, and all the functionalities are available through RESTful web services. CellMaps can easily be integrated in any web page by using an available JavaScript API. The application is available at: http://cellmaps.babelomics.org/ and the code can be found in: https://github.com/opencb/cell-maps. The client is implemented in JavaScript and the server in C and Java. jdopazo@cipf.es Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  10. The Infobiotics Workbench: an integrated in silico modelling platform for Systems and Synthetic Biology.

    PubMed

    Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio

    2011-12-01

    The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated using either stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.

  11. Programming (Tips) for Physicists & Engineers

    ScienceCinema

    Ozcan, Erkcan

    2018-02-19

    Programming for today's physicists and engineers. Work environment: today's astroparticle, accelerator experiments and information industry rely on large collaborations. Need more than ever: code sharing/reuse, code building--framework integration, documentation and good visualization, working remotely, not reinventing the wheel.

  12. Programming (Tips) for Physicists & Engineers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozcan, Erkcan

    2010-07-13

    Programming for today's physicists and engineers. Work environment: today's astroparticle, accelerator experiments and information industry rely on large collaborations. Need more than ever: code sharing/reuse, code building--framework integration, documentation and good visualization, working remotely, not reinventing the wheel.

  13. An alpha-numeric code for representing N-linked glycan structures in secreted glycoproteins.

    PubMed

    Yusufi, Faraaz Noor Khan; Park, Wonjun; Lee, May May; Lee, Dong-Yup

    2009-01-01

    Advances in high-throughput techniques have led to the creation of increasing amounts of glycome data. The storage and analysis of this data would benefit greatly from a compact notation for describing glycan structures that can be easily stored and interpreted by computers. Towards this end, we propose a fixed-length alpha-numeric code for representing N-linked glycan structures commonly found in secreted glycoproteins from mammalian cell cultures. This code, GlycoDigit, employs a pre-assigned alpha-numeric index to represent the monosaccharides attached in different branches to the core glycan structure. The present branch-centric representation allows us to visualize the structure while the numerical nature of the code makes it machine readable. In addition, a difference operator can be defined to quantitatively differentiate between glycan structures for further analysis. The usefulness and applicability of GlycoDigit were demonstrated by constructing and visualizing an N-linked glycosylation network.
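
    As a purely hypothetical illustration of the branch-centric, fixed-length idea described above (the abstract does not give GlycoDigit's actual index assignments, so the digits below are invented), a short Python sketch of such a code and a difference operator:

        # Hypothetical fixed-length, branch-centric code and difference operator.
        # The monosaccharide indexes are invented; GlycoDigit's real assignments
        # are not given in the abstract.
        MONOSACCHARIDE_INDEX = {"absent": 0, "GlcNAc": 1, "Gal": 2, "NeuAc": 3, "Fuc": 4}

        def encode(branches):
            """Map one monosaccharide per branch slot to a fixed-length digit string."""
            return "".join(str(MONOSACCHARIDE_INDEX[m]) for m in branches)

        def difference(code_a, code_b):
            """Count branch positions at which two equally long codes differ."""
            assert len(code_a) == len(code_b)
            return sum(a != b for a, b in zip(code_a, code_b))

        g1 = encode(["GlcNAc", "Gal", "NeuAc", "absent"])   # '1230'
        g2 = encode(["GlcNAc", "Gal", "absent", "absent"])  # '1200'
        print(g1, g2, difference(g1, g2))                   # 1230 1200 1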

  14. Temporal Processing in the Olfactory System: Can We See a Smell?

    PubMed Central

    Gire, David H.; Restrepo, Diego; Sejnowski, Terrence J.; Greer, Charles; De Carlos, Juan A.; Lopez-Mascaraque, Laura

    2013-01-01

    Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear, on the surface, to be similar temporal coding strategies to convey information to higher areas in the brain. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by the neuroanatomical studies of Ramon y Cajal over a century ago. Consideration of differences in neural activity in sensory systems contributes to generating new approaches to understand signal processing. PMID:23664611

  15. Use of fluorescent proteins and color-coded imaging to visualize cancer cells with different genetic properties.

    PubMed

    Hoffman, Robert M

    2016-03-01

    Fluorescent proteins are very bright and available in spectrally-distinct colors, enabling the imaging of color-coded cancer cells growing in vivo and therefore the distinction of cancer cells with different genetic properties. Non-invasive and intravital imaging of cancer cells with fluorescent proteins allows the visualization of distinct genetic variants of cancer cells down to the cellular level in vivo. Cancer cells with increased or decreased ability to metastasize can be distinguished in vivo. Gene exchange in vivo, which enables low-metastatic cancer cells to convert to high-metastatic ones, can be imaged in vivo with color coding. Cancer stem-like and non-stem cells can be distinguished in vivo by color-coded imaging. These properties also demonstrate the vast superiority of imaging cancer cells in vivo with fluorescent proteins over photon counting of luciferase-labeled cancer cells.

  16. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.

  17. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest-match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower-dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, whereas the images produced with the local minimax distortion show good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
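
    The multidimensional algorithm itself cannot be reproduced from the abstract, but as a point of reference, here is a minimal Python sketch of the one-dimensional Lempel-Ziv (LZ78-style) incremental parsing with dictionary augmentation that MDIP generalizes:

        def lz78_parse(sequence):
            """One-dimensional LZ78-style incremental parsing with dictionary augmentation."""
            dictionary = {"": 0}        # phrase -> index
            phrases = []                # (index of longest matched prefix, next symbol)
            current = ""
            for symbol in sequence:
                candidate = current + symbol
                if candidate in dictionary:
                    current = candidate                       # keep extending the match
                else:
                    phrases.append((dictionary[current], symbol))
                    dictionary[candidate] = len(dictionary)   # augment the dictionary
                    current = ""
            if current:                                       # flush an unfinished match
                phrases.append((dictionary[current], None))
            return phrases

        print(lz78_parse("abababcab"))  # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'c'), (3, None)]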

  18. Pathogen metadata platform: software for accessing and analyzing pathogen strain information.

    PubMed

    Chang, Wenling E; Peterson, Matthew W; Garay, Christopher D; Korves, Tonia

    2016-09-15

    Pathogen metadata includes information about where and when a pathogen was collected and the type of environment it came from. Along with genomic nucleotide sequence data, this metadata is growing rapidly and becoming a valuable resource not only for research but for biosurveillance and public health. However, current freely available tools for analyzing this data are geared towards bioinformaticians and/or do not provide summaries and visualizations needed to readily interpret results. We designed a platform to easily access and summarize data about pathogen samples. The software includes a PostgreSQL database that captures metadata useful for disease outbreak investigations, and scripts for downloading and parsing data from NCBI BioSample and BioProject into the database. The software provides a user interface to query metadata and obtain standardized results in an exportable, tab-delimited format. To visually summarize results, the user interface provides a 2D histogram for user-selected metadata types and mapping of geolocated entries. The software is built on the LabKey data platform, an open-source data management platform, which enables developers to add functionalities. We demonstrate the use of the software in querying for a pathogen serovar and for genome sequence identifiers. This software enables users to create a local database for pathogen metadata, populate it with data from NCBI, easily query the data, and obtain visual summaries. Some of the components, such as the database, are modular and can be incorporated into other data platforms. The source code is freely available for download at https://github.com/wchangmitre/bioattribution .

  19. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio ConvolvotronsTM by Crystal River Engineering and coupled to the listener with a Polhemus IsotrakTM tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software designs of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  20. An Efficient Variable Length Coding Scheme for an IID Source

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
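
    A minimal Python sketch of the building blocks behind such a scheme, assuming a binary source with a dominant symbol: runs of the dominant symbol are extracted and the run lengths are then Huffman-coded. The actual ARH scheme alternates two Huffman codes, which this simplified sketch does not do:

        import heapq
        from collections import Counter

        def run_lengths(bits, dominant=0):
            """Lengths of dominant-symbol runs between non-dominant symbols."""
            runs, count = [], 0
            for b in bits:
                if b == dominant:
                    count += 1
                else:
                    runs.append(count)
                    count = 0
            runs.append(count)                      # trailing run
            return runs

        def huffman_code(symbols):
            """Standard Huffman code over the observed symbol frequencies."""
            heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(symbols).items())]
            heapq.heapify(heap)
            if len(heap) == 1:                      # degenerate single-symbol source
                return {next(iter(heap[0][2])): "0"}
            while len(heap) > 1:
                lo, hi = heapq.heappop(heap), heapq.heappop(heap)
                merged = {s: "0" + c for s, c in lo[2].items()}
                merged.update({s: "1" + c for s, c in hi[2].items()})
                heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
            return heap[0][2]

        bits = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
        runs = run_lengths(bits)                    # [3, 2, 4, 1]
        print(runs, huffman_code(runs))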

  1. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  2. Source Code Plagiarism--A Student Perspective

    ERIC Educational Resources Information Center

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  3. Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1992-01-01

    Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.

  4. Neuronal population coding of perceived and memorized visual features in the lateral prefrontal cortex

    PubMed Central

    Mendoza-Halliday, Diego; Martinez-Trujillo, Julio C.

    2017-01-01

    The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features. PMID:28569756

  5. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  6. Hybrid concatenated codes and iterative decoding

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Pollara, Fabrizio (Inventor)

    2000-01-01

    Several improved turbo code apparatuses and methods are disclosed. The invention encompasses several classes: (1) A data source is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each encoder outputs a code element which may be transmitted or stored. A parallel decoder provides the ability to decode the code elements to derive the original source information d without use of a received data signal corresponding to d. The output may be coupled to a multilevel trellis-coded modulator (TCM). (2) A data source d is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each of the encoders outputs a code element. In addition, the original data source d is output from the encoder. All of the output elements are coupled to a TCM. (3) At least two data sources are applied to two or more encoders with an interleaver between each source and each of the second and subsequent encoders. The output may be coupled to a TCM. (4) At least two data sources are applied to two or more encoders with at least two interleavers between each source and each of the second and subsequent encoders. (5) At least one data source is applied to one or more serially linked encoders through at least one interleaver. The output may be coupled to a TCM. The invention includes a novel way of terminating a turbo coder.

  7. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons, and many more. The energy budget in the outdoor applications of WVSN is limited to the batteries and the frequent replacement of batteries is usually not desirable. So the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing the images and wide communication bandwidth for the transmission of the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and hence bi-level image coding methods will efficiently reduce the information amount in these segmented images. But the compression rate of the bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for designing other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than that of bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (it requires only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of the Regions of Interest (ROIs) in the change frame efficiently reduce the information amount in the change frame. But, if the number of objects in the change frames is higher than a certain level, then the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSN. We proposed to implement all three compression techniques, i.e. image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the results of the three compression techniques. In this way the compression performance of the BVC will never become worse than that of image coding. We concluded that the compression efficiency of BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.
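
    A minimal sketch of the change-coding idea in Python/NumPy, with zlib standing in for the entropy coder and the ROI stage omitted: adjacent bi-level frames are XORed so that only changed pixels remain, and the smaller of the two candidate encodings is kept, mirroring the BVC selection step described above:

        import zlib
        import numpy as np

        def code_size(frame):
            """Stand-in for the entropy coder: deflate the bit-packed bi-level frame."""
            return zlib.compress(np.packbits(frame).tobytes())

        def encode_frame(current, previous=None):
            """Return ('image' | 'change', payload), keeping whichever is smaller."""
            image_code = code_size(current)
            if previous is None:
                return "image", image_code
            change_code = code_size(np.bitwise_xor(current, previous))  # change coding: XOR only
            if len(change_code) < len(image_code):
                return "change", change_code
            return "image", image_code

        rng = np.random.default_rng(0)
        prev = (rng.random((64, 64)) > 0.5).astype(np.uint8)  # toy bi-level background
        curr = prev.copy()
        curr[10:12, 20:25] ^= 1                               # a small moving object
        mode, payload = encode_frame(curr, prev)
        print(mode, len(payload))                             # 'change' wins for small changes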

  8. Combining Open-Source Packages for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Schmidt, Albrecht; Grieger, Björn; Völk, Stefan

    2015-04-01

    The science planning of the ESA Rosetta mission has presented challenges which were addressed by combining various open-source software packages, such as the SPICE toolkit, the Python language and the Web graphics library three.js. The challenge was to compute certain parameters from a pool of trajectories and (possible) attitudes to describe the behaviour of the spacecraft. To be able to do this declaratively and efficiently, a C library was implemented that allows the SPICE toolkit to be interfaced for geometrical computations from the Python language and processes as much data as possible during one subroutine call. To minimise the lines of code one has to write, special care was taken to ensure that the bindings were idiomatic and thus integrate well into the Python language and ecosystem. When done well, this very much simplifies the structure of the code and facilitates the testing for correctness by automatic test suites and visual inspections. For rapid visualisation and confirmation of correctness of results, the geometries were visualised with the three.js library, a popular Javascript library for displaying three-dimensional graphics in a Web browser. Programmatically, this was achieved by generating data files from SPICE sources that were included into templated HTML and displayed by a browser, thus making them easily accessible to interested parties at large. As feedback came and new ideas were to be explored, the authors benefited greatly from the design of the Python-to-SPICE library, which allowed the expression of algorithms to be concise and easier to communicate. In summary, by combining several well-established open-source tools, we were able to put together a flexible computation and visualisation environment that helped communicate and build confidence in planning ideas.
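
    The authors' own C bindings are not included in the abstract, but the same kind of geometry computation can be sketched with the community spiceypy package; the meta-kernel file name and body names below are assumptions, and the snippet presumes the corresponding Rosetta SPICE kernels are available locally:

        import spiceypy as spice

        spice.furnsh("rosetta.tm")                   # assumed meta-kernel listing the required kernels
        et = spice.str2et("2014-08-06T00:00:00")     # UTC string -> ephemeris time
        # Spacecraft position relative to the comet in the J2000 frame, no aberration correction.
        pos, light_time = spice.spkpos("ROSETTA", et, "J2000", "NONE", "CHURYUMOV-GERASIMENKO")
        print(pos, light_time)                       # positions like these could be fed to three.js
        spice.kclear()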

  9. Secure web-based invocation of large-scale plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.

    2004-12-01

    We present our design and initial implementation of a web-based system for running, both in parallel and serial, Particle-In-Cell (PIC) codes for plasma simulations with automatic post processing and generation of visual diagnostics.

  10. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.

  11. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  12. Impaired Letter-String Processing in Developmental Dyslexia: What Visual-to-Phonology Code Mapping Disorder?

    ERIC Educational Resources Information Center

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-01-01

    Poor parallel letter-string processing in developmental dyslexia was taken as evidence of poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In…

  13. Shape Similarity, Better than Semantic Membership, Accounts for the Structure of Visual Object Representations in a Population of Monkey Inferotemporal Neurons

    PubMed Central

    DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide

    2013-01-01

    The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700

  14. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zehtabian, M; Zaker, N; Sina, S

    2015-06-15

    Purpose: Different versions of MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137 were calculated in water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of MCNP4C code was changed to ENDF/B-VI release 8 which is used in MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code, were compared with other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6cm were found to be about 17% and 28% for I-125 and Pd-103 respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6cm. Conclusion: The results indicate that using MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.

  15. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    ERIC Educational Resources Information Center

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  16. Finding Resolution for the Responsible Transparency of Economic Models in Health and Medicine.

    PubMed

    Padula, William V; McQueen, Robert Brett; Pronovost, Peter J

    2017-11-01

    The Second Panel on Cost-Effectiveness in Health and Medicine's recommendations for the conduct, methodological practices, and reporting of cost-effectiveness analyses leave a number of questions unanswered with respect to the implementation of a transparent, open source code interface for economic models. Making economic model source code openly available could be positive and progressive for the field; however, several unintended consequences of this system should first be considered before complete implementation of this model. First, there is the concern regarding intellectual property rights that modelers have to their analyses. Second, the open source code could make analyses more accessible to inexperienced modelers, leading to inaccurate or misinterpreted results. We propose several resolutions to these concerns. The field should establish a licensing system of open source code such that the model originators maintain control of the code use and grant permissions to other investigators who wish to use it. The field should also be more forthcoming towards the teaching of cost-effectiveness analysis in medical and health services education so that providers and other professionals are familiar with economic modeling and able to conduct analyses with open source code. These types of unintended consequences need to be fully considered before the field is prepared to move forward into an era of model transparency with open source code.

  17. Sensor system for heart sound biomonitor

    NASA Astrophysics Data System (ADS)

    Maple, Jarrad L.; Hall, Leonard T.; Agzarian, John; Abbott, Derek

    1999-09-01

    Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually, rather than heard through a conventional stethoscope. A system whereby a digital stethoscope interfaces directly to a PC is presented, along with the signal processing algorithms adopted. The sensor is based on a noise-cancellation microphone with a 450 Hz bandwidth and is sampled at 2250 samples/sec with 12-bit resolution. Further to this, we discuss for comparison a piezo-based sensor with a 1 kHz bandwidth. A major problem is that the recording of the heart sound into these devices is subject to unwanted background noise, which can override the heart sound and result in a poor visual representation. This noise originates from various sources such as skin contact with the stethoscope diaphragm, lung sounds, and other surrounding sounds such as speech. Furthermore, we demonstrate a solution using 'wavelet denoising'. The wavelet transform is used because of the similarity between the shape of wavelets and the time-domain shape of a heartbeat sound. Thus coding of the waveform into the wavelet domain is achieved with relatively few wavelet coefficients, in contrast to the many Fourier components that would result from conventional decomposition. We show that the background noise can be dramatically reduced by a thresholding operation in the wavelet domain. The principle is that the background noise codes into many small broadband wavelet coefficients that can be removed without significant degradation of the signal of interest.
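
    A minimal sketch of wavelet-domain thresholding of the kind described above, using the PyWavelets package on a toy signal; the wavelet family and threshold rule are illustrative choices rather than the authors' exact settings:

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=5):
            """Decompose, soft-threshold the detail coefficients, reconstruct."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
            threshold = sigma * np.sqrt(2 * np.log(len(signal)))    # universal threshold
            denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)

        fs = 2250                                                   # sampling rate quoted above
        t = np.arange(0, 1.0, 1.0 / fs)
        clean = np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t)         # toy heart-sound-like burst
        noisy = clean + 0.2 * np.random.randn(len(t))
        denoised = wavelet_denoise(noisy)[: len(t)]
        print(np.std(noisy - clean), np.std(denoised - clean))      # residual noise drops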

  18. Visual Thinking in Teaching History: Reading the Visual Thinking Skills of 12 Year-Old Pupils in Istanbul

    ERIC Educational Resources Information Center

    Dilek, Gulcin

    2010-01-01

    This study aims to explore the visual thinking skills of some sixth grade (12-13 year-old) primary pupils who created visual interpretations during history courses. Pupils drew pictures describing historical scenes or events based on visual sources. They constructed these illustrations by using visual and written primary and secondary sources in…

  19. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...

  20. Redundant Coding in Visual Search Displays: Effects of Shape and Colour.

    DTIC Science & Technology

    1997-02-01

    results for refining color selection algorithms and for color coding in situations where the gamut of available colors is limited. In a secondary set of analyses, we note large performance differences as a function of target shape.

  1. Nine-year-old children use norm-based coding to visually represent facial expression.

    PubMed

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. Optimization of radiation shielding material aiming at compactness, lightweight, and low activation for a vehicle-mounted accelerator-driven D-T neutron source.

    PubMed

    Cai, Yao; Hu, Huasi; Lu, Shuangying; Jia, Qinggang

    2018-05-01

    To minimize the size and weight of a vehicle-mounted accelerator-driven D-T neutron source and protect workers from unnecessary irradiation after the equipment shutdown, a method to optimize radiation shielding material aiming at compactness, lightweight, and low activation for the fast neutrons was developed. The method employed genetic algorithm, combining MCNP and ORIGEN codes. A series of composite shielding material samples were obtained by the method step by step. The volume and weight needed to build a shield (assumed as a coaxial tapered cylinder) were adopted to compare the performance of the materials visually and conveniently. The results showed that the optimized materials have excellent performance in comparison with the conventional materials. The "MCNP6-ACT" method and the "rigorous two steps" (R2S) method were used to verify the activation grade of the shield irradiated by D-T neutrons. The types of radionuclide, the energy spectrum of corresponding decay gamma source, and the variation in decay gamma dose rate were also computed. Copyright © 2018 Elsevier Ltd. All rights reserved.
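
    Only the optimization loop can be sketched here, since the real fitness evaluation launches MCNP and ORIGEN runs; the placeholder fitness function and the three-component material vector below are assumptions made purely for illustration:

        import random

        def fitness(fractions):
            """Placeholder: in the described method this would launch MCNP/ORIGEN runs
            and score shield volume, weight and activation; here it simply rewards an
            arbitrary target composition for illustration."""
            return -abs(fractions[0] - 0.6) - abs(fractions[1] - 0.3) - abs(fractions[2] - 0.1)

        def normalize(ind):
            total = sum(ind)
            return [x / total for x in ind]

        def evolve(pop_size=20, generations=50, mutation=0.1):
            population = [normalize([random.random() for _ in range(3)]) for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                parents = population[: pop_size // 2]              # elitist selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    child = [(x + y) / 2 for x, y in zip(a, b)]    # crossover
                    child = [max(1e-6, x + random.gauss(0, mutation)) for x in child]  # mutation
                    children.append(normalize(child))
                population = parents + children
            return max(population, key=fitness)

        print(evolve())   # best mass-fraction vector found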

  3. The Astrophysics Source Code Library by the numbers

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  4. Astrophysics Source Code Library: Incite to Cite!

    NASA Astrophysics Data System (ADS)

    DuPrie, K.; Allen, A.; Berriman, B.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallen, J. F.

    2014-05-01

    The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an on-line registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement, with presentations at various conferences, numerous blog posts and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.

  5. Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.

    2013-10-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  6. Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.

    2006-06-01

    The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.

  7. Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding.

    PubMed

    Wittevrongel, Benjamin; Van Wolputte, Elia; Van Hulle, Marc M

    2017-11-08

    When encoding visual targets using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer's occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), the phase lags of which can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for its reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming, and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore the 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we also report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend excluding the first 150 ms of the trials from decoding when relying on a single presentation of the stimulus sequence.
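
    A minimal, synthetic-data sketch of the spatiotemporal beamforming principle (not the authors' exact pipeline): each trial is flattened into a space-time vector and an LCMV-style beamformer is built to pass an assumed cVEP template with unit gain while suppressing everything else:

        import numpy as np

        rng = np.random.default_rng(0)
        channels, samples, trials = 8, 60, 40
        template = rng.standard_normal((channels, samples))   # assumed cVEP template for one target
        X = 0.5 * template[None] + rng.standard_normal((trials, channels, samples))

        features = X.reshape(trials, -1)                       # flatten trials to space-time vectors
        a = template.ravel()
        R = np.cov(features, rowvar=False) + 1e-3 * np.eye(features.shape[1])  # regularized covariance
        Rinv_a = np.linalg.solve(R, a)
        w = Rinv_a / (a @ Rinv_a)                              # LCMV weights: unit gain on the template

        outputs = features @ w
        print(outputs.mean())                                  # ~0.5, the template amplitude in the data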

  8. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  9. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    PubMed

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.

  10. qtcm 0.1.2: A Python Implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation model

    NASA Astrophysics Data System (ADS)

    Lin, J. W.-B.

    2008-10-01

    Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows the order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
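
    As a generic illustration of this mixed-language approach (not qtcm's actual build system), the sketch below compiles a tiny Fortran routine with numpy.f2py and drives it from Python; it assumes a Fortran compiler is available on the system:

        import pathlib
        import subprocess
        import sys

        fortran_src = """
        subroutine step(u, n, dt)
          integer, intent(in) :: n
          real(8), intent(inout) :: u(n)
          real(8), intent(in) :: dt
          u = u - dt * u          ! toy stand-in for the 'core numerics' kept in Fortran
        end subroutine step
        """
        pathlib.Path("core.f90").write_text(fortran_src)
        subprocess.run([sys.executable, "-m", "numpy.f2py", "-c", "core.f90", "-m", "core"], check=True)

        import numpy as np
        import core                                  # the freshly built extension module

        u = np.ones(5)
        core.step(u, 0.1)                            # n is inferred from the array by f2py
        print(u)                                     # Python-side analysis/visualization goes here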

  11. qtcm 0.1.2: a Python implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation Model

    NASA Astrophysics Data System (ADS)

    Lin, J. W.-B.

    2009-02-01

    Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows the order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.

  12. A Python Implementation of an Intermediate-Level Tropical Circulation Model and Implications for How Modeling Science is Done

    NASA Astrophysics Data System (ADS)

    Lin, J. W. B.

    2015-12-01

    Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran, to optimize model performance but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows the order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.

  13. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes.

    PubMed

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-08-21

    The basic idea of Voxel2MCNP is to provide a framework that supports users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole- and partial-body counter calibration and the calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies, and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Extensions have also been added to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and the calculation of dose conversion coefficients are given to illustrate its application.
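
    Voxel2MCNP's actual data model and file format are not reproduced here; the sketch below only illustrates the general pattern the abstract describes, keeping a generic, code-independent scene description and rendering Monte Carlo input from it as a final step. All element and attribute names are invented for the example:

      import xml.etree.ElementTree as ET

      # Generic scene description, independent of any Monte Carlo code.
      scene = ET.Element('scenario')
      ET.SubElement(scene, 'source', nuclide='Co-60', activity_Bq='1e6')
      ET.SubElement(scene, 'detector', model='HPGe', cell='10')

      def to_mcnp_cards(scenario):
          # Render the generic model into toy MCNP-style input cards; a real
          # generator would emit full cell, surface, material, source, and
          # tally sections.
          cards = ['sdef par=p  $ photon source: ' + s.get('nuclide')
                   for s in scenario.findall('source')]
          cards += ['f8:p ' + d.get('cell') + '  $ pulse-height tally in detector'
                    for d in scenario.findall('detector')]
          return '\n'.join(cards)

      print(to_mcnp_cards(scene))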

  14. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and for two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background-skipping technique. An extension code is constructed; it requires very little storage space and operates efficiently. Simulation results are presented for the various coding schemes and source codes.
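
    The abstract does not define double delta coding precisely; interpreting it as second-order differencing of pixel values along a scanline, a minimal sketch of the encode/decode pair:

      import numpy as np

      def double_delta_encode(row):
          # First differences exploit the correlation between adjacent
          # pixels; differencing again shrinks the residuals further when
          # the scanline is locally smooth, as in satellite imagery.
          d1 = np.diff(row, prepend=0)
          d2 = np.diff(d1, prepend=0)
          return d2

      def double_delta_decode(d2):
          # Two cumulative sums undo the two differencing steps exactly.
          return np.cumsum(np.cumsum(d2))

      row = np.array([100, 102, 105, 109, 114])
      enc = double_delta_encode(row)          # [100, -98, 1, 1, 1]
      assert np.array_equal(double_delta_decode(enc), row)

    After the first couple of elements the residuals are small and highly compressible, which is what the subsequent entropy coding and background-skipping steps exploit.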

  15. Ink-constrained halftoning with application to QR codes

    NASA Astrophysics Data System (ADS)

    Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary

    2014-01-01

    This paper examines adding visually significant, human-recognizable data to QR codes without affecting their machine readability, using known methods from image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for the examples given in the paper. Numerous examples of QR codes with embedded images are included.
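
    The halftoning procedure itself is not specified in the abstract; a common variant of the idea pins the centre pixels of each module to the QR value (the pixels decoders sample) and renders the remaining pixels from the secondary image. A toy sketch, with simple thresholding standing in for a proper halftoning algorithm such as error diffusion:

      import numpy as np

      def embed_image(qr_modules, gray, pixels_per_module=3):
          # Each QR module becomes a p x p pixel block: the centre pixel
          # keeps the module's value (machine readability), the remaining
          # pixels are thresholded from the secondary grayscale image
          # (human recognizability).
          p = pixels_per_module
          h, w = qr_modules.shape
          out = (gray[:h * p, :w * p] > 0.5).astype(np.uint8)
          c = p // 2
          out[c::p, c::p] = qr_modules   # pin module centres to QR pattern
          return out

      qr = (np.random.default_rng(3).random((21, 21)) > 0.5).astype(np.uint8)
      gray = np.random.default_rng(4).random((63, 63))
      out = embed_image(qr, gray)        # centre pixels reproduce qr exactly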

  16. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    PubMed Central

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Because sensors in wireless sensor networks are energy-limited and channel conditions are often poor, there is an urgent need for a low-complexity coding method that combines a high compression ratio with resistance to channel noise. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels, and broadcast channels are introduced in turn. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560
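
    The paper's own scheme is not detailed in the abstract; a textbook building block for distributed coding of correlated sources is syndrome-based (Slepian-Wolf) coding, sketched here with a (7,4) Hamming code. The encoder sends only the 3-bit syndrome of its 7-bit block, and the decoder recovers the block from correlated side information that is assumed to differ from it in at most one bit:

      import numpy as np

      # Parity-check matrix of the (7,4) Hamming code; column i is the
      # binary representation of i, so a single-bit difference at position i
      # yields syndrome i.
      H = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

      def syndrome(v):
          return H @ v % 2   # 3 bits transmitted instead of 7

      def decode_with_side_info(s_y, x):
          # Recover y from its syndrome s_y using side information x,
          # under the correlation model that x and y differ in <= 1 bit.
          diff = (syndrome(x) + s_y) % 2         # syndrome of x XOR y
          pos = diff[0] * 4 + diff[1] * 2 + diff[2]
          y = x.copy()
          if pos:
              y[pos - 1] ^= 1                    # flip the differing bit
          return y

      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, 7, dtype=np.uint8)
      x = y.copy(); x[3] ^= 1                    # correlated side information
      assert np.array_equal(decode_with_side_info(syndrome(y), x), y)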

  17. State-Chart Autocoder

    NASA Technical Reports Server (NTRS)

    Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward

    2007-01-01

    A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ("State charts" here signifies graphical descriptions of the states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program wraps around the generated source code and can be used to exercise its state-chart behavior. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which it is required to interact.
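
    The autocoder's actual output is not shown in the abstract; the state-machine style it targets (each state an event handler, each transition an explicit change of handler) can be sketched by hand in one of its target languages. This is an illustrative hand-written example, not generated output:

      # Each state is an event-handler method; a transition is an explicit
      # reassignment of the current handler.
      class Heater:
          def __init__(self):
              self.state = self.off            # initial state

          def dispatch(self, event):
              self.state(event)                # route event to current state

          def off(self, event):
              if event == 'POWER_ON':
                  self.state = self.heating    # off -> heating

          def heating(self, event):
              if event == 'AT_TEMP':
                  self.state = self.idle       # heating -> idle
              elif event == 'POWER_OFF':
                  self.state = self.off

          def idle(self, event):
              if event == 'POWER_OFF':
                  self.state = self.off        # idle -> off

      h = Heater()
      for ev in ['POWER_ON', 'AT_TEMP', 'POWER_OFF']:
          h.dispatch(ev)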

  18. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
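
    Of the contrasted approaches, the minimum norm estimate has a simple closed form: with lead field L, measurements b, and regularization weight λ, the source estimate is ŝ = Lᵀ(LLᵀ + λI)⁻¹b. A minimal sketch with a toy lead field (a real one comes from MRI-based head modeling, as in the study's fMRI-coregistered geometry):

      import numpy as np

      def minimum_norm_estimate(L, b, lam=1e-2):
          # Tikhonov-regularized minimum-norm inverse:
          #   s_hat = L.T @ inv(L @ L.T + lam * I) @ b
          # L: (n_sensors, n_sources) lead field; b: (n_sensors,) field map.
          n_sensors = L.shape[0]
          G = L @ L.T + lam * np.eye(n_sensors)
          return L.T @ np.linalg.solve(G, b)

      rng = np.random.default_rng(1)
      L = rng.standard_normal((64, 500))         # toy lead field
      s_true = np.zeros(500); s_true[42] = 1.0   # one active source
      b = L @ s_true                             # simulated sensor data
      s_hat = minimum_norm_estimate(L, b)        # diffuse estimate peaking
                                                 # near the true source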

  19. Practices in source code sharing in astrophysics

    NASA Astrophysics Data System (ADS)

    Shamir, Lior; Wallin, John F.; Allen, Alice; Berriman, Bruce; Teuben, Peter; Nemiroff, Robert J.; Mink, Jessica; Hanisch, Robert J.; DuPrie, Kimberly

    2013-02-01

    While software and algorithms have become increasingly important in astronomy, the majority of authors who publish computational astronomy research do not share the source code they develop, making it difficult to replicate and reuse the work. In this paper we discuss the importance of sharing scientific source code with the entire astrophysics community, and propose that journals require authors to make their code publicly available when a paper is published. That is, we suggest that a paper that involves a computer program not be accepted for publication unless the source code becomes publicly available. The adoption of such a policy by editors, editorial boards, and reviewers will improve the ability to replicate scientific results, and will also make computational astronomy methods more available to other researchers who wish to apply them to their data.

  20. Surfing a spike wave down the ventral stream.

    PubMed

    VanRullen, Rufin; Thorpe, Simon J

    2002-10-01

    Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and relative spike timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions, and (iii) top-down attentional influences from higher-order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
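
    As an illustration of the core idea (the most salient inputs fire first, so the order of spikes in the first wave itself carries the image), here is a toy rank-order encode/decode pair; the geometric decoding weights are an assumed modulation function, not the paper's model:

      import numpy as np

      def rank_order_encode(saliency):
          # Spike order of the first wave: neuron indices sorted by
          # decreasing saliency (most salient fires first).
          return np.argsort(-saliency.ravel())

      def rank_order_decode(order, shape, decay=0.995):
          # Reconstruct a saliency map from spike order alone, weighting
          # the k-th spike by decay**k so early spikes dominate.
          out = np.zeros(int(np.prod(shape)))
          out[order] = decay ** np.arange(order.size)
          return out.reshape(shape)

      img = np.random.default_rng(2).random((16, 16))
      order = rank_order_encode(img)
      approx = rank_order_decode(order, img.shape)
      # The ordering is preserved: the most salient pixels receive the
      # largest reconstructed values.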
