A Unified Air-Sea Visualization System: Survey on Gridding Structures
NASA Technical Reports Server (NTRS)
Anand, Harsh; Moorhead, Robert
1995-01-01
The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple datasets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.
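The "unified grid" requirement above amounts to resampling each model's field onto a common coordinate set before comparison. A minimal 1D sketch of that idea, with hypothetical field names and values (UASVS itself is not specified at this level of detail):

```python
import numpy as np

def regrid(src_x, src_vals, dst_x):
    """Linearly interpolate values sampled at src_x onto the grid dst_x."""
    return np.interp(dst_x, src_x, src_vals)

# Two model fields on different 1D grids (illustrative data, not real output).
ocean_x = np.linspace(0.0, 100.0, 11)   # coarse ocean-model grid
atmos_x = np.linspace(0.0, 100.0, 26)   # finer atmosphere-model grid
ocean_t = 10.0 + 0.05 * ocean_x         # e.g. sea-surface temperature
atmos_t = 12.0 + 0.04 * atmos_x         # e.g. near-surface air temperature

# Unified grid for joint visualization and air-sea diagnostics.
unified_x = np.linspace(0.0, 100.0, 51)
ocean_on_unified = regrid(ocean_x, ocean_t, unified_x)
atmos_on_unified = regrid(atmos_x, atmos_t, unified_x)
air_sea_diff = atmos_on_unified - ocean_on_unified  # a derived diagnostic
```

Once both fields live on `unified_x`, derived diagnostics such as the air-sea difference become simple elementwise operations, which is the point of the requirement.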
NASA Astrophysics Data System (ADS)
Inoue, Y.; Tsuruoka, K.; Arikawa, M.
2014-04-01
In this paper, we propose a user interface that displays visual animations on geographic maps and timelines to depict historical stories by representing causal relationships among events in a time series. We have been developing an experimental software system for the spatial-temporal visualization of historical stories on tablet computers. The proposed system helps people learn historical stories effectively through visual animations based on hierarchical structures of timelines and maps at different scales.
Adaptation of facial synthesis to parameter analysis in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yu, Lu; Zhang, Jingyu; Liu, Yunhai
2000-12-01
In MPEG-4, Facial Definition Parameters (FDPs) and Facial Animation Parameters (FAPs) are defined to animate a facial object. Most previous facial animation reconstruction systems focused on synthesizing animation from manually or automatically generated FAPs rather than from FAPs extracted from natural video scenes. In this paper, an analysis-synthesis MPEG-4 visual communication system is established in which facial animation is reconstructed from FAPs extracted from natural video scenes.
Solar System Visualization (SSV) Project
NASA Technical Reports Server (NTRS)
Todd, Jessida L.
2005-01-01
The Solar System Visualization (SSV) project aims to enhance scientific and public understanding through visual representations and modeling procedures. The SSV project's objectives are to (1) create new visualization technologies, (2) organize science observations and models, and (3) visualize science results and mission plans. The SSV project currently supports the Mars Exploration Rovers (MER) mission, the Mars Reconnaissance Orbiter (MRO), and Cassini. In support of these missions, the SSV team has produced pan and zoom animations of large mosaics to reveal details of surface features and topography, created 3D animations of science instruments and procedures, formed 3D anaglyphs from left and right stereo pairs, and animated registered multi-resolution mosaics to provide context for microscopic images.
Data Visualization and Animation Lab (DVAL) overview
NASA Technical Reports Server (NTRS)
Stacy, Kathy; Vonofenheim, Bill
1994-01-01
The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
Simple and powerful visual stimulus generator.
Kremlácek, J; Kuba, M; Kubová, Z; Vít, F
1999-02-01
We describe a cheap, simple, portable and efficient approach to visual stimulation for neurophysiology that does not require any special hardware. The method, based on an animation technique, uses the Autodesk Animator FLI format. The animation is replayed by a special program (the 'player'), which provides synchronisation pulses to the recording system via the parallel port. The 'player' runs on an IBM-compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.
Kawai, Nobuyuki; He, Hongshen
2016-01-01
Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.
Multimedia Visualizer: An Animated, Object-Based OPAC.
ERIC Educational Resources Information Center
Lee, Newton S.
1991-01-01
Describes the Multimedia Visualizer, an online public access catalog (OPAC) that uses animated visualizations to make it more user friendly. Pictures of the system are shown that illustrate the interactive objects that patrons can access, including card catalog drawers, librarian desks, and bookshelves; and access to multimedia items is described.…
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.
1992-03-01
This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
NASA Technical Reports Server (NTRS)
Brown, Alison M.
2005-01-01
Solar System Visualization products enable scientists to compare models and measurements in new ways that enhance the scientific discovery process, enhance the information content and understanding of the science results for both science colleagues and the public, and create visually appealing and intellectually stimulating visualization products. Missions supported include MER, MRO, and Cassini. Image products produced include pan and zoom animations of large mosaics to reveal the details of surface features and topography, animations into registered multi-resolution mosaics to provide context for microscopic images, 3D anaglyphs from left and right stereo pairs, and screen captures from video footage. Specific products include a three-part context animation of the Cassini Enceladus encounter highlighting images from 350 to 4 meters per pixel resolution; Mars Reconnaissance Orbiter screen captures illustrating various instruments during assembly and testing at the Payload Hazardous Servicing Facility at Kennedy Space Center; and an animation of Mars Exploration Rover Opportunity's 'Rub al Khali' panorama, where the rover was stuck in deep fine sand for more than a month. This task creates new visualization products that enable new science results and enhance the public's understanding of the Solar System and NASA's missions of exploration.
Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze
Kent, Brendon W.; Yang, Fang-Chi; Burwell, Rebecca D.
2014-01-01
Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze [1,2] that are optimized for the visual abilities of rats, permitting stronger comparisons of experimental findings with other species. In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze [1,2]. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies for better trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task. We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes. PMID:24638057
Simulation and animation of sensor-driven robots.
Chen, C; Trivedi, M M; Bidlack, C R
1994-10-01
Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system helps users visualize the motion and reaction of a sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.
Jane Kapler Smith; Donald E. Zimmerman; Carol Akerelrea; Garrett O'Keefe
2008-01-01
Natural resource managers use a variety of computer-mediated presentation methods to communicate management practices to the public. We explored the effects of using the Stand Visualization System to visualize and animate predictions from the Forest Vegetation Simulator-Fire and Fuels Extension in presentations explaining forest succession (forest growth and change...
Animation of multi-flexible body systems and its use in control system design
NASA Technical Reports Server (NTRS)
Juengst, Carl; Stahlberg, Ron
1993-01-01
Animation can greatly assist the structural dynamicist and control system analyst in understanding how multi-flexible body systems behave. For multi-flexible body systems, the structural characteristics (mode frequencies, mode shapes, and damping) change, sometimes dramatically, with large angles of rotation between bodies. With computer animation, the analyst can visualize these changes and how the system responds to active control forces and torques. A characterization of the type of system we wish to animate is presented. The lack of a clear understanding of the above effects was a key factor leading to the development of a multi-flexible body animation software package. The resulting animation software is described in some detail here, followed by its application to control system analysis. Other applications of this software can be determined on an individual basis. A number of software products are currently available that make high-speed rendering of rigid body mechanical system simulations possible; however, no such options are available for rendering flexible body mechanical system simulations. The desire for a high-speed flexible body visualization tool led to the development of the Flexible Or Rigid Mechanical System (FORMS) software, developed at the Center for Simulation and Design Optimization of Mechanical Systems at the University of Iowa. FORMS provides interactive high-speed rendering of flexible and/or rigid body mechanical system simulations, combining geometry and motion information to produce animated output. FORMS is designed to be both portable and flexible, and supports a number of different user interfaces and graphical display devices. Additional features have been added to FORMS that allow special visualization of results related to the nature of the flexible body geometric representations.
NASA Astrophysics Data System (ADS)
Whitford, Dennis J.
2002-05-01
This paper, the second of a two-part series, introduces undergraduate students to ocean wave forecasting using interactive computer-generated visualization and animation. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Fortunately, the introduction of computers in the geosciences provides a tool for addressing this problem. Computer-generated visualization and animation, accompanied by oral explanation, have been shown to be a pedagogical improvement to more traditional methods of instruction. Cartographic science and other disciplines using geographical information systems have been especially aggressive in pioneering the use of visualization and animation, whereas oceanography has not. This paper will focus on the teaching of ocean swell wave forecasting, often considered a difficult oceanographic topic due to the mathematics and physics required, as well as its interdependence on time and space. Several MATLAB ® software programs are described and offered to visualize and animate group speed, frequency dispersion, angular dispersion, propagation, and wave height forecasting of deep water ocean swell waves. Teachers may use these interactive visualizations and animations without requiring an extensive background in computer programming.
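The deep-water swell physics behind the forecasting topics named above (group speed, frequency dispersion, arrival times) is standard: phase speed c = gT/2π and group speed cg = c/2, so long-period swell outruns short-period swell. A minimal Python sketch of these relations (not the paper's MATLAB programs; the 2000 km distance is illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed(period_s):
    """Deep-water phase speed: c = g*T / (2*pi)."""
    return G * period_s / (2.0 * math.pi)

def group_speed(period_s):
    """Deep-water group speed is half the phase speed: cg = c / 2."""
    return 0.5 * phase_speed(period_s)

def travel_time_hours(distance_km, period_s):
    """Time for swell energy (moving at group speed) to cover distance_km."""
    return distance_km * 1000.0 / group_speed(period_s) / 3600.0

# Frequency dispersion: 16 s swell from a storm 2000 km away arrives
# many hours before 10 s swell generated at the same time.
for T in (16.0, 10.0):
    print(f"T = {T:4.1f} s  cg = {group_speed(T):5.2f} m/s  "
          f"t(2000 km) = {travel_time_hours(2000.0, T):5.1f} h")
```

The spread between arrival times of different periods is exactly the frequency-dispersion effect the visualizations in the paper animate.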
RATT: RFID Assisted Tracking Tile. Preliminary results.
Quinones, Dario R; Cuevas, Aaron; Cambra, Javier; Canals, Santiago; Moratal, David
2017-07-01
Behavior is one of the most important aspects of animal life. This behavior depends on the link between animals, their nervous systems and their environment. In order to study the behavior of laboratory animals, several tools are needed, but a tracking tool is essential to perform a thorough behavioral study. Currently, several visual tracking tools are available. However, they have some drawbacks. For instance, when an animal is inside a cave, or is close to other animals, the tracking cameras cannot always detect the location or movement of this animal. This paper presents RFID Assisted Tracking Tile (RATT), a tracking system based on passive Radio Frequency Identification (RFID) technology in the high frequency band according to ISO/IEC 15693. The RATT system is composed of electronic tiles that have nine active RFID antennas attached; in addition, each tile contains several overlapping passive coils to improve the magnetic field characteristics. Using several tiles, a large surface can be built on which the animals can move, allowing identification and tracking of their movements. This system, which could also be combined with a visual tracking system, paves the way for complete behavioral studies.
Attention Guidance in Learning from a Complex Animation: Seeing Is Understanding?
ERIC Educational Resources Information Center
de Koning, Bjorn B.; Tabbers, Huib K.; Rikers, Remy M. J. P.; Paas, Fred
2010-01-01
To examine how visual attentional resources are allocated when learning from a complex animation about the cardiovascular system, eye movements were registered in the absence and presence of visual cues. Cognitive processing was assessed using cued retrospective reporting, whereas comprehension and transfer tests measured the quality of the…
Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul
2016-02-01
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
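The view-matching strategy described above can be made concrete with a rotational image-difference sketch: store a low-resolution panoramic view, then recover heading by finding the rotation of the current view that minimizes the pixel difference. This is a generic illustration of the technique, not the authors' simulation code, and the panorama data are synthetic:

```python
import numpy as np

def best_heading(stored, current):
    """Return the column shift of `current` that best matches `stored`.
    Images are 2D arrays; a column shift models a body rotation."""
    n_cols = stored.shape[1]
    diffs = [np.mean((np.roll(current, s, axis=1) - stored) ** 2)
             for s in range(n_cols)]
    return int(np.argmin(diffs))

rng = np.random.default_rng(0)
panorama = rng.random((8, 36))            # low-resolution view, ~10 deg/column
rotated = np.roll(panorama, -5, axis=1)   # agent has turned by 5 columns
shift = best_heading(panorama, rotated)   # corrective rotation back to the view
```

Low resolution and a wide field of view help here for the reason the abstract gives: coarse pixels generalize across nearby positions while the panoramic layout still pins down orientation.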
Single unit approaches to human vision and memory.
Kreiman, Gabriel
2007-08-01
Research on the visual system has relied on electrophysiology, pharmacology and other invasive tools in animal models. Non-invasive tools such as scalp electroencephalography and imaging allow the study of humans but offer much lower spatial and/or temporal resolution. Under special clinical conditions, it is possible to monitor single-unit activity in humans when invasive procedures are required for particular pathological conditions, including epilepsy and Parkinson's disease. We review our knowledge of the visual system and visual memories in the human brain at the single-neuron level. The properties of the human brain appear broadly compatible with the knowledge derived from animal models. The possibility of examining high-resolution brain activity in conscious human subjects allows investigators to ask novel questions that are challenging to address in animal models.
Differential Visual Processing of Animal Images, with and without Conscious Awareness
Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A.; Melcher, David
2016-01-01
The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exist in the absence of awareness, based on recent reports that “invisible” stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the “unseen” condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask. PMID:27790106
Expressive facial animation synthesis by learning speech coarticulation and expression spaces.
Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth
2006-01-01
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
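The expression eigenspace step above is, at its core, PCA over motion-capture frames: center the frames, take the top principal directions, and represent each frame by its low-dimensional code. A minimal sketch with synthetic marker data (the paper's PIEES pipeline also includes phoneme-based time-warping and signal subtraction, which are omitted here):

```python
import numpy as np

def fit_expression_space(frames, n_components=2):
    """PCA over motion frames (rows = frames, cols = flattened marker coords).
    Returns the mean frame and the top principal directions."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data matrix; rows of vt are principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(frames, mean, components):
    """Encode frames as low-dimensional expression codes."""
    return (frames - mean) @ components.T

def reconstruct(codes, mean, components):
    """Decode expression codes back to approximate marker frames."""
    return codes @ components + mean

rng = np.random.default_rng(1)
frames = rng.random((50, 30))  # 50 frames of 10 markers x 3 coordinates
mean, comps = fit_expression_space(frames, n_components=5)
codes = project(frames, mean, comps)
approx = reconstruct(codes, mean, comps)
```

New expression signals can then be synthesized in the low-dimensional code space and decoded, which is the role the eigenspace plays in the blending stage of the system.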
Visuomotor Transformation in the Fly Gaze Stabilization System
Huston, Stephen J; Krapp, Holger G
2008-01-01
For sensory signals to control an animal's behavior, they must first be transformed into a format appropriate for use by its motor systems. This fundamental problem is faced by all animals, including humans. Beyond simple reflexes, little is known about how such sensorimotor transformations take place. Here we describe how the outputs of a well-characterized population of fly visual interneurons, lobula plate tangential cells (LPTCs), are used by the animal's gaze-stabilizing neck motor system. The LPTCs respond to visual input arising from both self-rotations and translations of the fly. The neck motor system however is involved in gaze stabilization and thus mainly controls compensatory head rotations. We investigated how the neck motor system is able to selectively extract rotation information from the mixed responses of the LPTCs. We recorded extracellularly from fly neck motor neurons (NMNs) and mapped the directional preferences across their extended visual receptive fields. Our results suggest that—like the tangential cells—NMNs are tuned to panoramic retinal image shifts, or optic flow fields, which occur when the fly rotates about particular body axes. In many cases, tangential cells and motor neurons appear to be tuned to similar axes of rotation, resulting in a correlation between the coordinate systems the two neural populations employ. However, in contrast to the primarily monocular receptive fields of the tangential cells, most NMNs are sensitive to visual motion presented to either eye. This results in the NMNs being more selective for rotation than the LPTCs. Thus, the neck motor system increases its rotation selectivity by a comparatively simple mechanism: the integration of binocular visual motion information. PMID:18651791
Effect of experimental glaucoma on the non-image forming visual system.
de Zavalía, Nuria; Plano, Santiago A; Fernandez, Diego C; Lanzani, María Florencia; Salido, Ezequiel; Belforte, Nicolás; Sarmiento, María I Keller; Golombek, Diego A; Rosenstein, Ruth E
2011-06-01
Glaucoma is a leading cause of blindness worldwide, characterized by retinal ganglion cell degeneration and damage to the optic nerve. We investigated the non-image forming visual system in an experimental model of glaucoma in rats induced by weekly injections of chondroitin sulphate (CS) in the eye anterior chamber. Animals were unilaterally or bilaterally injected with CS or vehicle for 6 or 10 weeks. In the retinas from eyes injected with CS, a similar decrease in melanopsin and Thy-1 levels was observed. CS injections induced a similar decrease in the number of melanopsin-containing cells and superior collicular retinal ganglion cells. Experimental glaucoma induced a significant decrease in the afferent pupil light reflex. White light significantly decreased nocturnal pineal melatonin content in control and glaucomatous animals, whereas blue light decreased this parameter in vehicle- but not in CS-injected animals. A significant decrease in light-induced c-Fos expression in the suprachiasmatic nuclei was observed in glaucomatous animals. General rhythmicity and gross entrainment appear to be conserved, but glaucomatous animals exhibited a delayed phase angle with respect to lights off and a significant increase in the percentage of diurnal activity. These results indicate that glaucoma induced significant alterations in the non-image forming visual system.
Takalo, Jouni; Piironen, Arto; Honkanen, Anna; Lempeä, Mikko; Aikio, Mika; Tuukkanen, Tuomas; Vähäsöyrinki, Mikko
2012-01-01
Ideally, neuronal functions would be studied by performing experiments with unconstrained animals whilst they behave in their natural environment. Although this is not feasible currently for most animal models, one can mimic the natural environment in the laboratory by using a virtual reality (VR) environment. Here we present a novel VR system based upon a spherical projection of computer generated images using a modified commercial data projector with an add-on fish-eye lens. This system provides equidistant visual stimulation with extensive coverage of the visual field, high spatio-temporal resolution and flexible stimulus generation using a standard computer. It also includes a track-ball system for closed-loop behavioural experiments with walking animals. We present a detailed description of the system and characterize it thoroughly. Finally, we demonstrate the VR system's performance whilst operating in closed-loop conditions by showing the movement trajectories of the cockroaches during exploratory behaviour in a VR forest.
A Core Knowledge Architecture of Visual Working Memory
ERIC Educational Resources Information Center
Wood, Justin N.
2011-01-01
Visual working memory (VWM) is widely thought to contain specialized buffers for retaining spatial and object information: a "spatial-object architecture." However, studies of adults, infants, and nonhuman animals show that visual cognition builds on core knowledge systems that retain more specialized representations: (1) spatiotemporal…
Octopus vulgaris uses visual information to determine the location of its arm.
Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael
2011-03-22
Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.
A conditioned visual orientation requires the ellipsoid body in Drosophila
Guo, Chao; Du, Yifei; Yuan, Deliang; Li, Meixia; Gong, Haiyun; Gong, Zhefeng
2015-01-01
Orientation, the spatial organization of animal behavior, is an essential faculty of animals. Bacteria and lower animals such as insects exhibit taxis, an innate orientation behavior directed toward or away from a directional cue. Organisms can also orient themselves at a specific angle relative to the cues. In this study, using Drosophila as a model system, we established a visual orientation conditioning paradigm based on a flight simulator in which a stationary flying fly could control the rotation of a visual object. By coupling aversive heat shocks to a fly's orientation toward one side of the visual object, we found that the fly could be conditioned to orientate toward the left or right side of the frontal visual object and retain this conditioned visual orientation. The lower and upper visual fields have different roles in conditioned visual orientation. Transfer experiments showed that conditioned visual orientation could generalize between visual targets of different sizes, compactness, or vertical positions, but not of contour orientation. Rut (type I adenylyl cyclase) and Dnc (phosphodiesterase) were dispensable for visual orientation conditioning. Normal activity and scb signaling in R3/R4d neurons of the ellipsoid body were required for visual orientation conditioning. Our studies established a visual orientation conditioning paradigm and examined the behavioral properties and neural circuitry of visual orientation, an important component of the insect's spatial navigation. PMID:25512578
A high-quality high-fidelity visualization of the September 11 attack on the World Trade Center.
Rosen, Paul; Popescu, Voicu; Hoffmann, Christoph; Irfanoglu, Ayhan
2008-01-01
In this application paper, we describe the efforts of a multidisciplinary team towards producing a visualization of the September 11 Attack on the North Tower of New York's World Trade Center. The visualization was designed to meet two requirements. First, the visualization had to depict the impact with high fidelity, by closely following the laws of physics. Second, the visualization had to be eloquent to a nonexpert user. This was achieved by first designing and computing a finite-element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then by visualizing the FEA results with a state-of-the-art commercial animation system. The visualization was enabled by an automatic translator that converts the simulation data into an animation system 3D scene. We built upon a previously developed translator. The translator was substantially extended to enable and control visualization of fire and of disintegrating elements, to better scale with the number of nodes and number of states, to handle beam elements with complex profiles, and to handle smoothed particle hydrodynamics liquid representation. The resulting translator is a powerful automatic and scalable tool for high-quality visualization of FEA results.
Sharkey, Camilla R; Fujimoto, M Stanley; Lord, Nathan P; Shin, Seunggwan; McKenna, Duane D; Suvorov, Anton; Martin, Gavin J; Bybee, Seth M
2017-01-31
Opsin proteins are fundamental components of animal vision whose structure largely determines the sensitivity of visual pigments to different wavelengths of light. Surprisingly little is known about opsin evolution in beetles, even though they are the most species-rich animal group on Earth and exhibit considerable variation in visual system sensitivities. We reveal the patterns of opsin evolution across 62 beetle species and relatives. Our results show that the major insect opsin class (SW) that typically confers sensitivity to "blue" wavelengths was lost ~300 million years ago, before the origin of modern beetles. We propose that UV and LW opsin gene duplications have restored the potential for trichromacy (three separate channels for colour vision) in beetles up to 12 times and more specifically, duplications within the UV opsin class have likely led to the restoration of "blue" sensitivity up to 10 times. This finding reveals unexpected plasticity within the insect visual system and highlights its remarkable ability to evolve and adapt to the available light and visual cues present in the environment.
Klaver, Peter; Latal, Beatrice; Martin, Ernst
2015-01-01
Very low birth weight (VLBW) premature born infants have a high risk to develop visual perceptual and learning deficits as well as widespread functional and structural brain abnormalities during infancy and childhood. Whether and how prematurity alters neural specialization within visual neural networks is still unknown. We used functional and structural brain imaging to examine the visual semantic system of VLBW born (<1250 g, gestational age 25-32 weeks) adolescents (13-15 years, n = 11, 3 males) and matched term born control participants (13-15 years, n = 11, 3 males). Neurocognitive assessment revealed no group differences except for lower scores on an adaptive visuomotor integration test. All adolescents were scanned while viewing pictures of animals and tools and scrambled versions of these pictures. Both groups demonstrated animal and tool category related neural networks. Term born adolescents showed tool category related neural activity, i.e. tool pictures elicited more activity than animal pictures, in temporal and parietal brain areas. Animal category related activity was found in the occipital, temporal and frontal cortex. VLBW born adolescents showed reduced tool category related activity in the dorsal visual stream compared with controls, specifically the left anterior intraparietal sulcus, and enhanced animal category related activity in the left middle occipital gyrus and right lingual gyrus. Lower birth weight of VLBW adolescents correlated with larger thickness of the pericalcarine gyrus in the occipital cortex and smaller surface area of the superior temporal gyrus in the lateral temporal cortex. Moreover, larger thickness of the pericalcarine gyrus and smaller surface area of the superior temporal gyrus correlated with reduced tool category related activity in the parietal cortex. Together, our data suggest that very low birth weight predicts alterations of higher order visual semantic networks, particularly in the dorsal stream. 
The differences in neural specialization may be associated with aberrant cortical development of areas in the visual system that develop early in childhood. Copyright © 2014 Elsevier Ltd. All rights reserved.
Spectral discrimination in color blind animals via chromatic aberration and pupil shape.
Stubbs, Alexander L; Stubbs, Christopher W
2016-07-19
We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide "color-blind" animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. We quantitatively show, through numerical simulations, how chromatic aberration can be exploited to obtain spectral information, especially through nonaxial pupils that are characteristic of coleoid cephalopods. We have also assessed the inherent ambiguity between range and color that is a consequence of the chromatic variation of best focus with wavelength. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.
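The chromatic-defocus mechanism described above can be illustrated with a toy thin-lens calculation. This is a simplified sketch, not the authors' ray-traced model of the cephalopod eye; the function names, the linear dispersion slope `dfdlam`, and all example values are our assumptions:

```python
def image_distance(f, u):
    """Thin-lens image distance v, from 1/f = 1/v + 1/u."""
    return 1.0 / (1.0 / f - 1.0 / u)

def chromatic_blur(f_ref, dfdlam, lam, lam_ref, aperture, u):
    """Geometric blur-circle diameter caused by chromatic defocus.

    The eye is assumed focused for wavelength lam_ref; light of wavelength
    lam focuses at a slightly different distance, and the resulting blur on
    the retina scales with pupil aperture and the defocus distance.
    All parameters are illustrative (lengths in mm, wavelengths in nm).
    """
    f_lam = f_ref + dfdlam * (lam - lam_ref)  # assumed linear dispersion
    v_ref = image_distance(f_ref, u)          # retina sits here (in focus at lam_ref)
    v_lam = image_distance(f_lam, u)          # where wavelength lam actually focuses
    return aperture * abs(v_lam - v_ref) / v_lam
```

Because the blur grows linearly with aperture, pupil geometry directly controls how much spectral information the chromatic blur carries, consistent with the abstract's emphasis on nonaxial pupil shapes.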
NASA Astrophysics Data System (ADS)
Jian, Yifan; Xu, Jing; Zawadzki, Robert J.; Sarunic, Marinko V.
2013-03-01
Small animal models of human retinal diseases are a critical component of vision research. In this report, we present an ultrahigh-resolution ultrahigh-speed adaptive optics optical coherence tomography (AO-OCT) system for small animal retinal imaging (mouse, fish, etc.). We adapted our imaging system to different types of small animals in accordance with the optical properties of their eyes. Results of AO-OCT images of small animal retinas acquired with AO correction are presented. Cellular structures including nerve fiber bundles, capillary networks and detailed double-cone photoreceptors are visualized.
Tusa, R J; Mustari, M J; Burrows, A F; Fuchs, A F
2001-08-01
The normal development and the capacity to calibrate gaze-stabilizing systems may depend on normal vision during infancy. At the end of 1 yr of dark rearing, cats have gaze-stabilizing deficits similar to that of the newborn human infant including decreased monocular optokinetic nystagmus (OKN) in the nasal to temporal (N-T) direction and decreased velocity storage in the vestibuloocular reflex (VOR). The purpose of this study is to determine to what extent restricted vision during the first 2 mo of life in monkeys affects the development of gaze-stabilizing systems. The eyelids of both eyes were sutured closed in three rhesus monkeys (Macaca mulatta) at birth. Eyelids were opened at 25 days in one monkey and 40 and 55 days in the other two animals. Eye movements were recorded from each eye using scleral search coils. The VOR, OKN, and fixation were examined at 6 and 12 mo of age. We also examined ocular alignment, refraction, and visual acuity in these animals. At 1 yr of age, visual acuity ranged from 0.3 to 0.6 LogMAR (20/40-20/80). All animals showed a defect in monocular OKN in the N-T direction. The velocity-storage component of OKN (i.e., OKAN) was the most impaired. All animals had a mild reduction in VOR gain but had a normal time constant. The animals deprived for 40 and 55 days had a persistent strabismus. All animals showed a nystagmus similar to latent nystagmus (LN) in human subjects. The amount of LN and OKN defect correlated positively with the duration of deprivation. In addition, the animal deprived for 55 days demonstrated a pattern of nystagmus similar to congenital nystagmus in human subjects. We found that restricted visual input during the first 2 mo of life impairs certain gaze-stabilizing systems and causes LN in primates.
McBride, Sebastian D; Perentos, Nicholas; Morton, A Jennifer
2016-05-30
For reasons of cost and ethical concerns, models of neurodegenerative disorders such as Huntington disease (HD) are currently being developed in farm animals, as an alternative to non-human primates. Developing reliable methods of testing cognitive function is essential to determining the usefulness of such models. Nevertheless, cognitive testing of farm animal species presents a unique set of challenges. The primary aims of this study were to develop and validate a mobile operant system suitable for high throughput cognitive testing of sheep. We designed a semi-automated testing system with the capability of presenting stimuli (visual, auditory) and reward at six spatial locations. Fourteen normal sheep were used to validate the system using a two-choice visual discrimination task (2CVDT). Four stages of training devised to acclimatise animals to the system are also presented. All sheep progressed rapidly through the training stages, over eight sessions. All sheep learned the 2CVDT and performed at least one reversal stage. The mean number of trials the sheep took to reach criterion in the first acquisition learning was 13.9±1.5 and for the reversal learning was 19.1±1.8. This is the first mobile semi-automated operant system developed for testing cognitive function in sheep. We have designed and validated an automated operant behavioural testing system suitable for high throughput cognitive testing in sheep and other medium-sized quadrupeds, such as pigs and dogs. Sheep performance in the two-choice visual discrimination task was very similar to that reported for non-human primates and strongly supports the use of farm animals as pre-clinical models for the study of neurodegenerative diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
Cancer-disease associations: A visualization and animation through medical big data.
Iqbal, Usman; Hsu, Chun-Kung; Nguyen, Phung Anh Alex; Clinciu, Daniel Livius; Lu, Richard; Syed-Abdul, Shabbir; Yang, Hsuan-Chia; Wang, Yao-Chin; Huang, Chu-Ya; Huang, Chih-Wei; Chang, Yo-Cheng; Hsu, Min-Huei; Jian, Wen-Shan; Li, Yu-Chuan Jack
2016-04-01
Cancer is the primary disease responsible for death and disability worldwide. Currently, prevention and early detection represents the best hope for cure. Knowing the expected diseases that occur with a particular cancer in advance could lead to physicians being able to better tailor their treatment for cancer. The aim of this study was to build an animated visualization tool called Cancer Associations Map Animation (CAMA), to chart the association of cancers with other diseases over time. The study population was collected from the Taiwan National Health Insurance Database during the period January 2000 to December 2002; 782 million outpatient visits were used to compute the associations of nine major cancers with other diseases. A motion chart was used to quantify and visualize the associations between diseases and cancers. The CAMA motion chart that was built successfully facilitated the observation of cancer-disease associations across ages and genders. The CAMA system can be accessed online at http://203.71.86.98/web/runq16.html. The CAMA animation system is an animated medical data visualization tool which provides a dynamic, time-lapse, animated view of cancer-disease associations across different age groups and gender. Derived from a large, nationwide healthcare dataset, this exploratory data analysis tool can detect cancer comorbidities earlier than is possible by manual inspection. Taking into account the trajectory of cancer-specific comorbidity development may facilitate clinicians and healthcare researchers to more efficiently explore early stage hypotheses, develop new cancer treatment approaches, and identify potential effect modifiers or new risk factors associated with specific cancers. Copyright © 2016. Published by Elsevier Ireland Ltd.
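The abstract does not specify which association measure the motion chart quantifies; one common choice for this kind of co-occurrence analysis is a lift-style statistic (observed joint frequency divided by the frequency expected under independence). The sketch below is our illustration of that idea, not the CAMA implementation:

```python
def association_lift(n_both, n_cancer, n_disease, n_total):
    """Lift: observed co-occurrence count divided by the count expected
    if the cancer and the disease were independent in the population.
    Values > 1 suggest a positive association worth charting over
    age and gender strata."""
    expected = n_cancer * n_disease / n_total
    return n_both / expected
```

Computed per stratum and per time window, such values are exactly the kind of quantity an animated motion chart can track across ages and genders.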
A 2D virtual reality system for visual goal-driven navigation in zebrafish larvae
Jouary, Adrien; Haudrechy, Mathieu; Candelier, Raphaël; Sumbre, German
2016-01-01
Animals continuously rely on sensory feedback to adjust motor commands. In order to study the role of visual feedback in goal-driven navigation, we developed a 2D visual virtual reality system for zebrafish larvae. The visual feedback can be set to be similar to what the animal experiences in natural conditions. Alternatively, modification of the visual feedback can be used to study how the brain adapts to perturbations. For this purpose, we first generated a library of free-swimming behaviors from which we learned the relationship between the trajectory of the larva and the shape of its tail. Then, we used this technique to infer the intended displacements of head-fixed larvae, and updated the visual environment accordingly. Under these conditions, larvae were capable of aligning and swimming in the direction of a whole-field moving stimulus and produced the fine changes in orientation and position required to capture virtual prey. We demonstrate the sensitivity of larvae to visual feedback by updating the visual world in real-time or only at the end of the discrete swimming episodes. This visual feedback perturbation caused impaired performance of prey-capture behavior, suggesting that larvae rely on continuous visual feedback during swimming. PMID:27659496
A rodent model for the study of invariant visual object recognition
Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.
2009-01-01
The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704
The Habitable Zone Gallery 2.0: The Online Exoplanet System Visualization Suite
NASA Astrophysics Data System (ADS)
Chandler, C. O.; Kane, S. R.; Gelino, D. M.
2017-11-01
The Habitable Zone Gallery 2.0 provides new and improved visualization and data analysis tools to the exoplanet habitability community and beyond. Modules include interactive habitable zone plotting and downloadable 3D animations.
Clark, R. C.; Brebner, J. S.
2017-01-01
Researchers must assess similarities and differences in colour from an animal's eye view when investigating hypotheses in ecology, evolution and behaviour. Nervous systems generate colour perceptions by comparing the responses of different spectral classes of photoreceptor through colour opponent mechanisms, and the performance of these mechanisms is limited by photoreceptor noise. Accordingly, the receptor noise limited (RNL) colour distance model of Vorobyev and Osorio (Vorobyev & Osorio 1998 Proc. R. Soc. Lond. B 265, 351–358 (doi:10.1098/rspb.1998.0302)) generates predictions about the discriminability of colours that agree with behavioural data, and consequently it has found wide application in studies of animal colour vision. Vorobyev and Osorio (1998) provide equations to calculate RNL colour distances for animals with di-, tri- and tetrachromatic vision, which is adequate for many species. However, researchers may sometimes wish to compute RNL colour distances for potentially more complex colour visual systems. Thus, we derive a simple, single formula for the computation of RNL distance between two measurements of colour, equivalent to the published di-, tri- and tetrachromatic equations of Vorobyev and Osorio (1998), and valid for colour visual systems with any number of types of noisy photoreceptors. This formula will allow the easy application of this important colour visual model across the fields of ecology, evolution and behaviour. PMID:28989773
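The single n-receptor formula described above reduces to the published di-, tri- and tetrachromatic cases: each pairwise receptor contrast is weighted by the noise of the remaining channels. A minimal numerical sketch of such a computation follows (variable names and example values are ours; `f1` and `f2` are the log-transformed quantum catches of the two colours and `e` the per-channel noise):

```python
import numpy as np

def rnl_distance(f1, f2, e):
    """Receptor-noise-limited colour distance for n photoreceptor classes.

    Sums squared pairwise receptor contrasts, each weighted by the product
    of the squared noise of all other channels, normalized so that the
    dichromatic and trichromatic special cases of Vorobyev & Osorio (1998)
    are recovered.
    """
    df = np.asarray(f1, float) - np.asarray(f2, float)  # receptor contrasts
    e2 = np.asarray(e, float) ** 2
    n = len(e2)
    num = sum(
        np.prod([e2[k] for k in range(n) if k not in (i, j)]) * (df[i] - df[j]) ** 2
        for i in range(n) for j in range(i + 1, n)
    )
    den = sum(np.prod([e2[k] for k in range(n) if k != i]) for i in range(n))
    return float(np.sqrt(num / den))
```

For a dichromat this collapses to |Δf1 − Δf2| / sqrt(e1² + e2²), the familiar two-receptor RNL distance in units of just-noticeable differences.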
ERIC Educational Resources Information Center
Mather, Richard
2015-01-01
This paper explores the application of canonical gradient analysis to evaluate and visualize student performance and acceptance of a learning system platform. The subject of evaluation is a first year BSc module for computer programming. This uses "Ceebot," an animated and immersive game-like development environment. Multivariate…
Janisse, Kevyn; Doucet, Stéphanie M.
2017-01-01
Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. 
We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1 photoreceptor, or present models for both. PMID:28076391
Relating Neuronal to Behavioral Performance: Variability of Optomotor Responses in the Blowfly
Rosner, Ronny; Warzecha, Anne-Kathrin
2011-01-01
Behavioral responses of an animal vary even when they are elicited by the same stimulus. This variability is due to stochastic processes within the nervous system and to the changing internal states of the animal. To what extent does the variability of neuronal responses account for the overall variability at the behavioral level? To address this question we evaluate the neuronal variability at the output stage of the blowfly's (Calliphora vicina) visual system by recording from motion-sensitive interneurons mediating head optomotor responses. By means of a simple modelling approach representing the sensory-motor transformation, we predict head movements on the basis of the recorded responses of motion-sensitive neurons and compare the variability of the predicted head movements with that of the observed ones. Large gain changes of optomotor head movements have previously been shown to go along with changes in the animals' activity state. Our modelling approach substantiates that these gain changes are imposed downstream of the motion-sensitive neurons of the visual system. Moreover, since predicted head movements are clearly more reliable than those actually observed, we conclude that substantial variability is introduced downstream of the visual system. PMID:22066014
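The "simple modelling approach" mentioned above is not specified in the abstract; one minimal stand-in for such a sensory-motor transformation is a static gain followed by a first-order low-pass filter. All names and constants here are our assumptions, not the authors' model:

```python
import numpy as np

def predict_head_velocity(neural_response, gain, tau, dt):
    """Predict head-movement velocity from a motion-sensitive neuron's
    response: scale by a static gain, then smooth with a first-order
    low-pass filter of time constant tau (seconds), sampled at step dt."""
    out = np.zeros(len(neural_response), dtype=float)
    alpha = dt / (tau + dt)  # discrete-time filter coefficient
    for t in range(1, len(neural_response)):
        out[t] = out[t - 1] + alpha * (gain * neural_response[t] - out[t - 1])
    return out
```

In such a scheme, the state-dependent gain changes that the study localizes downstream of the visual system would correspond to changing `gain` without touching the recorded neuronal responses.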
Using video playbacks to study visual communication in a marine fish, Salaria pavo.
Gonçalves; Oliveira; Körner; Poschadel; Schlupp
2000-09-01
Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial, as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poeciliid Poecilia formosa showed an equal preference for a live P. mexicana male and a video image of the same male, suggesting that video images can elicit responses as strong as those to live animals. We discuss differences between the species that may explain their opposite reaction to video images. Copyright 2000 The Association for the Study of Animal Behaviour.
Lee, Wei-Chung Allen; Nedivi, Elly
2011-01-01
cpg15 is an activity-regulated gene that encodes a membrane-bound ligand that coordinately regulates growth of apposing dendritic and axonal arbors and the maturation of their synapses. These properties make it an attractive candidate for participating in plasticity of the mammalian visual system. Here we compare cpg15 expression during normal development of the rat visual system with that seen in response to dark rearing, monocular blockade of retinal action potentials, or monocular deprivation. Our results show that the onset of cpg15 expression in the visual cortex is coincident with eye opening, and it increases until the peak of the critical period at postnatal day 28 (P28). This early expression is independent of both retinal activity and visual experience. After P28, a component of cpg15 expression in the visual cortex, lateral geniculate nucleus (LGN), and superior colliculus (SC) develops a progressively stronger dependence on retinally driven action potentials. Dark rearing does not affect cpg15 mRNA expression in the LGN and SC at any age, but it does significantly affect its expression in the visual cortex from the peak of the critical period and into adulthood. In dark-reared rats, the peak level of cpg15 expression in the visual cortex at P28 is lower than in controls. Rather than showing the normal decline with maturation, these levels are maintained in dark-reared animals. We suggest that the prolonged plasticity in the visual cortex that is seen in dark-reared animals may result from failure to downregulate genes such as cpg15 that could promote structural remodeling and synaptic maturation. PMID:11880509
McDannold, Nathan; Arvanitis, Costas D; Vykhodtseva, Natalia; Livingstone, Margaret S
2012-07-15
The blood-brain barrier (BBB) prevents entry of most drugs into the brain and is a major hurdle to the use of drugs for brain tumors and other central nervous system disorders. Work in small animals has shown that ultrasound combined with an intravenously circulating microbubble agent can temporarily permeabilize the BBB. Here, we evaluated whether this targeted drug delivery method can be applied safely, reliably, and in a controlled manner on rhesus macaques using a focused ultrasound system. We identified a clear safety window during which BBB disruption could be produced without evident tissue damage; the acoustic pressure amplitude at which the probability of BBB disruption was 50% was found to be half the value that would produce tissue damage. Acoustic emission measurements seem promising for predicting BBB disruption and damage. In addition, we conducted repeated BBB disruption to central visual field targets over several weeks in animals trained to conduct complex visual acuity tasks. All animals recovered from each session without behavioral deficits, visual deficits, or loss in visual acuity. Together, our findings show that BBB disruption can be reliably and repeatedly produced without evident histologic or functional damage in a clinically relevant animal model using a clinical device. These results therefore support clinical testing of this noninvasive-targeted drug delivery method.
Construction and Evaluation of Animated Teachable Agents
ERIC Educational Resources Information Center
Bodenheimer, Bobby; Williams, Betsy; Kramer, Mattie Ruth; Viswanath, Karun; Balachandran, Ramya; Belynne, Kadira; Biswas, Gautam
2009-01-01
This article describes the design decisions, technical approach, and evaluation of the animation and interface components for an agent-based system that allows learners to learn by teaching. Students learn by teaching an animated agent using a visual representation. The agent can answer questions about what she has been taught and take quizzes.…
Novel graphical environment for virtual and real-world operations of tracked mobile manipulators
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.
1993-08-01
A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
Phototaxis and the origin of visual eyes
Randel, Nadine
2016-01-01
Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725
PLANETarium Pilot: visualizing PLANET Earth inside-out on the planetarium's full-dome
NASA Astrophysics Data System (ADS)
Ballmer, Maxim; Wiethoff, Tobias
2016-04-01
In the past decade, projection systems in most planetariums, traditional sites of outreach and education, have advanced from interfaces that can display the motion of stars as moving beam spots to systems that are able to visualize multicolor, high-resolution, immersive full-dome videos or images. These extraordinary capabilities are ideally suited for visualization of global processes occurring on the surface and within the interior of the Earth, a spherical body just like the full dome. So far, however, our community has largely ignored this wonderful interface for outreach and education, and any previous geo-shows have mostly been limited to cartoon-style animations. Thus, we here propose a framework to convey recent scientific results on the origin and evolution of our PLANET to the >100 million per-year worldwide audience of planetariums, making the traditionally astronomy-focused interface a true PLANETarium. In order to do this most efficiently, we intend to show "inside-out" visualizations of scientific datasets and models, as if the audience was positioned in the Earth's core. Such visualizations are expected to be renderable to the dome with little or no effort. For example, showing global geophysical datasets (e.g., gravity, air temperature), or horizontal slices of seismic-tomography images and spherical computer models, requires no rendering at all. Rendering of 3D Cartesian datasets or models may further be achieved using standard techniques. Here, we show several example pilot animations. These animations, rendered for the full dome, are projected back to 2D for visualization on the flatscreen. Present-day science visualizations are typically as intuitive as cartoon-style animations, yet more appealing visually, and clearly with a higher level of detail. In addition to, e.g., 
climate change and natural hazards, themes for any future geo-shows may include the coupled evolution of the Earth's interior and life, from the accretion of our planet to the evolution of mantle convection as well as the sustainment of a magnetic field and habitable conditions. We believe that high-quality tax-funded science visualizations should not exclusively be used for communication among scientists, but also recycled to raise the public's awareness and appreciation of the Geosciences.
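The claim that global datasets can be shown on the dome with little or no rendering comes down to a change of coordinates from latitude/longitude to the dome's polar coordinates. A minimal sketch, assuming an azimuthal-equidistant ("fisheye") dome mapping with the pole at dome center; real planetarium systems each have their own projection conventions.

```python
import math

def latlon_to_dome(lat_deg, lon_deg):
    """Map a globe point to normalized dome coordinates (x, y in [-1, 1]).

    Azimuthal-equidistant sketch: the viewer sits at the Earth's center
    looking up, so the north pole lands at dome center and the equator
    at the dome rim. One plausible convention, not a planetarium standard.
    """
    r = (90.0 - lat_deg) / 90.0      # 0 at the pole, 1 at the equator
    theta = math.radians(lon_deg)
    return r * math.cos(theta), r * math.sin(theta)

# A gridded dataset (e.g. gravity or air temperature) is displayed by
# evaluating this mapping at each grid node -- no 3D rendering required.
```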
PLANETarium Pilot: visualizing PLANET Earth inside-out on the planetarium's full-dome
NASA Astrophysics Data System (ADS)
Ballmer, M. D.; Wiethoff, T.
2014-12-01
In the past decade, projection systems in most planetariums, traditional sites of outreach and education, have advanced from interfaces that can display the motion of stars as moving beam spots to systems that are able to visualize multicolor, high-resolution, immersive full-dome videos or images. These extraordinary capabilities are ideally suited for visualization of global processes occurring on the surface and within the interior of the Earth, a spherical body just like the full dome. So far, however, our community has largely ignored this wonderful interface for outreach and education, and any previous geo-shows have mostly been limited to cartoon-style animations. Thus, we here propose a framework to convey recent scientific results on the origin and evolution of our PLANET to the >100 million per-year worldwide audience of planetariums, making the traditionally astronomy-focused interface a true PLANETarium. In order to do this most efficiently, we intend to show "inside-out" visualizations of scientific datasets and models, as if the audience was positioned in the Earth's inner core. Such visualizations are expected to be renderable to the dome with little or no effort. For example, showing global geophysical datasets (e.g., gravity, air temperature), or horizontal slices of seismic-tomography images and spherical computer models, requires no rendering at all. Rendering of 3D Cartesian datasets or models may further be achieved using standard techniques. Here, we show several example pilot animations. These animations, rendered for the full dome, are projected back to 2D for visualization on a flatscreen. Present-day science visualizations are typically as intuitive as cartoon-style animations, yet more appealing visually, and clearly with a higher level of detail. In addition to, e.g., 
climate change and natural hazards, themes for any future geo-shows may include the coupled evolution of the Earth's interior and life, from the accretion of our planet to the evolution of mantle convection as well as the sustainment of a magnetic field and habitable conditions. We believe that high-quality tax-funded science visualizations should not exclusively be used for communication among scientists, but also recycled to raise the public's awareness and appreciation of the geosciences.
Aguiar, Paulo; Mendonça, Luís; Galhardo, Vasco
2007-10-15
Operant animal behavioral tests require the interaction of the subject with sensors and actuators distributed in the experimental environment of the arena. In order to provide user-independent, reliable results and versatile control of these devices, it is vital to use an automated control system. Commercial systems for the control of animal mazes are usually based on software implementations that restrict their application to the proprietary hardware of the vendor. In this paper we present OpenControl: open-source Visual Basic software that permits a Windows-based computer to function as a system to run fully automated behavioral experiments. OpenControl integrates video-tracking of the animal, definition of zones from the video signal for real-time assignment of animal position in the maze, control of the maze actuators from either hardware sensors or the online video tracking, and recording of experimental data. Bidirectional communication with the maze hardware is achieved through the parallel-port interface, without the need for expensive AD-DA cards, while video tracking is attained using an inexpensive Firewire digital camera. The OpenControl Visual Basic code is structurally general and versatile, allowing it to be easily modified or extended to fulfill specific experimental protocols and custom hardware configurations. The Visual Basic environment was chosen to allow experimenters to easily adapt the code and expand it to their own needs.
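The zone-based control loop described here, defining zones on the video frame, assigning the tracked animal to a zone each frame, and driving actuators on zone transitions, can be sketched in a few lines. Zone names, coordinates, and the actuator callback are hypothetical illustrations, not OpenControl's actual Visual Basic code (a Python sketch is used here for brevity).

```python
# Hedged sketch of zone-based maze control: rectangular zones are defined
# on the video frame, the tracked centroid is assigned to a zone each
# frame, and zone entry triggers an actuator. Names are illustrative.

ZONES = {
    "start_box": (0, 0, 100, 100),    # (x_min, y_min, x_max, y_max) in pixels
    "left_arm":  (0, 100, 100, 400),
    "right_arm": (200, 100, 300, 400),
}

def assign_zone(x, y):
    """Return the name of the first zone containing the point, else None."""
    for name, (x0, y0, x1, y1) in ZONES.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def on_position(x, y, state, open_door):
    """Fire the door actuator once when the animal first enters an arm."""
    zone = assign_zone(x, y)
    if zone != state.get("zone"):
        state["zone"] = zone          # record the transition
        if zone in ("left_arm", "right_arm"):
            open_door(zone)           # actuator fires only on zone entry
    return zone
```

In a real system `on_position` would be called once per tracked video frame, and `open_door` would write to the parallel port.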
Functional mapping of the primate auditory system.
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
2003-01-24
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
Vestibular-visual interactions in flight simulators
NASA Technical Reports Server (NTRS)
Clark, B.
1977-01-01
The following research work is reported: (1) vestibular-visual interactions; (2) flight management and crew system interactions; (3) peripheral cue utilization in simulation technology; (4) control of signs and symptoms of motion sickness; (5) auditory cue utilization in flight simulators; and (6) vestibular function (animal experiments).
Designing a visualization system for hydrological data
NASA Astrophysics Data System (ADS)
Fuhrmann, Sven
2000-02-01
The field of hydrology, like any other scientific field, is strongly affected by massive technological evolution. The spread of modern information and communication technology within the last three decades has led to an increased collection, availability and use of spatial and temporal digital hydrological data. In a two-year research period a working group in Muenster applied and developed methods for the visualization of digital hydrological data and the documentation of hydrological models. A low-cost multimedia hydrological visualization system (HydroVIS) for the Weser river catchment was developed. The research group designed HydroVIS under freeware constraints and tried to show what kinds of multimedia visualization techniques can be used effectively in a nonprofit hydrological visualization system. The system's visual components include features such as electronic maps, temporal and nontemporal cartographic animations, the display of geologic profiles, interactive diagrams, and hypertext, including photographs and tables.
Asymmetric top-down modulation of ascending visual pathways in pigeons.
Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur
2016-03-01
Cerebral asymmetries are a ubiquitous phenomenon evident in many species, including humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract criteria or prior experience. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at the thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance, while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify ascending visual pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Harris, Daniel Wyatt
2012-01-01
Research examining animation use for student learning has been conducted in the last two decades across a multitude of instructional environments and content areas. The extensive construction and implementation of animations in learning resulted from the availability of powerful computing systems and the perceived advantages the novel medium…
Depicting surgical anatomy of the porta hepatis in living donor liver transplantation.
Kelly, Paul; Fung, Albert; Qu, Joy; Greig, Paul; Tait, Gordon; Jenkinson, Jodie; McGilvray, Ian; Agur, Anne
2017-01-01
Visualizing the complex anatomy of vascular and biliary structures of the liver on a case-by-case basis has been challenging. A living donor liver transplant (LDLT) right hepatectomy case, with focus on the porta hepatis, was used to demonstrate an innovative method to visualize anatomy with the purpose of refining preoperative planning and teaching of complex surgical procedures. The production of an animation-enhanced video consisted of many stages including the integration of pre-surgical planning; case-specific footage and 3D models of the liver and associated vasculature, reconstructed from contrast-enhanced CTs. Reconstructions of the biliary system were modeled from intraoperative cholangiograms. The distribution of the donor portal veins, hepatic arteries and bile ducts was defined from the porta hepatis intrahepatically to the point of surgical division. Each step of the surgery was enhanced with 3D animation to provide sequential and seamless visualization from pre-surgical planning to outcome. Use of visualization techniques such as transparency and overlays allows viewers not only to see the operative field, but also the origin and course of segmental branches and their spatial relationships. This novel educational approach enables integrating case-based operative footage with advanced editing techniques for visualizing not only the surgical procedure, but also complex anatomy such as vascular and biliary structures. The surgical team has found this approach to be beneficial for preoperative planning and clinical teaching, especially for complex cases. Each animation-enhanced video case is posted to the open-access Toronto Video Atlas of Surgery (TVASurg), an education resource with a global clinical and patient user base. The novel educational system described in this paper enables integrating operative footage with 3D animation and cinematic editing techniques for seamless sequential organization from pre-surgical planning to outcome.
Age-Dependent Ocular Dominance Plasticity in Adult Mice
Lehmann, Konrad; Löwel, Siegrid
2008-01-01
Background: Short monocular deprivation (4 days) induces a shift in the ocular dominance of binocular neurons in the juvenile mouse visual cortex but is ineffective in adults. Recently, it has been shown that an ocular dominance shift can still be elicited in young adults (around 90 days of age) by longer periods of deprivation (7 days). Whether the same is true also for fully mature animals is not yet known. Methodology/Principal Findings: We therefore studied the effects of different periods of monocular deprivation (4, 7, 14 days) on ocular dominance in C57Bl/6 mice of different ages (25 days, 90–100 days, 109–158 days, 208–230 days) using optical imaging of intrinsic signals. In addition, we used a virtual optomotor system to monitor visual acuity of the open eye in the same animals during deprivation. We observed that ocular dominance plasticity after 7 days of monocular deprivation was pronounced in young adult mice (90–100 days) but already significantly weaker in the next age group (109–158 days). In animals older than 208 days, ocular dominance plasticity was absent even after 14 days of monocular deprivation. Visual acuity of the open eye increased in all age groups, but this interocular plasticity also declined with age, although to a much lesser degree than the optically detected ocular dominance shift. Conclusions/Significance: These data indicate that both ocular dominance plasticity and the enhancement of vision after monocular deprivation are age-dependent in mice: ocular dominance plasticity in binocular visual cortex is most pronounced in young animals, reduced but present in adolescence, and absent in fully mature animals older than 110 days of age. Mice thus do not differ fundamentally from cats and monkeys in ocular dominance plasticity, an essential prerequisite for their use as valid model systems of human visual disorders. PMID:18769674
Living Color Frame System: PC graphics tool for data visualization
NASA Technical Reports Server (NTRS)
Truong, Long V.
1993-01-01
Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.
Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai
2009-01-01
Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, and computer networks to help algorithmic researchers test their ideas, demonstrate new findings, and teach algorithm design in the classroom. Within the broad applications of algorithm visualization, there remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effects in 3D environments. Using modern Java programming technologies, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.
Micro-CT images reconstruction and 3D visualization for small animal studying
NASA Astrophysics Data System (ADS)
Gong, Hui; Liu, Qian; Zhong, Aijun; Ju, Shan; Fang, Quan; Fang, Zheng
2005-01-01
A small-animal x-ray micro computed tomography (micro-CT) system has been constructed to screen laboratory small animals and organs. The micro-CT system consists of dual fiber-optic taper-coupled CCD detectors with a field-of-view of 25x50 mm2, a microfocus x-ray source, and a rotational subject holder. For accurate localization of the rotation center, the coincidence between the axis of rotation and the center of the image was studied by calibration with a polymethylmethacrylate cylinder. Feldkamp's filtered back-projection cone-beam algorithm is adopted for three-dimensional reconstruction because the effective cone-beam angle of the micro-CT system is 5.67°. A 200x1024x1024 matrix of micro-CT data is obtained with a magnification of 1.77 and a pixel size of 31x31 μm2. In our reconstruction software, the output image size of the micro-CT slice data, the magnification factor, and the rotation step angle can be modified to balance computational efficiency against the reconstruction region. The reconstructed image matrix data are processed and visualized with the Visualization Toolkit (VTK). Surface rendering of the reconstructed data is parallelized with VTK's data parallelism in order to improve computing speed: processing a 512x512x512 matrix dataset takes about 1/20 of the serial-program time when 30 CPUs are used. The voxel size is 54x54x108 μm3. The reconstruction and 3D visualization images of a laboratory rat ear are presented.
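The core of the reconstruction pipeline named here, filter each projection, then back-project it along its acquisition angle, is easiest to see in a 2D parallel-beam toy version. A minimal sketch with a Ram-Lak (ramp) filter; Feldkamp's actual algorithm extends this to cone-beam geometry with row-dependent weighting and interpolation, none of which is shown.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal 2D parallel-beam filtered back-projection.

    `sinogram` has shape (n_angles, n_detectors). A toy version of the
    filter-then-backproject idea, not Feldkamp's cone-beam algorithm.
    """
    n_angles, n_det = sinogram.shape
    # Ram-Lak (ramp) filter applied in the Fourier domain, per projection.
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs),
                                   axis=1))
    # Back-project each filtered projection along its acquisition angle.
    mid = (n_det - 1) / 2.0
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang) + mid   # detector coordinate
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]                            # nearest-neighbor lookup
    return recon * np.pi / n_angles
```

A sinogram of a single point (a delta at the central detector for every angle) reconstructs to a peak at the image center, which is a quick sanity check for the geometry.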
SSC San Diego Biennial Review 2003. Command and Control
2003-01-01
systems. IMAT systems use scientific visualizations, three-dimensional graphics, and animations to illustrate complex physical interactions in mission... Again, interactive animations are used to explain underlying concepts. For example, for principles of beamforming using a phased array, a three... solve complex problems. Experts type natural language text, use mouse clicks to provide hints for explanation generation, and use mouse clicks to
Lindemann, J P; Kern, R; Michaelis, C; Meyer, P; van Hateren, J H; Egelhaaf, M
2003-03-01
A high-speed panoramic visual stimulation device is introduced which is suitable to analyse visual interneurons during stimulation with rapid image displacements as experienced by fast moving animals. The responses of an identified motion sensitive neuron in the visual system of the blowfly to behaviourally generated image sequences are very complex and hard to predict from the established input circuitry of the neuron. This finding suggests that the computational significance of visual interneurons can only be assessed if they are characterised not only by conventional stimuli as are often used for systems analysis, but also by behaviourally relevant input.
Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.
Mid-level perceptual features contain early cues to animacy.
Long, Bria; Störmer, Viola S; Alvarez, George A
2017-06-01
While substantial work has focused on how the visual system achieves basic-level recognition, less work has asked about how it supports large-scale distinctions between objects, such as animacy and real-world size. Previous work has shown that these dimensions are reflected in our neural object representations (Konkle & Caramazza, 2013), and that objects of different real-world sizes have different mid-level perceptual features (Long, Konkle, Cohen, & Alvarez, 2016). Here, we test the hypothesis that animates and manmade objects also differ in mid-level perceptual features. To do so, we generated synthetic images of animals and objects that preserve some texture and form information ("texforms"), but are not identifiable at the basic level. We used visual search efficiency as an index of perceptual similarity, as search is slower when targets are perceptually similar to distractors. Across three experiments, we find that observers can find animals faster among objects than among other animals, and vice versa, and that these results hold when stimuli are reduced to unrecognizable texforms. Electrophysiological evidence revealed that this mixed-animacy search advantage emerges during early stages of target individuation, and not during later stages associated with semantic processing. Lastly, we find that perceived curvature explains part of the mixed-animacy search advantage and that observers use perceived curvature to classify texforms as animate/inanimate. Taken together, these findings suggest that mid-level perceptual features, including curvature, contain cues to whether an object may be animate versus manmade. We propose that the visual system capitalizes on these early cues to facilitate object detection, recognition, and classification.
Knowledge Acquisition with Static and Animated Pictures in Computer-Based Learning.
ERIC Educational Resources Information Center
Schnotz, Wolfgang; Grzondziel, Harriet
In educational settings, computers provide specific possibilities of visualizing information for instructional purposes. Besides the use of static pictures, computers can present animated pictures which allow exploratory manipulation by the learner and display the dynamic behavior of a system. This paper develops a theoretical framework for…
Engineering visualization utilizing advanced animation
NASA Technical Reports Server (NTRS)
Sabionski, Gunter R.; Robinson, Thomas L., Jr.
1989-01-01
Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.
A visual servo-based teleoperation robot system for closed diaphyseal fracture reduction.
Li, Changsheng; Wang, Tianmiao; Hu, Lei; Zhang, Lihai; Du, Hailong; Zhao, Lu; Wang, Lifeng; Tang, Peifu
2015-09-01
Common fracture treatments include open reduction and intramedullary nailing technology. However, these methods have disadvantages such as intraoperative X-ray radiation, delayed union or nonunion and postoperative rotation. Robots provide a novel solution to the aforementioned problems while posing new challenges. Against this scientific background, we develop a visual servo-based teleoperation robot system. In this article, we present a robot system, analyze the visual servo-based control system in detail and develop path planning for fracture reduction, inverse kinematics, and output forces of the reduction mechanism. A series of experimental tests is conducted on a bone model and an animal bone. The experimental results demonstrate the feasibility of the robot system. The robot system uses preoperative computed tomography data to realize high precision and perform minimally invasive teleoperation for fracture reduction via the visual servo-based control system while protecting surgeons from radiation. © IMechE 2015.
Suzuki, Daichi G; Murakami, Yasunori; Escriva, Hector; Wada, Hiroshi
2015-02-01
Vertebrates are equipped with so-called camera eyes, which provide them with image-forming vision. Vertebrate image-forming vision evolved independently from that of other animals and is regarded as a key innovation for enhancing predatory ability and ecological success. Evolutionary changes in the neural circuits, particularly the visual center, were central for the acquisition of image-forming vision. However, the evolutionary steps, from protochordates to jaw-less primitive vertebrates and then to jawed vertebrates, remain largely unknown. To bridge this gap, we present the detailed development of retinofugal projections in the lamprey, the neuroarchitecture in amphioxus, and the brain patterning in both animals. Both the lateral eye in larval lamprey and the frontal eye in amphioxus project to a light-detecting visual center in the caudal prosencephalic region marked by Pax6, which possibly represents the ancestral state of the chordate visual system. Our results indicate that the visual system of the larval lamprey represents an evolutionarily primitive state, forming a link from protochordates to vertebrates and providing a new perspective of brain evolution based on developmental mechanisms and neural functions. © 2014 Wiley Periodicals, Inc.
Applications of CFD and visualization techniques
NASA Technical Reports Server (NTRS)
Saunders, James H.; Brown, Susan T.; Crisafulli, Jeffrey J.; Southern, Leslie A.
1992-01-01
In this paper, three applications are presented to illustrate current techniques for flow calculation and visualization. The first two applications use a commercial computational fluid dynamics (CFD) code, FLUENT, performed on a Cray Y-MP. The results are animated with the aid of data visualization software, apE. The third application simulates a particulate deposition pattern using techniques inspired by developments in nonlinear dynamical systems. These computations were performed on personal computers.
Superior visual performance in nocturnal insects: neural principles and bio-inspired technologies
NASA Astrophysics Data System (ADS)
Warrant, Eric J.
2016-04-01
At night, our visual capacities are severely reduced, with a complete loss in our ability to see colour and a dramatic loss in our ability to see fine spatial and temporal details. This is not the case for many nocturnal animals, notably insects. Our recent work, particularly on fast-flying moths and bees and on ball-rolling dung beetles, has shown that nocturnal animals are able to distinguish colours, to detect faint movements, to learn visual landmarks, to orient to the faint pattern of polarised light produced by the moon and to navigate using the stars. These impressive visual abilities are the result of exquisitely adapted eyes and visual systems, the product of millions of years of evolution. Nocturnal animals typically have highly sensitive eye designs and visual neural circuitry that is optimised for extracting reliable information from dim and noisy visual images. Even though we are only at the threshold of understanding the neural mechanisms responsible for reliable nocturnal vision, growing evidence suggests that the neural summation of photons in space and time is critically important: even though vision in dim light becomes necessarily coarser and slower, it also becomes significantly more reliable. We explored the benefits of spatiotemporal summation by creating a computer algorithm that mimicked nocturnal visual processing strategies. This algorithm dramatically increased the reliability of video collected in dim light, including the preservation of colour, strengthening evidence that summation strategies are essential for nocturnal vision.
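The spatiotemporal summation strategy described in this abstract can be illustrated with a short sketch: a hypothetical denoiser that replaces each pixel by the mean of a small spatial neighbourhood across a few consecutive frames, trading resolution for reliability. The function name and window sizes are illustrative assumptions, not details of the published algorithm.

```python
def spatiotemporal_sum(frames, t_win=3, s_win=1):
    """Denoise dim-light video by averaging intensities over a temporal
    window (t_win frames) and a spatial neighbourhood (s_win pixels in
    each direction), mimicking neural summation of photons in space
    and time. frames: list of 2D lists of pixel intensities."""
    n_frames = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    out = []
    for t in range(n_frames):
        frame = [[0.0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                total, count = 0.0, 0
                for dt in range(-(t_win // 2), t_win // 2 + 1):
                    tt = t + dt
                    if not 0 <= tt < n_frames:
                        continue
                    for dy in range(-s_win, s_win + 1):
                        for dx in range(-s_win, s_win + 1):
                            yy, xx = y + dy, x + dx
                            if 0 <= yy < height and 0 <= xx < width:
                                total += frames[tt][yy][xx]
                                count += 1
                frame[y][x] = total / count
        out.append(frame)
    return out
```

Widening `t_win` or `s_win` increases reliability in dimmer footage at the cost of coarser, slower vision, mirroring the trade-off described above.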
Mitchell, Donald E
2008-01-01
To review work on animal models of deprivation amblyopia that points to a special role for binocular visual input in the development of spatial vision and as a component of occlusion (patching) therapy for amblyopia. The studies reviewed employ behavioural methods to measure the effects of various early experiential manipulations on the development of the visual acuity of the two eyes. Short periods of concordant binocular input, if continuous, can offset much longer daily periods of monocular deprivation to allow the development of normal visual acuity in both eyes. It appears that the visual system does not weigh all visual input equally in terms of its ability to impact on the development of vision but instead places greater weight on concordant binocular exposure. Experimental models of patching therapy for amblyopia imposed on animals in which amblyopia had been induced by a prior period of early monocular deprivation, indicate that the benefits of patching therapy may be only temporary and decline rapidly after patching is discontinued. However, when combined with critical amounts of binocular visual input each day, the benefits of patching can be both heightened and made permanent. Taken together with demonstrations of retained binocular connections in the visual cortex of monocularly deprived animals, a strong argument is made for inclusion of specific training of stereoscopic vision for part of the daily periods of binocular exposure that should be incorporated as part of any patching protocol for amblyopia.
Vision in the dimmest habitats on earth.
Warrant, Eric
2004-10-01
A very large proportion of the world's animal species are active in dim light, either under the cover of night or in the depths of the sea. The worlds they see can be dim and extended, with light reaching the eyes from all directions at once, or they can be composed of bright point sources, like the multitudes of stars seen in a clear night sky or the rare sparks of bioluminescence that are visible in the deep sea. The eye designs of nocturnal and deep-sea animals have evolved in response to these two very different types of habitats, being optimised for maximum sensitivity to extended scenes, or to point sources, or to both. After describing the many visual adaptations that have evolved across the animal kingdom for maximising sensitivity to extended and point-source scenes, I then use case studies from the recent literature to show how these adaptations have endowed nocturnal animals with excellent vision. Nocturnal animals can see colour and negotiate dimly illuminated obstacles during flight. They can also navigate using learned terrestrial landmarks, the constellations of stars or the dim pattern of polarised light formed around the moon. The conclusion from these studies is clear: nocturnal habitats are just as rich in visual details as diurnal habitats are, and nocturnal animals have evolved visual systems capable of exploiting them. The same is certainly true of deep-sea animals, as future research will no doubt reveal.
Neural Pathways Conveying Nonvisual Information to the Visual Cortex
2013-01-01
The visual cortex has traditionally been considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to non-visual stimuli, especially in individuals with congenital visual deprivation, indicating the supramodal nature of functional representation in the visual cortex. To understand the neural substrates of cross-modal processing of non-visual signals in the visual cortex, we first establish the supramodal nature of the visual cortex. We then review how non-visual signals reach the visual cortex, and discuss whether these non-visual pathways are reshaped by early visual deprivation. Finally, the open question of the nature (stimulus-driven or top-down) of non-visual signals is also discussed. PMID:23840972
NASA Astrophysics Data System (ADS)
Lahti, Paul M.; Motyka, Eric J.; Lancashire, Robert J.
2000-05-01
A straightforward procedure is described to combine computation of molecular vibrational modes using commonly available molecular modeling programs with visualization of the modes using advanced features of the MDL Information Systems Inc. Chime World Wide Web browser plug-in. Minor editing of experimental spectra that are stored in the JCAMP-DX format allows linkage of IR spectral frequency ranges to Chime molecular display windows. The spectra and animation files can be combined by Hypertext Markup Language programming to allow interactive linkage between experimental spectra and computationally generated vibrational displays. Both the spectra and the molecular displays can be interactively manipulated to allow the user maximum control of the objects being viewed. This procedure should be very valuable not only for aiding students through visual linkage of spectra and various vibrational animations, but also for assisting them in learning the advantages and limitations of computational chemistry by comparison to experiment.
Marine bioacoustics and technology: The new world of marine acoustic ecology
NASA Astrophysics Data System (ADS)
Hastings, Mardi C.; Au, Whitlow W. L.
2012-11-01
Marine animals use sound for communication, navigation, predator avoidance, and prey detection. Thus the rise in acoustic energy associated with increasing human activity in the ocean has potential to impact the lives of marine animals. Thirty years ago marine bioacoustics primarily focused on evaluating effects of human-generated sound on hearing and behavior by testing captive animals and visually observing wild animals. Since that time rapidly changing electronic and computing technologies have yielded three tools that revolutionized how bioacousticians study marine animals. These tools are (1) portable systems for measuring electrophysiological auditory evoked potentials, (2) miniaturized tags equipped with positioning sensors and acoustic recording devices for continuous short-term acoustical observation rather than intermittent visual observation, and (3) passive acoustic monitoring (PAM) systems for remote long-term acoustic observations at specific locations. The beauty of these breakthroughs is their direct applicability to wild animals in natural habitats rather than only to animals held in captivity. Hearing capabilities of many wild species including polar bears, beaked whales, and reef fishes have now been assessed by measuring their auditory evoked potentials. Miniaturized acoustic tags temporarily attached to an animal to record its movements and acoustic environment have revealed the acoustic foraging behavior of sperm and beaked whales. Now tags are being adapted to fishes in effort to understand their behavior in the presence of noise. Moving and static PAM systems automatically detect and characterize biological and physical features of an ocean area without adding any acoustic energy to the environment. PAM is becoming a powerful technique for understanding and managing marine habitats. 
This paper will review the influence of these transformative tools on the knowledge base of marine bioacoustics and elucidation of relationships between marine animals and their acoustic environment, leading to a new, rapidly growing field of marine acoustic ecology.
Visualizing SPH Cataclysmic Variable Accretion Disk Simulations with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.; Wood, Matthew A.
2015-01-01
We present innovative ways to use Blender, a 3D graphics package, to visualize smoothed particle hydrodynamics particle data of cataclysmic variable accretion disks. We focus on the use of shape key data constructs to increase data I/O and manipulation speed. The implementation of the methods outlined allows for compositing of the various visualization layers into a final animation. Viewing the disk in 3D from different angles allows for a visual analysis of the physical system and orbits. The techniques have a wide-ranging set of applications in astronomical visualization, including both observational and theoretical data.
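Blender's shape keys store per-vertex offsets that are blended into a base mesh by animated weights, which is what makes them efficient for frame-by-frame particle data. A minimal pure-Python sketch of that blending (the data layout and names are assumptions for illustration, not Blender's API) is:

```python
def apply_shape_keys(base, keys, weights):
    """Blend a base mesh with shape keys. Each key stores one offset
    per vertex; the deformed vertex is base + sum(weight_k * offset_k).
    base: list of (x, y, z) vertices; keys: dict name -> list of
    (x, y, z) offsets; weights: dict name -> blend factor in [0, 1]."""
    out = []
    for i, (x, y, z) in enumerate(base):
        dx = sum(weights[k] * keys[k][i][0] for k in keys)
        dy = sum(weights[k] * keys[k][i][1] for k in keys)
        dz = sum(weights[k] * keys[k][i][2] for k in keys)
        out.append((x + dx, y + dy, z + dz))
    return out
```

Animating the weights over time then deforms one mesh through the particle positions of successive simulation snapshots, instead of rebuilding the geometry every frame.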
The visual system of diurnal raptors: updated review.
González-Martín-Moro, J; Hernández-Verdejo, J L; Clement-Corral, A
2017-05-01
Diurnal birds of prey (raptors) are considered the group of animals with the highest visual acuity (VA). The purpose of this work is to review all the information recently published about the visual system of this group of animals. A bibliographic search was performed in PubMed. The algorithm used was (raptor OR falcon OR kestrel OR hawk OR eagle) AND (vision OR «visual acuity» OR eye OR macula OR retina OR fovea OR «nictitating membrane» OR «chromatic vision» OR ultraviolet). The search was restricted to the «Title» and «Abstract» fields, and to non-human species, without time restriction. The proposed algorithm located 97 articles. Birds of prey are endowed with the highest VA of the animal kingdom. However, most of the works study one individual or a small group of individuals, and the methodology is heterogeneous. The most studied bird is the Peregrine falcon (Falco peregrinus), with an estimated VA of 140 cycles/degree. Some eagles are endowed with similar VA. The tubular shape of the eye, the large pupil, and a high density of photoreceptors make this extraordinary VA possible. In some species, histology and optical coherence tomography demonstrate the presence of two foveas. The nasal fovea (deep fovea) has higher VA. Nevertheless, the exact function of each fovea is unknown. The vitreous contained in the deep fovea could behave as a third lens, adding some magnification to the optic system. Copyright © 2017 Sociedad Española de Oftalmología. Publicado por Elsevier España, S.L.U. All rights reserved.
Visual defects in a mouse model of fetal alcohol spectrum disorder.
Lantz, Crystal L; Pulimood, Nisha S; Rodrigues-Junior, Wandilson S; Chen, Ching-Kang; Manhaes, Alex C; Kalatsky, Valery A; Medina, Alexandre Esteves
2014-01-01
Alcohol consumption during pregnancy can lead to a multitude of neurological problems in offspring, varying from subtle behavioral changes to severe mental retardation. These alterations are collectively referred to as Fetal Alcohol Spectrum Disorders (FASD). Early alcohol exposure can strongly affect the visual system, and children with FASD can exhibit an amblyopia-like pattern of visual acuity deficits even in the absence of optical and oculomotor disruption. Here, we test whether early alcohol exposure can lead to a disruption in visual acuity, using a model of FASD that mimics alcohol consumption in the last months of human gestation. To accomplish this, mice were exposed to ethanol (5 g/kg i.p.) or saline on postnatal days (P) 5, 7, and 9. Two to three weeks later we recorded visually evoked potentials to assess spatial frequency detection and contrast sensitivity, conducted electroretinography (ERG) to further assess visual function, and imaged retinotopy using optical imaging of intrinsic signals. We observed that animals exposed to ethanol displayed spatial frequency acuity curves similar to controls. However, ethanol-treated animals showed a significant deficit in contrast sensitivity. Moreover, ERGs revealed a marked decrease in both a- and b-wave amplitudes, and optical imaging suggests that both elevation and azimuth maps in ethanol-treated animals have a 10-20° greater map tilt compared to saline-treated controls. Overall, our findings suggest that binge alcohol drinking restricted to the last months of gestation in humans can lead to marked deficits in visual function.
Camouflage and visual perception
Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt
2008-01-01
How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671
Enciso, R; Memon, A; Mah, J
2003-01-01
The research goal at the Craniofacial Virtual Reality Laboratory of the School of Dentistry in conjunction with the Integrated Media Systems Center, School of Engineering, University of Southern California, is to develop computer methods to accurately visualize patients in three dimensions using advanced imaging and data acquisition devices such as cone-beam computerized tomography (CT) and mandibular motion capture. Data from these devices were integrated for three-dimensional (3D) patient-specific visualization, modeling and animation. Generic methods are in development that can be used with common CT image format (DICOM), mesh format (STL) and motion data (3D position over time). This paper presents preliminary descriptive studies on: 1) segmentation of the lower and upper jaws with two types of CT data: (a) traditional whole head CT data and (b) the new dental Newtom CT; 2) manual integration of accurate 3D tooth crowns with the segmented lower jaw 3D model; 3) realistic patient-specific 3D animation of the lower jaw.
Ensminger, Amanda L.; Shawkey, Matthew D.; Lucas, Jeffrey R.; Fernández-Juricic, Esteban
2017-01-01
Variation in male signal production has been extensively studied because of its relevance to animal communication and sexual selection. Although we now know much about the mechanisms that can lead to variation between males in the properties of their signals, there is still a general assumption that there is little variation in terms of how females process these male signals. Variation between females in signal processing may lead to variation between females in how they rank individual males, meaning that one single signal may not be universally attractive to all females. We tested this assumption in a group of female wild-caught brown-headed cowbirds (Molothrus ater), a species that uses a male visual signal (e.g. a wingspread display) to make its mate-choice decisions. We found that females varied in two key parameters of their visual sensory systems related to chromatic and achromatic vision: cone densities (both total and proportions) and cone oil droplet absorbance. Using visual chromatic and achromatic contrast modeling, we then found that this between-individual variation in visual physiology leads to significant between-individual differences in how females perceive chromatic and achromatic male signals. These differences may lead to variation in female preferences for male visual signals, which would provide a potential mechanism for explaining individual differences in mate-choice behavior. PMID:29247048
Vision for navigation: What can we learn from ants?
Graham, Paul; Philippides, Andrew
2017-09-01
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Visual impairment in FOXG1-mutated individuals and mice.
Boggio, E M; Pancrazi, L; Gennaro, M; Lo Rizzo, C; Mari, F; Meloni, I; Ariani, F; Panighini, A; Novelli, E; Biagioni, M; Strettoi, E; Hayek, J; Rufa, A; Pizzorusso, T; Renieri, A; Costa, M
2016-06-02
The Forkhead Box G1 (FOXG1 in humans, Foxg1 in mice) gene encodes a DNA-binding transcription factor essential for the development of the telencephalon in the mammalian forebrain. Mutations in FOXG1 have been reported to be involved in the onset of Rett Syndrome, for which sequence alterations of MECP2 and CDKL5 are known. While visual alterations are not classical hallmarks of Rett syndrome, an increasing body of evidence shows visual impairment in patients and in MeCP2 and CDKL5 animal models. Herein we focused on the functional role of FOXG1 in the visual system of animal models (Foxg1(+/Cre) mice) and of a cohort of subjects carrying FOXG1 mutations or deletions. Visual physiology of Foxg1(+/Cre) mice was assessed by visually evoked potentials, which revealed a significant reduction in response amplitude and visual acuity with respect to wild-type littermates. Morphological investigation showed abnormalities in the organization of excitatory/inhibitory circuits in the visual cortex. No alterations were observed in retinal structure. By examining a cohort of FOXG1-mutated individuals with a panel of neuro-ophthalmological assessments, we found that all of them exhibited visual alterations compatible with high-level visual dysfunctions. In conclusion, our data show that Foxg1 haploinsufficiency results in an impairment of mouse and human visual cortical function. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
A visual system for scoring body condition of Asian elephants (Elephas maximus).
Wijeyamohan, Shanmugasundaram; Treiber, Kibby; Schmitt, Dennis; Santiapillai, Charles
2015-01-01
A body condition score (BCS) may provide information on the health or production potential of an animal; it may also reflect the suitability of the environment to maintain an animal population. Thus assessing the BCS of Asian elephants is important for their management. There is a need for a robust BCS applicable to both wild and captive elephants of all age categories based on the minimum and maximum possible subcutaneous body fat and muscle deposits. The visually based system for scoring the body condition of elephants presented here satisfies these criteria and is quick, inexpensive, non-invasive and user-friendly in the field. The BCS scale correlates (P < 0.05) with morphometric indices such as weight, girth, and skin fold measures. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ingley, Spencer J.; Rahmani Asl, Mohammad; Wu, Chengde; Cui, Rongfeng; Gadelhak, Mahmoud; Li, Wen; Zhang, Ji; Simpson, Jon; Hash, Chelsea; Butkowski, Trisha; Veen, Thor; Johnson, Jerald B.; Yan, Wei; Rosenthal, Gil G.
2015-12-01
Experimental approaches to studying behaviors based on visual signals are ubiquitous, yet these studies are limited by the difficulty of combining realistic models with the manipulation of signals in isolation. Computer animations are a promising way to break this trade-off. However, animations are often prohibitively expensive and difficult to program, thus limiting their utility in behavioral research. We present anyFish 2.0, a user-friendly platform for creating realistic animated 3D fish. anyFish 2.0 dramatically expands anyFish's utility by allowing users to create animations of members of several groups of fish from model systems in ecology and evolution (e.g., sticklebacks, Poeciliids, and zebrafish). The visual appearance and behaviors of the model can easily be modified. We have added several features that facilitate more rapid creation of realistic behavioral sequences. anyFish 2.0 provides a powerful tool that will be of broad use in animal behavior and evolution and serves as a model for transparency, repeatability, and collaboration.
NASA Astrophysics Data System (ADS)
Guy, Nathaniel
This thesis explores new ways of looking at telemetry data, from a time-correlative perspective, in order to see patterns within the data that may suggest root causes of system faults. It was thought initially that visualizing an animated Pearson Correlation Coefficient (PCC) matrix for telemetry channels would be sufficient to give new understanding; however, testing showed that the high dimensionality of this approach, and the difficulty of examining change over time, impeded understanding. Different correlative techniques, combined with the time curve visualization proposed by Bach et al. (2015), were adapted to visualize both raw telemetry and telemetry data correlations. Review revealed that these new techniques give insights into the data and an intuitive grasp of data families, which shows the effectiveness of this approach for enhancing system understanding and assisting with root cause analysis for complex aerospace systems.
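A Pearson Correlation Coefficient matrix over telemetry channels, and the sliding-window sequence needed to animate it over time, can be sketched as follows; the channel names and window parameters are hypothetical.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def pcc_matrix(channels):
    """channels: dict of name -> list of samples (equal lengths).
    Returns a nested dict of pairwise Pearson coefficients."""
    names = list(channels)
    return {a: {b: pearson(channels[a], channels[b]) for b in names}
            for a in names}

def windowed_pcc(channels, width, step):
    """One PCC matrix per sliding window: the sequence of matrices
    that an animated correlation view would render frame by frame."""
    n = min(len(v) for v in channels.values())
    return [pcc_matrix({k: v[s:s + width] for k, v in channels.items()})
            for s in range(0, n - width + 1, step)]
```

Each matrix in the sequence is one frame of the animation; the thesis's observation about high dimensionality corresponds to the quadratic growth of matrix cells with channel count.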
Stowasser, Annette; Mohr, Sarah; Buschbeck, Elke; Vilinsky, Ilya
2015-01-01
Students learn best when projects are multidisciplinary, hands-on, and provide ample opportunity for self-driven investigation. We present a teaching unit that leads students to explore relationships between sensory function and ecology. Field studies, which are rare in neurobiology education, are combined with laboratory experiments that assess visual properties of insect eyes, using electroretinography (ERG). Comprised of nearly one million species, insects are a diverse group of animals, living in nearly all habitats and ecological niches. Each of these lifestyles puts different demands on their visual systems, and accordingly, insects display a wide array of eye organizations and specializations. Physiologically relevant differences can be measured using relatively simple extracellular electrophysiological methods that can be carried out with standard equipment, much of which is already in place in most physiology laboratories. The teaching unit takes advantage of the large pool of locally available species, some of which likely show specialized visual properties that can be measured by students. In the course of the experiments, students collect local insects or other arthropods of their choice, are guided to formulate hypotheses about how the visual system of "their" insects might be tuned to the lifestyle of the species, and use ERGs to investigate the insects' visual response dynamics, and both chromatic and temporal properties of the visual system. Students are then guided to interpret their results in both a comparative physiological and ecological context. This set of experiments closely mirrors authentic research and has proven to be a popular, informative and highly engaging teaching tool.
Oculomotor guidance and capture by irrelevant faces.
Devue, Christel; Belopolsky, Artem V; Theeuwes, Jan
2012-01-01
Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.
ERIC Educational Resources Information Center
Kwasu, Isaac Ali
2015-01-01
The study seeks to reveal the importance of instructional visuals in educational systems in Bauchi, Nigeria. Instructional visuals play a very significant role as a medium of communication for learning. The research for this article was motivated by this understanding of the need. The study was carried out in Nigeria in one of the most challenging states,…
Evolution and ecology of retinal photoreception in early vertebrates.
Collin, Shaun P
2010-01-01
Visual ecology or the relationship between the visual system of an animal and its environment has proven to be a crucial research field for establishing general concepts of adaptation, specialization and evolution. The visual neuroscientist is indeed confronted with a plethora of different visual characteristics, each seemingly optimised for each species' ecological niche, but often without a clear understanding of the evolutionary constraints at play. However, before we are able to fully understand the influence(s) of ecology and phylogeny on visual system design in vertebrates, it is first necessary to understand the basic bauplan of key representatives of each taxa. This review examines photoreception in hagfishes, lampreys, cartilaginous fishes and lungfishes with an eye to their ecology using a range of neurobiological methods including anatomy, microspectrophotometry and molecular genetics. These early vertebrates represent critical stages in evolution and surprisingly possess a level of visual complexity that is almost unrivalled in other vertebrates. 2010 S. Karger AG, Basel.
Rutishauser, Ueli; Kotowicz, Andreas; Laurent, Gilles
2013-01-01
Brain activity often consists of interactions between internal (ongoing) and external (sensory) activity streams, resulting in complex, distributed patterns of neural activity. Investigation of such interactions could benefit from closed-loop experimental protocols in which one stream can be controlled depending on the state of the other. We describe here methods to present rapid and precisely timed visual stimuli to awake animals, conditional on features of the animal's ongoing brain state; those features are the presence, power and phase of oscillations in local field potentials (LFP). The system can process up to 64 channels in real time. We quantified its performance using simulations, synthetic data and animal experiments (chronic recordings in the dorsal cortex of awake turtles). The delay from detection of an oscillation to the onset of a visual stimulus on an LCD screen was 47.5 ms, and visual-stimulus onset could be locked to the phase of ongoing oscillations at any frequency ≤40 Hz. Our software's architecture is flexible, allowing on-the-fly modifications by experimenters and the addition of new closed-loop control and analysis components through plugins. The source code of our system, "StimOMatic", is freely available as open source. PMID:23473800
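The phase-locking step this abstract describes (triggering a stimulus at a chosen phase of an ongoing LFP oscillation) can be illustrated with an offline sketch. The StimOMatic internals are not given in the abstract, so the function names and the FFT-based band-limiting below are illustrative assumptions, not the published implementation; a real-time system would use causal filters rather than whole-trace FFTs.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: returns the complex analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_phase(lfp, fs, f_lo, f_hi):
    """Band-limit the LFP with a hard FFT mask, then take the analytic phase."""
    n = len(lfp)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    X = np.fft.fft(lfp)
    X[(np.abs(freqs) < f_lo) | (np.abs(freqs) > f_hi)] = 0.0
    band = np.real(np.fft.ifft(X))
    return np.angle(analytic_signal(band))

# Example: find candidate trigger times for a noisy 10 Hz oscillation
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
lfp = np.cos(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
phase = instantaneous_phase(lfp, fs, 8.0, 12.0)
sb = np.signbit(phase)
triggers = np.where(sb[1:] != sb[:-1])[0]  # samples where the phase changes sign
```

In the paper's closed-loop setting, the equivalent computation runs on a sliding window and the stimulus is drawn as soon as the estimated phase crosses the target value, within the reported 47.5 ms latency budget.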
Colour thresholds in a coral reef fish
Vorobyev, M.; Marshall, N. J.
2016-01-01
Coral reef fishes are among the most colourful animals in the world. Given the diversity of lifestyles and habitats on the reef, it is probable that in many instances coloration is a compromise between crypsis and communication. However, human observation of this coloration is biased by our primate visual system. Most animals have visual systems that are ‘tuned’ differently from ours, optimized for different parts of the visible spectrum. To understand reef fish colours, we need to reconstruct the appearance of colourful patterns and backgrounds as they are seen through the eyes of fish. Here, the coral reef associated triggerfish, Rhinecanthus aculeatus, was tested behaviourally to determine the limits of its colour vision. This is the first demonstration of behavioural colour discrimination thresholds in a coral reef species and is a critical step in our understanding of communication and speciation in this vibrant, colourful habitat. Fish were trained to discriminate between a reward colour stimulus and a series of non-reward colour stimuli, and the discrimination thresholds were found to correspond well with predictions based on the receptor-noise-limited visual model and the anatomy of the eye. The colour discrimination abilities of reef fish, and of a variety of other animals, can therefore now be predicted using the parameters described here. PMID:27703704
Visualizing projected Climate Changes - the CMIP5 Multi-Model Ensemble
NASA Astrophysics Data System (ADS)
Böttinger, Michael; Eyring, Veronika; Lauer, Axel; Meier-Fleischer, Karin
2017-04-01
Large ensembles add an additional dimension to climate model simulations. Internal variability of the climate system can be assessed for example by multiple climate model simulations with small variations in the initial conditions or by analyzing the spread in large ensembles made by multiple climate models under common protocols. This spread is often used as a measure of uncertainty in climate projections. In the context of the fifth phase of the WCRP's Coupled Model Intercomparison Project (CMIP5), more than 40 different coupled climate models were employed to carry out a coordinated set of experiments. Time series of the development of integral quantities such as the global mean temperature change for all models visualize the spread in the multi-model ensemble. A similar approach can be applied to 2D-visualizations of projected climate changes such as latitude-longitude maps showing the multi-model mean of the ensemble by adding a graphical representation of the uncertainty information. This has been demonstrated for example with static figures in chapter 12 of the last IPCC report (AR5) using different so-called stippling and hatching techniques. In this work, we focus on animated visualizations of multi-model ensemble climate projections carried out within CMIP5 as a way of communicating climate change results to the scientific community as well as to the public. We take a closer look at measures of robustness or uncertainty used in recent publications suitable for animated visualizations. Specifically, we use the ESMValTool [1] to process and prepare the CMIP5 multi-model data in combination with standard visualization tools such as NCL and the commercial 3D visualization software Avizo to create the animations. We compare different visualization techniques such as height fields or shading with transparency for creating animated visualization of ensemble mean changes in temperature and precipitation including corresponding robustness measures. 
[1] Eyring, V., Righi, M., Lauer, A., Evaldsson, M., Wenzel, S., Jones, C., Anav, A., Andrews, O., Cionni, I., Davin, E. L., Deser, C., Ehbrecht, C., Friedlingstein, P., Gleckler, P., Gottschaldt, K.-D., Hagemann, S., Juckes, M., Kindermann, S., Krasting, J., Kunert, D., Levine, R., Loew, A., Mäkelä, J., Martin, G., Mason, E., Phillips, A. S., Read, S., Rio, C., Roehrig, R., Senftleben, D., Sterl, A., van Ulft, L. H., Walton, J., Wang, S., and Williams, K. D.: ESMValTool (v1.0) - a community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP, Geosci. Model Dev., 9, 1747-1802, doi:10.5194/gmd-9-1747-2016, 2016.
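The stippling/hatching idea described in this record reduces, in its simplest form, to two arrays per grid cell: the multi-model mean change and a robustness mask. A minimal numpy sketch of one common criterion (sign agreement among models) follows; the 80% threshold is illustrative, not the specific robustness measure used by the authors.

```python
import numpy as np

def ensemble_summary(changes, agreement=0.8):
    """Summarize a multi-model ensemble of projected changes.

    changes: array of shape (n_models, ny, nx).
    Returns the multi-model mean and a boolean 'robust' mask marking
    cells where at least `agreement` of the models agree on the sign
    of the mean change (candidates for stippling in a map plot).
    """
    mean = changes.mean(axis=0)
    same_sign = np.sign(changes) == np.sign(mean)
    frac = same_sign.mean(axis=0)
    return mean, frac >= agreement
```

In an animated visualization, `mean` drives the colour (or height) field per time step while the mask drives the stippling overlay, so robustness is communicated frame by frame alongside the projected change.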
Moving in Dim Light: Behavioral and Visual Adaptations in Nocturnal Ants.
Narendra, Ajay; Kamhi, J Frances; Ogawa, Yuri
2017-11-01
Visual navigation is a benchmark information processing task that can be used to identify the consequences of being active in dim-light environments. Visual navigational information that animals use during the day includes celestial cues, such as the sun or the pattern of polarized skylight, and terrestrial cues, such as the entire panorama, canopy pattern, or significant salient features in the landscape. At night, some of these navigational cues are either unavailable or are significantly dimmer or less conspicuous than during the day. Even under these circumstances, animals navigate between locations of importance. Ants are a tractable system for studying navigation during day and night because the fine-scale movement of individual animals can be recorded in high spatial and temporal detail. Ant species range from strictly diurnal to crepuscular to nocturnal. In addition, a number of species have the ability to change from a day-active to a night-active lifestyle owing to environmental demands. Ants also offer an opportunity to identify the evolution of sensory structures for discrete temporal niches, not only between species but also within a single species. Their unique caste system, with an exclusively pedestrian mode of locomotion in workers and an exclusive life on the wing in males, allows us to disentangle sensory adaptations that cater for different lifestyles. In this article, we review the visual navigational abilities of nocturnal ants and identify the optical and physiological adaptations they have evolved for being efficient visual navigators in dim light. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment
NASA Astrophysics Data System (ADS)
Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.
2006-12-01
The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high-resolution, high-quality renderings of Earth sciences data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Coastal Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high-definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations are aimed at providing researchers with a broader context of sensor locations relative to geologic characteristics, at promoting their use as an educational resource in informal education settings and increasing public awareness, and at aiding researchers' proposals and presentations. These visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.
NASA Astrophysics Data System (ADS)
Li, Z.
2003-12-01
Application of GIS and visualization technology significantly contributes to the efficiency and success of developing ground-water models in the Twentynine Palms and San Jose areas, California. Visualizations from GIS and other tools can help to formulate the conceptual model by quickly revealing the basinwide geohydrologic characteristics and changes of a ground-water flow system, and by identifying the most influential components of system dynamics. In addition, 3-D visualizations and animations can help validate the conceptual formulation and the numerical calibration of the model by checking for model-input data errors, revealing cause and effect relationships, and identifying hidden design flaws in model layering and other critical flow components. Two case studies will be presented: The first is a desert basin (near the town of Twentynine Palms) characterized by a fault-controlled ground-water flow system. The second is a coastal basin (Santa Clara Valley, including the city of San Jose) characterized by complex, temporally variable flow components, including artificial recharge through a large system of ponds and stream channels, dynamically changing inter-layer flow from hundreds of multi-aquifer wells, pumping-driven subsidence and recovery, and climatically variable natural recharge. For the Twentynine Palms area, more than 10,000 historical ground-water level and water-quality measurements were retrieved from the USGS databases. The combined use of GIS and visualization tools allowed these data to be swiftly organized and interpreted, and depicted by water-level and water-quality maps with a variety of themes for different uses.
Overlaying and cross-correlating these maps with other hydrological, geological, geophysical, and geochemical data not only helped to quickly identify the major geohydrologic characteristics controlling the natural variation of hydraulic head in space, such as faults, basin-bottom altitude, and aquifer stratigraphies, but also helped to identify the temporal changes induced by human activities, such as pumping. For the San Jose area, a regional-scale ground-water/surface-water flow model was developed with 6 model layers, 360 monthly stress periods, and complex flow components. The model was visualized by creating animations for both hydraulic head and land subsidence. Cell-by-cell flow of individual flow components was also animated. These included simulated infiltration from climatically variable natural recharge, interlayer flow through multi-aquifer well bores, flow gains and losses along stream channels, and storage change in response to system recharge and discharge. These animations were used to examine consistency with other independent observations, such as the measured water-level distribution, mapped gaining and losing stream reaches, and InSAR-interpreted subsidence and uplift. In addition, they revealed enormous detail about the spatial and temporal variation of individual flow components and of the entire flow system, and thus significantly increased understanding of system dynamics and improved the accuracy of model simulations.
Fast I/O for Massively Parallel Applications
NASA Technical Reports Server (NTRS)
OKeefe, Matthew T.
1996-01-01
The two primary goals of this report were the design, construction, and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, further work addressed the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.
Comparison of animated jet stream visualizations
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Hoffmann, Peter
2016-04-01
The visualization of 3D atmospheric phenomena in space and time is still a challenging problem. In particular, multiple solutions for animated jet stream visualization have been produced in recent years, designed to visually analyze and communicate the jet and its impacts on weather circulation patterns and extreme weather events. This PICO integrates popular and new jet animation solutions and inter-compares them. The applied techniques (e.g. streamlines or line integral convolution) and parametrizations (color mapping, line lengths) are discussed with respect to visualization quality criteria and their suitability for certain visualization tasks (e.g. jet pattern and jet anomaly analysis, communicating their relevance for climate change).
Slater, Heather; Milne, Alice E; Wilson, Benjamin; Muers, Ross S; Balezeau, Fabien; Hunter, David; Thiele, Alexander; Griffiths, Timothy D; Petkov, Christopher I
2016-08-30
Head immobilisation is often necessary for neuroscientific procedures. A number of Non-invasive Head Immobilisation Systems (NHIS) for monkeys are available, but the need remains for a feasible integrated system combining a broad range of essential features. We developed an individualised macaque NHIS addressing several animal welfare and scientific needs. The system comprises a customised-to-fit facemask that can be used separately or combined with a back piece to form a full-head helmet. The system permits presentation of visual and auditory stimuli during immobilisation and provides mouth access for reward. The facemask was incorporated into an automated voluntary training system, allowing the animals to engage with it for increasing periods leading to full head immobilisation. We evaluated the system during performance on several auditory or visual behavioural tasks with testing sessions lasting 1.5-2h, used thermal imaging to monitor for and prevent pressure points, and measured head movement using MRI. A comprehensive evaluation of the system is provided in relation to several scientific and animal welfare requirements. Behavioural results were often comparable to those obtained with surgical implants. Cost-benefit analyses were conducted comparing the system with surgical options, highlighting the benefits of implementing the non-invasive option. The system has a number of potential applications and could be an important tool in neuroscientific research, when direct access to the brain for neuronal recordings is not required, offering the opportunity to conduct non-invasive experiments while improving animal welfare and reducing reliance on surgically implanted head posts. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
Nature as a model for biomimetic sensors
NASA Astrophysics Data System (ADS)
Bleckmann, H.
2012-04-01
Mammals, like humans, rely mainly on acoustic, visual and olfactory information. In addition, most also use tactile and thermal cues for object identification and spatial orientation. Most non-mammalian animals also possess a visual, acoustic and olfactory system. However, besides these systems they have developed a large variety of highly specialized sensors. For instance, pyrophilous insects use infrared organs for the detection of forest fires, while boas, pythons and pit vipers sense the infrared radiation emitted by prey animals. All cartilaginous and bony fishes, as well as some amphibians, have a mechanosensory lateral line. It is used for the detection of weak water motions and pressure gradients. For object detection and spatial orientation, many species of nocturnal fish employ active electrolocation. This review describes certain aspects of the detection and processing of infrared, mechano- and electrosensory information. It will be shown that the study of these seemingly exotic sensory systems can lead to discoveries that are useful for the construction of technical sensors and artificial control systems.
NASA Astrophysics Data System (ADS)
Toyoda, Masahiro; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi; Tsutsui, Tatsuo; Sankai, Yoshiyuki
A monopivot centrifugal blood pump, whose impeller is supported by a pivot bearing and a passive magnetic bearing, is under development for an implantable artificial heart. The hemolysis level is less than that of commercial centrifugal pumps, and the pump is as small as 160 mL in volume. To solve a problem of thrombus formation caused by fluid dynamics, flow visualization experiments and animal experiments have been undertaken. For flow visualization, a three-fold scale-up model, a high-speed video system, and particle tracking velocimetry software were used. To verify non-thrombogenicity, one-week animal experiments were conducted with sheep. The initially observed thrombus around the pivot was eliminated by unifying the separate washout holes into a single small centered hole, inducing high shear around the pivot. It was found that the thrombus contours corresponded to shear rates of 300 s-1 for red thrombus and 1300-1700 s-1 for white thrombus, respectively. Thus, flow visualization was found to be a useful tool for predicting thrombus location.
Vestibular-visual interactions in flight simulators
NASA Technical Reports Server (NTRS)
Clark, B.
1977-01-01
All 139 research papers published under this ten-year program are listed. Experimental work was carried out at the Ames Research Center involving man's sensitivity to rotational acceleration, and psychophysical functioning of the semicircular canals; vestibular-visual interactions and effects of other sensory systems were studied in flight simulator environments. Experiments also dealt with the neurophysiological vestibular functions of animals, and flight management investigations of man-vehicle interactions.
Design of an Image Fusion Phantom for a Small Animal microPET/CT Scanner Prototype
NASA Astrophysics Data System (ADS)
Nava-García, Dante; Alva-Sánchez, Héctor; Murrieta-Rodríguez, Tirso; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes
2010-12-01
Two separate microtomography systems recently developed at Instituto de Física, UNAM, produce anatomical (microCT) and physiological (microPET) images of small animals. In this work, the development and initial tests of an image fusion method based on fiducial markers for image registration between the two modalities are presented. A modular Helix/Line-Sources phantom was designed and constructed; this phantom contains fiducial markers that can be visualized in both imaging systems. The registration was carried out by solving the Procrustes rigid-body alignment problem to obtain the rotation and translation matrices required to align the two sets of images. The microCT/microPET image fusion of the Helix/Line-Sources phantom shows excellent visual coincidence between different structures, with a calculated target registration error of 0.32 mm.
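The rigid-body Procrustes alignment mentioned here has a closed-form solution via the singular value decomposition (the Kabsch algorithm). Below is a sketch of that registration step, assuming the fiducial markers have already been localized and matched across the two modalities; the function name and point layout are illustrative, not the authors' code.

```python
import numpy as np

def rigid_align(A, B):
    """Find rotation R and translation t minimizing ||R @ A_i + t - B_i||.

    A, B: (n, 3) arrays of matched fiducial-marker coordinates
    (e.g. microPET points and their microCT counterparts).
    """
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

With real, noisy marker positions the residual ||R A_i + t - B_i|| over held-out markers gives the target registration error, the figure reported as 0.32 mm in this work.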
Einstein, Michael C; Polack, Pierre-Olivier; Tran, Duy T; Golshani, Peyman
2017-05-17
Low-frequency membrane potential (Vm) oscillations were once thought to only occur in sleeping and anesthetized states. Recently, low-frequency Vm oscillations have been described in inactive awake animals, but it is unclear whether they shape sensory processing in neurons and whether they occur during active awake behavioral states. To answer these questions, we performed two-photon guided whole-cell Vm recordings from primary visual cortex layer 2/3 excitatory and inhibitory neurons in awake mice during passive visual stimulation and performance of visual and auditory discrimination tasks. We recorded stereotyped 3-5 Hz Vm oscillations in which the Vm baseline hyperpolarized as the Vm underwent high-amplitude rhythmic fluctuations lasting 1-2 s in duration. When 3-5 Hz Vm oscillations coincided with visual cues, excitatory neuron responses to preferred cues were significantly reduced. Despite this disruption to sensory processing, visual cues were critical for evoking 3-5 Hz Vm oscillations when animals performed discrimination tasks and passively viewed drifting grating stimuli. Using pupillometry and animal locomotive speed as indicators of arousal, we found that 3-5 Hz oscillations were not restricted to unaroused states and that they occurred equally in aroused and unaroused states. Therefore, low-frequency Vm oscillations play a role in shaping sensory processing in visual cortical neurons, even during active wakefulness and decision making. SIGNIFICANCE STATEMENT A neuron's membrane potential (Vm) strongly shapes how information is processed in sensory cortices of awake animals. Yet, very little is known about how low-frequency Vm oscillations influence sensory processing and whether they occur in aroused awake animals.
By performing two-photon guided whole-cell recordings from layer 2/3 excitatory and inhibitory neurons in the visual cortex of awake behaving animals, we found visually evoked stereotyped 3-5 Hz Vm oscillations that disrupt excitatory responsiveness to visual stimuli. Moreover, these oscillations occurred when animals were in high and low arousal states as measured by animal speed and pupillometry. These findings show, for the first time, that low-frequency Vm oscillations can significantly modulate sensory signal processing, even in awake active animals. Copyright © 2017 the authors 0270-6474/17/375084-15$15.00/0.
Implementation of ICARE learning model using visualization animation on biotechnology course
NASA Astrophysics Data System (ADS)
Hidayat, Habibi
2017-12-01
ICARE is a learning model that ensures students actively participate in the learning process, here supported by animated visualization media. ICARE comprises five key elements of the learning experience for children and adults: introduction, connection, application, reflection and extension. The ICARE system ensures that participants have the opportunity to apply what they have learned, so that the message delivered by the lecturer can be understood and retained by students for a long time. The model was deemed capable of improving learning outcomes and interest in learning in a Biotechnology course when applied with animated visualizations, motivating students to participate in the learning process and raising learning outcomes above their previous level. Applying the ICARE learning model with animated visualization in the Biotechnology course improved student results, from an average midterm score of 70.98 (75%) to an average final-test score of 71.57 (68.63%). Student interest also increased across the cycles of observed student activity: the first cycle obtained an average value of 33.5 (adequate category), the second cycle an average value of 36.5 (good category), and the third cycle an average value of 36.5 (good category).
Edge co-occurrences can account for rapid categorization of natural versus animal images
NASA Astrophysics Data System (ADS)
Perrinet, Laurent U.; Bednar, James A.
2015-06-01
Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
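The "association field" statistic described in this abstract boils down to histogramming the geometric relationship between pairs of detected edges. The toy sketch below computes one such second-order statistic, the distribution of pairwise relative orientations; edge extraction itself (the scale-space sparse-coding stage) is assumed already done, and the function name is illustrative.

```python
import numpy as np

def relative_orientation_histogram(edges, bins=8):
    """edges: (n, 3) array of (x, y, theta) with theta in radians, n >= 2.

    Returns a normalized histogram of pairwise orientation differences
    folded into [0, pi/2] (0 = parallel/collinear, pi/2 = orthogonal).
    """
    theta = edges[:, 2]
    i, j = np.triu_indices(len(theta), k=1)
    d = np.abs(theta[i] - theta[j]) % np.pi
    d = np.minimum(d, np.pi - d)          # fold to [0, pi/2]
    hist, _ = np.histogram(d, bins=bins, range=(0, np.pi / 2))
    return hist / hist.sum()
```

Per the paper's finding, animal images have more curved contours, which shifts histogram mass away from the parallel bin relative to man-made scenes dominated by collinear structure; a simple classifier can then compare such histograms between categories.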
Coventry, Kenny R; Christophel, Thomas B; Fehr, Thorsten; Valdés-Conroy, Berenice; Herrmann, Manfred
2013-08-01
When looking at static visual images, people often exhibit mental animation, anticipating visual events that have not yet happened. But what determines when mental animation occurs? Measuring mental animation using localized brain function (visual motion processing in the middle temporal and middle superior temporal areas, MT+), we demonstrated that animating static pictures of objects is dependent both on the functionally relevant spatial arrangement that objects have with one another (e.g., a bottle above a glass vs. a glass above a bottle) and on the linguistic judgment to be made about those objects (e.g., "Is the bottle above the glass?" vs. "Is the bottle bigger than the glass?"). Furthermore, we showed that mental animation is driven by functional relations and language separately in the right hemisphere of the brain but conjointly in the left hemisphere. Mental animation is not a unitary construct; the predictions humans make about the visual world are driven flexibly, with hemispheric asymmetry in the routes to MT+ activation.
Using VMD - An Introductory Tutorial
Hsin, Jen; Arkhipov, Anton; Yin, Ying; Stone, John E.; Schulten, Klaus
2010-01-01
VMD (Visual Molecular Dynamics) is a molecular visualization and analysis program designed for biological systems such as proteins, nucleic acids, lipid bilayer assemblies, etc. This unit will serve as an introductory VMD tutorial. We will present several step-by-step examples of some of VMD's most popular features, including visualizing molecules in three dimensions with different drawing and coloring methods, rendering publication-quality figures, animating and analyzing the trajectory of a molecular dynamics simulation, scripting in the text-based Tcl/Tk interface, and analyzing both sequence and structure data for proteins. PMID:19085979
Occlusion-free animation of driving routes for car navigation systems.
Takahashi, Shigeo; Yoshida, Kenichi; Shimada, Kenji; Nishita, Tomoyuki
2006-01-01
This paper presents a method for occlusion-free animation of geographical landmarks, and its application to a new type of car navigation system in which driving routes of interest are always visible. This is achieved by animating a nonperspective image in which geographical landmarks such as mountain tops and roads are rendered as if seen from different viewpoints. The technical contribution of this paper lies in formulating nonperspective terrain navigation as an inverse problem of continuously deforming a 3D terrain surface from the 2D screen arrangement of its associated geographical landmarks. The present approach provides a perceptually reasonable compromise between navigation clarity and visual realism, in which the nonperspective view is fully augmented by assigning appropriate textures and shading effects to the terrain surface according to its geometry. An eye-tracking experiment is conducted to prove that the present approach exhibits visually pleasing navigation frames while users can clearly recognize the shape of the driving route without occlusion, together with the spatial configuration of geographical landmarks in its neighborhood.
NASA Astrophysics Data System (ADS)
Bichisao, Marta; Stallone, Angela
2017-04-01
Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of scientific content by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what principles underlie that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" earthquakes. We try to implement these results in a choreographic model with the aim of converting earthquake sound into a visual dance system, which could provide a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience in a multisensory exploration of the earthquake phenomenon, through the stimulation of hearing, eyesight and the perception of movement (the neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.
Workshop on Molecular Animation
Bromberg, Sarina; Chiu, Wah; Ferrin, Thomas E.
2011-01-01
February 25–26, 2010, in San Francisco, the Resource for Biocomputing, Visualization and Informatics (RBVI) and the National Center for Macromolecular Imaging (NCMI) hosted a molecular animation workshop for 21 structural biologists, molecular animators, and creators of molecular visualization software. Molecular animation aims to visualize scientific understanding of biomolecular processes and structures. The primary goal of the workshop was to identify the tools necessary for producing high-quality molecular animations, understanding complex molecular and cellular structures, creating publication supplementary materials and conference presentations, and teaching science to students and the public. Another use of molecular animation emerged in the workshop: helping to focus scientific inquiry about the motions of molecules and enhancing informal communication within and between laboratories. PMID:20947014
ANIMATION AND VISUALIZATION OF WATER QUALITY IN DISTRIBUTION SYSTEMS
Water may undergo a number of changes in the distribution system, making the quality of the water at the customer's tap different from the quality of the water that leaves the treatment plant. Such changes in quality may be caused by chemical or biological variations or by a loss...
NASA Astrophysics Data System (ADS)
Whitford, Dennis J.
2002-05-01
Ocean waves are the most recognized phenomena in oceanography. Unfortunately, undergraduate study of ocean wave dynamics and forecasting involves mathematics and physics and can therefore pose difficulties for some students because of the subject's interrelated dependence on time and space. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Computer-generated visualization and animation offer a visually intuitive and pedagogically sound medium for presenting geoscience, yet there are very few oceanographic examples. A two-part article series is offered to explain ocean wave forecasting using computer-generated visualization and animation. This paper, Part 1, addresses forecasting of sea wave conditions and serves as the basis for the more difficult topic of swell wave forecasting addressed in Part 2. Computer-aided visualization and animation, accompanied by oral explanation, are a welcome pedagogical supplement to more traditional methods of instruction. In this article, several MATLAB® software programs have been written to visualize and animate the development and comparison of wave spectra, wave interference, and forecasting of sea conditions. These programs also set the stage for the more advanced and difficult animation topics in Part 2. The programs are user-friendly, interactive, easy to modify, and developed as instructional tools. By using these software programs, teachers can enhance their instruction of these topics with colorful visualizations and animation without requiring an extensive background in computer programming.
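The article's MATLAB programs are not reproduced in this record. As an illustration of the kind of sea-state spectrum such programs visualize, here is a minimal Python sketch of the Pierson-Moskowitz spectrum for a fully developed sea; the wind speed, the frequency grid, and the crude rectangle-rule integral are assumptions for demonstration, not values or code from the paper:

```python
import math

def pierson_moskowitz(omega, wind_speed, g=9.81):
    """Pierson-Moskowitz spectral density S(omega) for a fully developed sea,
    with omega in rad/s and wind_speed in m/s (at 19.5 m height)."""
    alpha, beta = 8.1e-3, 0.74
    return (alpha * g ** 2 / omega ** 5) * math.exp(-beta * (g / (wind_speed * omega)) ** 4)

def peak_frequency(wind_speed, g=9.81):
    """Angular frequency at which S(omega) is maximal (from dS/domega = 0)."""
    return (g / wind_speed) * (4 * 0.74 / 5) ** 0.25

U = 15.0                                      # assumed wind speed, m/s
omegas = [0.01 * k for k in range(20, 200)]   # 0.2 .. 1.99 rad/s grid
m0 = sum(pierson_moskowitz(w, U) * 0.01 for w in omegas)  # zeroth spectral moment
# significant wave height from the zeroth moment: Hs = 4 * sqrt(m0)
print(f"peak ~{peak_frequency(U):.2f} rad/s, significant height ~{4 * math.sqrt(m0):.1f} m")
```

A plotting layer (the part the article animates) would simply evaluate `pierson_moskowitz` over the grid for several wind speeds and draw the curves.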
Visual control of prey-capture flight in dragonflies.
Olberg, Robert M
2012-04-01
Interacting with a moving object poses a computational problem for an animal's nervous system. This problem has been elegantly solved by the dragonfly, a formidable visual predator on flying insects. The dragonfly computes an interception flight trajectory and steers to maintain it during its prey-pursuit flight. This review summarizes current knowledge about pursuit behavior and neurons thought to control interception in the dragonfly. When understood, this system has the potential for explaining how a small group of neurons can control complex interactions with moving objects. Copyright © 2011 Elsevier Ltd. All rights reserved.
Visualizing request-flow comparison to aid performance diagnosis in distributed systems.
Sambasivan, Raja R; Shafer, Ilari; Mazurek, Michelle L; Ganger, Gregory R
2013-12-01
Distributed systems are complex to develop and administer, and performance problem diagnosis is particularly challenging. When performance degrades, the problem might be in any of the system's many components or could be a result of poor interactions among them. Recent research efforts have created tools that automatically localize the problem to a small number of potential culprits, but research is needed to understand what visualization techniques work best for helping distributed systems developers understand and explore their results. This paper compares the relative merits of three well-known visualization approaches (side-by-side, diff, and animation) in the context of presenting the results of one proven automated localization technique called request-flow comparison. Via a 26-person user study, which included real distributed systems developers, we identify the unique benefits that each approach provides for different problem types and usage modes.
NASA Astrophysics Data System (ADS)
Anken, Ralf; Hilbig, Reinhard; Knie, Miriam; Weigele, Jochen
We have shown earlier that some fish of a given batch reveal motion sickness (a kinetosis) at the transition from earth gravity to diminished gravity. The percentage ratios of the various types of behaviour (normal swimming and kinetotic swimming; kinetotic specimens revealed looping responses or spinning movements) differed markedly depending on the quality of diminished gravity. At high-quality microgravity (HQM, 10^-6 g, ZARM drop-tower, Bremen, Germany), kinetoses were exhibited by some 90% of the animals, whereas kinetoses were not as frequently seen at higher g-levels (at 0.03-0.05 g during parabolic aircraft flights or during centrifugation in the drop capsule, only some 15-25% of the animals showed kinetoses). In the course of the present study, we further assessed the role of the visual system in maintaining postural control under HQM, when the remaining level of gravity is too low to be used as a vestibular cue. Therefore, larval cichlid fish siblings (Oreochromis mossambicus) were subjected to drop-tower flights at HQM under different kinds of illumination. Applying blue light (which increases the sensitivity of the visual system and produces a general arousal of the animal) resulted in a decrease of kinetotically swimming specimens as compared to white and red light (red light is almost invisible to fish). The final data, as well as results from analyses of inner-ear otoliths, will be communicated at the meeting. We expect that the few fish which swam normally under white or red light will have a very low otolith asymmetry (differences in the size of the right versus the left otoliths). Asymmetry may be considerably higher in animals swimming normally under blue light, since these specimens are presumed to rely entirely on visual input; an otolith asymmetry will thus not lead to the computation of erroneous vestibular cues. Acknowledgement: This work was financially supported by the German Aerospace Center (DLR) (FKZ: 50 WB 0527). The excellent technical assistance of Sandra Schroer is highly appreciated.
Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model.
Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M
2017-04-01
Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image-guided placement is commonly utilized but is limited by an inability to see the distal placement location. Unfortunately, TT is not without complications. We aim to demonstrate the feasibility of a disposable device allowing visually directed TT placement, compared to the standard of care, in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was conducted utilizing a novel visualization device, the tube thoracostomy visual trocar (TTVT), and the standard of care (open technique). The position of the TT in the chest cavity was recorded using direct thoracoscopic inspection and radiographic imaging, with the operator blinded to the results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TTs were placed, 15 using the TTVT technique and 15 using the standard of care open technique. All of the TTs placed using TTVT were without complication and in optimal position. Conversely, 27% of TTs placed using the standard of care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visually directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.
Visualization Tools for Teaching Computer Security
ERIC Educational Resources Information Center
Yuan, Xiaohong; Vega, Percy; Qadah, Yaseen; Archer, Ricky; Yu, Huiming; Xu, Jinsheng
2010-01-01
Using animated visualization tools has been an important teaching approach in computer science education. We have developed three visualization and animation tools that demonstrate various information security concepts and actively engage learners. The information security concepts illustrated include: packet sniffer and related computer network…
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
The effect of integration masking on visual processing in perceptual categorization.
Hélie, Sébastien
2017-08-01
Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus and that activity in areas typically associated with categorization are not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.
Ensminger, Amanda L; Fernández-Juricic, Esteban
2014-01-01
Between-individual variation has been documented in a wide variety of taxa, especially for behavioral characteristics; however, intra-population variation in sensory systems has not received similar attention in wild animals. We measured a key trait of the visual system, the density of retinal cone photoreceptors, in a wild population of house sparrows (Passer domesticus). We tested whether individuals differed from each other in cone densities given within-individual variation across the retina and across eyes. We further tested whether the existing variation could lead to individual differences in two aspects of perception: visual resolution and chromatic contrast. We found consistent between-individual variation in the densities of all five types of avian cones, involved in chromatic and achromatic vision. Using perceptual modeling, we found that this degree of variation translated into significant between-individual differences in visual resolution and the chromatic contrast of a plumage signal that has been associated with mate choice and agonistic interactions. However, there was no evidence for a relationship between individual visual resolution and chromatic contrast. The implication is that some birds may have the sensory potential to perform "better" in certain visual tasks, but not necessarily in both resolution and contrast simultaneously. Overall, our findings (a) highlight the need to consider multiple individuals when characterizing sensory traits of a species, and (b) provide some mechanistic basis for between-individual variation in different behaviors (i.e., animal personalities) and for testing the predictions of several widely accepted hypotheses (e.g., honest signaling). PMID:25372039
9 CFR 318.301 - Containers and closures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Section 318.301 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... examinations for rigid containers (cans)—(1) Visual examinations. A closure technician shall visually examine... container shall be examined for product leakage or obvious defects. A visual examination shall be performed...
LaZerte, Stefanie E; Reudink, Matthew W; Otter, Ken A; Kusack, Jackson; Bailey, Jacob M; Woolverton, Austin; Paetkau, Mark; de Jong, Adriaan; Hill, David J
2017-10-01
Radio frequency identification (RFID) provides a simple and inexpensive approach for examining the movements of tagged animals, which can provide information on species behavior and ecology, such as habitat/resource use and social interactions. In addition, tracking animal movements is appealing to naturalists, citizen scientists, and the general public and thus represents a tool for public engagement in science and science education. Although a useful tool, the large amount of data collected using RFID may quickly become overwhelming. Here, we present an R package (feedr) we have developed for loading, transforming, and visualizing time-stamped, georeferenced data, such as RFID data collected from static logger stations. Using our package, data can be transformed from raw RFID data to visits, presence (regular detections by a logger over time), movements between loggers, displacements, and activity patterns. In addition, we provide several conversion functions to allow users to format data for use in functions from other complementary R packages. Data can also be visualized through static or interactive maps or as animations over time. To increase accessibility, data can be transformed and visualized either through R directly, or through the companion site: http://animalnexus.ca, an online, user-friendly, R-based Shiny Web application. This system can be used by professional and citizen scientists alike to view and study animal movements. We have designed this package to be flexible and to be able to handle data collected from other stationary sources (e.g., hair traps, static very high frequency (VHF) telemetry loggers, observations of marked individuals in colonies or staging sites), and we hope this framework will become a meeting point for science, education, and community awareness of the movements of animals. We aim to inspire citizen engagement while simultaneously enabling robust scientific analysis.
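feedr itself is an R package, so its actual functions are not shown here. As a language-neutral sketch of the core transformation the abstract describes (collapsing raw time-stamped RFID reads into discrete "visits"), consider the Python fragment below; the 180-second gap threshold, the tuple layout, and the demo data are illustrative assumptions, not feedr's defaults:

```python
from datetime import datetime, timedelta

def reads_to_visits(reads, max_gap=timedelta(seconds=180)):
    """Collapse raw (animal_id, logger_id, timestamp) RFID reads into visits.

    Consecutive reads of the same animal at the same logger separated by
    no more than max_gap are merged into one (animal, logger, start, end)
    visit."""
    visits = []
    for animal, logger, ts in sorted(reads, key=lambda r: (r[0], r[1], r[2])):
        if (visits
                and visits[-1][0] == animal
                and visits[-1][1] == logger
                and ts - visits[-1][3] <= max_gap):
            visits[-1] = (animal, logger, visits[-1][2], ts)  # extend the open visit
        else:
            visits.append((animal, logger, ts, ts))           # start a new visit
    return visits

# three reads within seconds of each other, then one two hours later
t0 = datetime(2017, 5, 1, 6, 0, 0)
demo = [("bird1", "feeder_A", t0 + timedelta(seconds=s)) for s in (0, 30, 70)]
demo.append(("bird1", "feeder_A", t0 + timedelta(hours=2)))
print(reads_to_visits(demo))  # two visits
```

Movements between loggers and activity patterns, the other transformations mentioned, would be derived from the visit list in the same spirit.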
Using perceptual rules in interactive visualization
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Treinish, Lloyd A.
1994-05-01
In visualization, data are represented as variations in grayscale, hue, shape, and texture. They can be mapped to lines, surfaces, and glyphs, and can be represented statically or in animation. In modern visualization systems, the choices for representing data seem unlimited. This is both a blessing and a curse, however, since the visual impression created by the visualization depends critically on which dimensions are selected for representing the data (Bertin, 1967; Tufte, 1983; Cleveland, 1991). In modern visualization systems, the user can interactively select many different mapping and representation operations, and can interactively select processing operations (e.g., applying a color map), realization operations (e.g., generating geometric structures such as contours or streamlines), and rendering operations (e.g., shading or ray-tracing). The user can, for example, map data to a color map, then apply contour lines, then shift the viewing angle, then change the color map again, etc. In many systems, the user can vary the choices for each operation, selecting, for example, particular color maps, contour characteristics, and shading techniques. The hope is that this process will eventually converge on a visual representation which expresses the structure of the data and effectively communicates its message in a way that meets the user's goals. Sometimes, however, it results in visual representations which are confusing, misleading, and garish.
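As a toy example of the "processing operation" the abstract mentions (applying a color map), the fragment below linearly maps a scalar onto a blue-to-red RGB ramp. The function name, the two-color ramp, and the clamping behavior are illustrative assumptions, not part of any system the authors describe:

```python
def apply_colormap(value, vmin, vmax):
    """Map a scalar in [vmin, vmax] onto a simple blue-to-red ramp.

    Returns an (r, g, b) tuple with 8-bit channels; out-of-range values
    are clamped to the endpoints, as most visualization systems do."""
    t = (value - vmin) / (vmax - vmin)
    t = max(0.0, min(1.0, t))          # clamp to [0, 1]
    return (round(255 * t), 0, round(255 * (1 - t)))

# a tiny "data set" rendered as colors
field = [0.0, 2.5, 5.0, 7.5, 10.0]
colors = [apply_colormap(v, 0.0, 10.0) for v in field]
print(colors)
```

Swapping in a different color map is just swapping this function, which is precisely the kind of interactive choice the abstract argues can help or mislead.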
An overview of 3D software visualization.
Teyseyre, Alfredo R; Campo, Marcelo R
2009-01-01
Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space was actively studied, but in the last decade researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects: visual representations, interaction issues, evaluation methods, and development tools. We also survey some representative tools that support different tasks, e.g., software maintenance and comprehension, requirements validation, and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions.
Electrooptical model of the first retina layers of a visual analyzer
NASA Technical Reports Server (NTRS)
Chibalashvili, Y. L.; Riabinin, A. D.; Svechnikov, S. V.; Shkvar, A. M.
1979-01-01
An electrooptical principle of converting and transmitting optical signals is proposed and used as the basis for constructing a model of the upper layers of the retina of the visual analyzer of animals. An evaluation of multichannel fibrous optical systems, in which the conversion of optical signals is based on the electrooptical principle, to model the upper retina layers is presented. The symbolic circuit of the model and its algorithm are discussed.
Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?
Carman, Heidi M; Mactutus, Charles F
2002-09-01
Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.
Vitol, Elina A.; Rozhkova, Elena A.; Rose, Volker; ...
2014-06-06
Temperature-responsive magnetic nanomicelles can serve as thermal energy and cargo carriers with controlled drug release functionality. In view of their potential biomedical applications, understanding the modes of interaction between nanomaterials and living systems and evaluating the efficiency of cargo delivery are of the utmost importance. In this paper, we investigate the interaction between hybrid magnetic nanomicelles engineered for controlled platinum complex drug delivery and a biological system at three fundamental levels: subcellular compartments, a single cell, and a whole living animal. Nanomicelles with a polymeric P(NIPAAm-co-AAm)-b-PCL core-shell were loaded with a hydrophobic Pt(IV) complex and Fe3O4 nanoparticles through self-assembly. The distribution of the platinum complex at the subcellular level is visualized using hard X-ray fluorescence microscopy with an unprecedented level of detail at sub-100 nm spatial resolution. We then study the cytotoxic effects of platinum complex-loaded micelles in vitro on a head and neck cancer cell culture model, SQ20B. In conclusion, by employing the magnetic functionality of the micelles and additionally loading them with a near-infrared fluorescent dye, we magnetically target them to a tumor site in a live xenografted animal model, which allows their biodistribution to be visualized in vivo.
A neural computational model for animal's time-to-collision estimation.
Wang, Ling; Yao, Dezhong
2013-04-17
The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretic formula for TTC is 1/τ≈θ'/sin θ, where θ and θ' are the visual angle and its variation, respectively, and the widely used approximation computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new simple computational model: 1/τ≈Mθ-P/(θ+Q)+N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, weighted summation of visual angle model (WSVAM), can achieve perfect implementation through a widely accepted biological neuronal model. WSVAM has additional merits, including a natural minimum consumption and simplicity. Thus, it yields a precise and neuronal-implemented estimation for TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
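Under a simple looming geometry (an object of half-width r approaching at constant closing speed v from distance d), the quoted relation 1/τ = θ′/sin θ and the classic approximation θ′/θ can be checked numerically. The sketch below is only an illustration of those two formulas with made-up geometry values; the WSVAM constants M, P, Q, and N are not given in the abstract, so that model is not reproduced here:

```python
import math

def looming_angles(r, d, v):
    """Full visual angle theta subtended by an object of half-width r at
    distance d, and its rate of change theta_dot for closing speed v."""
    theta = 2 * math.atan(r / d)
    theta_dot = 2 * r * v / (d * d + r * r)   # d(theta)/dt
    return theta, theta_dot

r, v = 0.1, 10.0          # assumed half-width (m) and closing speed (m/s)
for d in (20.0, 2.0):
    theta, theta_dot = looming_angles(r, d, v)
    exact = theta_dot / math.sin(theta)   # equals 1/tau = v/d for this geometry
    approx = theta_dot / theta            # the widely used approximation
    print(f"d={d:4.1f} m  1/tau exact={exact:.4f}  approx={approx:.4f}")
```

At large distances (small angles) the two agree closely; near collision the θ′/θ approximation starts to undershoot, which is exactly the regime where a cheaper neuronal approximation such as WSVAM becomes interesting.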
Cha, Jaepyeong; Broch, Aline; Mudge, Scott; Kim, Kihoon; Namgoong, Jung-Man; Oh, Eugene; Kim, Peter
2018-01-01
Accurate, real-time identification and display of critical anatomic structures, such as the nerve and vasculature structures, are critical for reducing complications and improving surgical outcomes. Human vision is frequently limited in clearly distinguishing and contrasting these structures. We present a novel imaging system, which enables noninvasive visualization of critical anatomic structures during surgical dissection. Peripheral nerves are visualized by a snapshot polarimetry that calculates the anisotropic optical properties. Vascular structures, both venous and arterial, are identified and monitored in real-time using a near-infrared laser-speckle-contrast imaging. We evaluate the system by performing in vivo animal studies with qualitative comparison by contrast-agent-aided fluorescence imaging. PMID:29541506
Visual and acoustic communication in non-human animals: a comparison.
Rosenthal, G G; Ryan, M J
2000-09-01
The visual and auditory systems are two major sensory modalities employed in communication. Although communication in these two sensory modalities can serve analogous functions and evolve in response to similar selection forces, the two systems also operate under different constraints imposed by the environment and by the degree to which these sensory modalities are recruited for non-communication functions. The research traditions in each also tend to differ: studies of the mechanisms of acoustic communication tend to take a more reductionist tack, often concentrating on single signal parameters, whereas studies of visual communication tend to be more concerned with multivariate signal arrays in natural environments and with higher-level processing of such signals. Each research tradition would benefit from being more expansive in its approach.
Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).
Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen
2018-06-06
Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
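The "linear classifier read-out" the authors describe can be illustrated with a toy sketch: synthetic firing-rate vectors for two stimulus categories, separated by a plain perceptron trained on the population activity. All numbers below are fabricated for the demonstration; nothing is taken from the recorded pigeon data:

```python
import random

def train_perceptron(data, labels, epochs=50, lr=0.1):
    """Plain perceptron: learns w, b so that sign(w.x + b) matches labels in {-1, +1}."""
    w, b = [0.0] * len(data[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:  # misclassified
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

random.seed(0)
# synthetic "population activity": 20 model neurons; for animate stimuli (+1)
# the first 10 neurons shift their mean firing rate (a linearly separable code)
def trial(label):
    return [random.gauss(1.0 + (0.8 * label if i < 10 else 0.0), 0.3) for i in range(20)]

labels = [1, -1] * 50                    # +1 animate, -1 inanimate
data = [trial(y) for y in labels]
w, b = train_perceptron(data, labels)
accuracy = sum(predict(w, b, x) == y for x, y in zip(data, labels)) / len(labels)
print(f"read-out accuracy on synthetic data: {accuracy:.2f}")
```

The point, as in the paper, is that if a linear read-out succeeds on the population vectors, the category is explicitly represented at that stage of the hierarchy.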
Techniques for animation of CFD results. [computational fluid dynamics
NASA Technical Reports Server (NTRS)
Horowitz, Jay; Hanson, Jeffery C.
1992-01-01
Video animation is becoming increasingly vital to the computational fluid dynamics researcher, not just for presentation, but for recording and comparing dynamic visualizations that are beyond the current capabilities of even the most powerful graphic workstation. To meet these needs, Lewis Research Center has recently established a facility to provide users with easy access to advanced video animation capabilities. However, producing animation that is both visually effective and scientifically accurate involves various technological and aesthetic considerations that must be understood both by the researcher and those supporting the visualization process. These considerations include: scan conversion, color conversion, and spatial ambiguities.
New animal models to study the role of tyrosinase in normal retinal development.
Lavado, Alfonso; Montoliu, Lluis
2006-01-01
Albino animals display a hypopigmented phenotype associated with several visual abnormalities, including rod photoreceptor cell deficits, abnormal patterns of connections between the eye and the brain and a general underdevelopment of central retina. Oculocutaneous albinism type I, a common form of albinism, is caused by mutations in the tyrosinase gene. In mice, the albino phenotype can be corrected by functional tyrosinase transgenes. Tyrosinase transgenic animals not only show normal pigmentation but the correction of all visual abnormalities associated with albinism, confirming a role of tyrosinase, a key enzyme in melanin biosynthesis, in normal retinal development. Here, we will discuss recent work carried out with new tyrosinase transgenic mouse models, to further analyse the role of tyrosinase in retinal development. We will first report a transgenic model with inducible tyrosinase expression that has been used to address the regulated activation of this gene and its associated effects on the development of the visual system. Second, we will comment on an interesting yeast artificial chromosome (YAC)-tyrosinase transgene, lacking important regulatory elements, that has highlighted the significance of local interactions between the retinal pigment epithelium (RPE) and developing neural retina.
Towards a high sensitivity small animal PET system based on CZT detectors (Conference Presentation)
NASA Astrophysics Data System (ADS)
Abbaszadeh, Shiva; Levin, Craig
2017-03-01
Small animal positron emission tomography (PET) is a biological imaging technology that allows non-invasive interrogation of internal molecular and cellular processes and mechanisms of disease. New PET molecular probes with high specificity are under development to target, detect, visualize, and quantify subtle molecular and cellular processes associated with cancer, heart disease, and neurological disorders. However, the limited uptake of these targeted probes leads to significant reduction in signal. There is a need to advance the performance of small animal PET system technology to reach its full potential for molecular imaging. Our goal is to assemble a small animal PET system based on CZT detectors and to explore methods to enhance its photon sensitivity. In this work, we reconstruct an image from a phantom using a two-panel subsystem consisting of six CZT crystals in each panel. For image reconstruction, coincidence events with energy between 450 and 570 keV were included. We are developing an algorithm to improve sensitivity of the system by including multiple interaction events.
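The abstract states that only coincidence events with energies between 450 and 570 keV were included in reconstruction. A minimal sketch of such an energy cut follows; applying the window to each photon of the pair, and the event tuples themselves, are assumptions for illustration, not details given in the abstract:

```python
def in_energy_window(event_keV, lo=450.0, hi=570.0):
    """Accept a coincidence event only if every photon energy in it falls
    inside the [lo, hi] keV acceptance window."""
    return all(lo <= e <= hi for e in event_keV)

# hypothetical coincidence list: (energy of photon 1, energy of photon 2) in keV
events = [(511.0, 508.2), (430.5, 511.0), (560.0, 455.0)]
accepted = [ev for ev in events if in_energy_window(ev)]
print(accepted)
```

Tightening the window rejects more scattered photons at the cost of sensitivity, which is the trade-off the authors are exploring.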
Pupillometry reveals the physiological underpinnings of the aversion to holes.
Ayzenberg, Vladislav; Hickey, Meghan R; Lourenco, Stella F
2018-01-01
An unusual, but common, aversion to images with clusters of holes is known as trypophobia. Recent research suggests that trypophobic reactions are caused by visual spectral properties also present in aversive images of evolutionarily threatening animals (e.g., snakes and spiders). However, despite similar spectral properties, it remains unknown whether there is a shared emotional response to holes and threatening animals. Whereas snakes and spiders are known to elicit a fear reaction, associated with the sympathetic nervous system, anecdotal reports from self-described trypophobes suggest reactions more consistent with disgust, which is associated with activation of the parasympathetic nervous system. Here we used pupillometry in a novel attempt to uncover the distinct emotional response associated with a trypophobic response to holes. Across two experiments, images of holes elicited greater constriction compared to images of threatening animals and neutral images. Moreover, this effect held when controlling for level of arousal and accounting for the pupil grating response. This pattern of pupillary response is consistent with involvement of the parasympathetic nervous system and suggests a disgust, not a fear, response to images of holes. Although general aversion may be rooted in shared visual-spectral properties, we propose that the specific emotion is determined by cognitive appraisal of the distinct image content.
The case from animal studies for balanced binocular treatment strategies for human amblyopia.
Mitchell, Donald E; Duffy, Kevin R
2014-03-01
Although amblyopia typically manifests itself as a monocular condition, its origin has long been linked to unbalanced neural signals from the two eyes during early postnatal development, a view confirmed by studies conducted on animal models in the last 50 years. Despite recognition of its binocular origin, treatment of amblyopia continues to be dominated by a period of patching of the non-amblyopic eye that necessarily hinders binocular co-operation. This review summarizes evidence from three lines of investigation conducted on an animal model of deprivation amblyopia to support the thesis that treatment of amblyopia should instead focus upon procedures that promote and enhance binocular co-operation. First, experiments with mixed daily visual experience, in which episodes of abnormal visual input were pitted against normal binocular exposure, revealed that short exposures of the latter offset much longer periods of abnormal input to allow normal development of visual acuity in both eyes. Second, experiments on the use of part-time patching revealed that purposeful introduction of episodes of binocular vision each day could be very beneficial. Periods of binocular exposure that represented 30-50% of the daily visual exposure, included with daily occlusion of the non-amblyopic eye, could allow recovery of normal vision in the amblyopic eye. Third, very recent experiments demonstrate that a short 10-day period of total darkness can promote very fast and complete recovery of visual acuity in the amblyopic eye of kittens, and may represent an example of a class of artificial environments that have similar beneficial effects. Finally, an approach is described to allow timing of events in kitten and human visual system development to be scaled to optimize the ages for therapeutic interventions. © 2014 The Authors. Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
Visual Processing in Rapid-Chase Systems: Image Processing, Attention, and Awareness
Schmidt, Thomas; Haberkamp, Anke; Veltkamp, G. Marina; Weber, Andreas; Seydell-Greenwald, Anna; Schmidt, Filipp
2011-01-01
Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it was darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that “fast” visuomotor measures predominantly driven by feedforward processing should supplement “slow” psychophysical measures predominantly based on visual awareness. 
PMID:21811484
A Web-Based Platform for Visualizing Spatiotemporal Dynamics of Big Taxi Data
NASA Astrophysics Data System (ADS)
Xiong, H.; Chen, L.; Gui, Z.
2017-09-01
With more and more vehicles equipped with Global Positioning System (GPS) receivers, access to large-scale taxi trajectory data has become increasingly easy. Taxis are valuable sensors, and information associated with taxi trajectories can provide unprecedented insight into many aspects of city life. But analysing these data presents many challenges. Visualization of taxi data is an efficient way to represent its distributions and structures and to reveal hidden patterns in the data. However, most existing visualization systems have shortcomings. On the one hand, passenger loading status and speed information cannot be expressed. On the other hand, a mono-visualization form limits the information presented. In view of these problems, this paper designs and implements a visualization system in which we use colour and shape to indicate passenger loading status and speed information, and integrate various forms of taxi visualization. The main work is as follows: 1. Pre-processing and storing the taxi data in a MongoDB database. 2. Visualizing hotspots of taxi pickup points: through the DBSCAN clustering algorithm, we cluster the extracted passenger pickup locations to produce passenger hotspots. 3. Visualizing the dynamics of taxi trajectories using interactive animation: we use a thinning algorithm to reduce the amount of data and design a preloading strategy to load the data smoothly. Colour and shape are used to visualize the taxi trajectory data.
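The abstract above clusters pickup locations with DBSCAN to find passenger hotspots. The following is a minimal pure-Python sketch of the DBSCAN idea (density-reachable points form clusters; sparse points are noise); the coordinates, `eps`, and `min_pts` values are illustrative, not parameters from the paper, and a real system would use a spatial index rather than this O(n²) neighbor scan.

```python
# Minimal DBSCAN sketch for clustering 2-D pickup points (illustrative).
# eps: neighborhood radius; min_pts: minimum neighbors (self included)
# for a point to count as a "core" point.

def dbscan(points, eps, min_pts):
    """Return a cluster label per point; -1 marks noise."""
    labels = {}                      # point index -> cluster id
    cluster = 0

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    for i in range(len(points)):
        if i in labels:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # noise (may later become a border point)
            continue
        labels[i] = cluster          # start a new cluster from this core point
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if j in labels:
                if labels[j] == -1:
                    labels[j] = cluster   # noise absorbed as border point
                continue
            labels[j] = cluster
            js = neighbors(j)
            if len(js) >= min_pts:        # core point: expand the cluster
                queue.extend(k for k in js if k not in labels)
        cluster += 1
    return [labels[i] for i in range(len(points))]
```

With two tight groups of points far apart plus one outlier, the sketch yields two clusters and one noise label, which is exactly the hotspot/noise separation the paper relies on.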
Bennicelli, Jeannette; Wright, John Fraser; Komaromy, Andras; Jacobs, Jonathan B; Hauck, Bernd; Zelenaia, Olga; Mingozzi, Federico; Hui, Daniel; Chung, Daniel; Rex, Tonia S; Wei, Zhangyong; Qu, Guang; Zhou, Shangzhen; Zeiss, Caroline; Arruda, Valder R; Acland, Gregory M; Dell'Osso, Lou F; High, Katherine A; Maguire, Albert M; Bennett, Jean
2008-03-01
We evaluated the safety and efficacy of an optimized adeno-associated virus (AAV; AAV2.RPE65) in animal models of the RPE65 form of Leber congenital amaurosis (LCA). Protein expression was optimized by addition of a modified Kozak sequence at the translational start site of hRPE65. Modifications in AAV production and delivery included use of a long stuffer sequence to prevent reverse packaging from the AAV inverted-terminal repeats, and co-injection with a surfactant. The latter allows consistent and predictable delivery of a given dose of vector. We observed improved electroretinograms (ERGs) and visual acuity in Rpe65 mutant mice. This has not been reported previously using AAV2 vectors. Subretinal delivery of 8.25 × 10^10 vector genomes in affected dogs was well tolerated both locally and systemically, and treated animals showed improved visual behavior and pupillary responses, and reduced nystagmus within 2 weeks of injection. ERG responses confirmed the reversal of visual deficit. Immunohistochemistry confirmed transduction of retinal pigment epithelium cells and there was minimal toxicity to the retina as judged by histopathologic analysis. The data demonstrate that AAV2.RPE65 delivers the RPE65 transgene efficiently and quickly to the appropriate target cells in vivo in animal models. This vector holds great promise for treatment of LCA due to RPE65 mutations.
Animate and Inanimate Objects in Human Visual Cortex: Evidence for Task-Independent Category Effects
ERIC Educational Resources Information Center
Wiggett, Alison J.; Pritchard, Iwan C.; Downing, Paul E.
2009-01-01
Evidence from neuropsychology suggests that the distinction between animate and inanimate kinds is fundamental to human cognition. Previous neuroimaging studies have reported that viewing animate objects activates ventrolateral visual brain regions, whereas inanimate objects activate ventromedial regions. However, these studies have typically…
3D laser optoacoustic ultrasonic imaging system for preclinical research
NASA Astrophysics Data System (ADS)
Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.
2013-03-01
In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models developed for preclinical or other type of biomedical research. The system (LOUIS-3DM) combines a multi-wavelength optoacoustic and ultrawide-band laser ultrasound tomographies to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).
Povinelli, Daniel J; Dunphy-Lelii, Sarah; Reaux, James E; Mazza, Michael P
2002-01-01
We present the results of 5 experiments that assessed 7 chimpanzees' understanding of the visual experiences of others. The research was conducted when the animals were adolescents (7-8 years of age) and adults (12 years of age). The experiments examined their ability to recognize the equivalence between visual and tactile modes of gaining the attention of others (Exp. 1), their understanding that the vision of others can be impeded by opaque barriers (Exps. 2 and 5), and their ability to distinguish between postural cues which are and are not specifically relevant to visual attention (Exps. 3 and 4). The results suggest that although chimpanzees are excellent at exploiting the observable contingencies that exist between the facial and bodily postures of other agents on the one hand, and events in the world on the other, these animals may not construe others as possessing psychological states related to 'seeing' or 'attention.' Humans and chimpanzees share homologous suites of psychological systems that detect and process information about both the static and dynamic aspects of social life, but humans alone may possess systems which interpret behavior in terms of abstract, unobservable mental states such as seeing and attention. Copyright 2002 S. Karger AG, Basel
Strabismus and the Oculomotor System: Insights from Macaque Models
Das, Vallabh E.
2017-01-01
Disrupting binocular vision in infancy leads to strabismus and oftentimes to a variety of associated visual sensory deficits and oculomotor abnormalities. Investigation of this disorder has been aided by the development of various animal models, each of which has advantages and disadvantages. In comparison to studies of binocular visual responses in cortical structures, investigations of neural oculomotor structures that mediate the misalignment and abnormalities of eye movements have been more recent, and these studies have shown that different brain areas are intimately involved in driving several aspects of the strabismic condition, including horizontal misalignment, dissociated deviations, A and V patterns of strabismus, disconjugate eye movements, nystagmus, and fixation switch. The responses of cells in visual and oculomotor areas that potentially drive the sensory deficits and also eye alignment and eye movement abnormalities follow a general theme of disrupted calibration, lower sensitivity, and poorer specificity compared with the normally developed visual oculomotor system. PMID:28532347
High-quality animation of 2D steady vector fields.
Lefer, Wilfrid; Jobard, Bruno; Leduc, Claire
2004-01-01
Simulators for dynamic systems are now widely used in various application areas and raise the need for effective and accurate flow visualization techniques. Animation allows us to depict direction, orientation, and velocity of a vector field accurately. This paper extends an earlier proposal for a new approach to producing perfectly cyclic and variable-speed animations of 2D steady vector fields (see [1] and [2]). A complete animation of an arbitrary number of frames is encoded in a single image. The animation can be played using the color table animation technique, which is very effective even on low-end workstations. A cyclic set of textures can be produced as well, and then encoded in a common animation format or used for texture mapping on 3D objects. Compared to other approaches, the method presented in this paper produces smoother animations and is more efficient, both in the memory required to store the animation and in computation time.
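The color-table animation technique described above stores a whole cyclic animation as one indexed image: the pixel indices never change, and playback only rotates the palette. The following sketch shows that idea in miniature; the 1-D "image" and palette values are illustrative stand-ins for a 2-D flow texture and its color table, not the paper's actual encoding.

```python
# Color-table animation sketch (illustrative): frame t is obtained by
# reading each stored pixel index through a cyclically shifted palette,
# so N animation frames cost one indexed image plus one palette rotation
# per frame.

def frame(indices, palette, t):
    """Render frame t of a cyclic color-table animation."""
    n = len(palette)
    return [palette[(i + t) % n] for i in indices]
```

Because only the palette lookup changes between frames, the per-frame cost is independent of the animation length, which is why the technique remains effective on low-end workstations.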
Video-signal synchronizes registration of visual evoked responses.
Vít, F; Kuba, M; Kremlácek, J; Kubová, Z; Horevaj, M
1996-01-01
Autodesk Animator software offers a suitable technique for visual stimulation in the registration of visual evoked responses (VERs). However, it is not possible to generate pulses synchronous with the animated sequences on any output port of the computer. These pulses are necessary to synchronize the computer that records the VERs. We present the principle of a circuit that can synchronize the analyzer with the stimulation computer running Autodesk Animator software.
Animal Preparations to Assess Neurophysiological Effects of Bio-Dynamic Environments.
1980-07-17
deprivation in preventing the acquisition of visually-guided behaviors. The next study examined acquisition of visually-guided behaviors in six animals...Maffei, L. and Bisti, S. Binocular interaction in strabismic kittens deprived of vision. Science, 191, 579-580, 1976. Matin, L. A possible hybrid...function in cat visual cortex following prolonged deprivation . Exp. Brain Res., 25 (1976) 139-156. Hein, A. Visually controlled components of movement
ERIC Educational Resources Information Center
Johnson, A. M.; Ozogul, G.; Reisslein, M.
2015-01-01
An experiment examined the effects of visual signalling to relevant information in multiple external representations and the visual presence of an animated pedagogical agent (APA). Students learned electric circuit analysis using a computer-based learning environment that included Cartesian graphs, equations and electric circuit diagrams. The…
ERIC Educational Resources Information Center
Hew, Soon-Hin; Ohki, Mitsuru
2004-01-01
This study examines the effectiveness of imagery and electronic visual feedback in facilitating students' acquisition of Japanese pronunciation skills. The independent variables, animated graphic annotation (AGA) and immediate visual feedback (IVF) were integrated into a Japanese computer-assisted language learning (JCALL) program focused on the…
Modeling DNA structure and processes through animation and kinesthetic visualizations
NASA Astrophysics Data System (ADS)
Hager, Christine
There have been many studies regarding the effectiveness of visual aids that go beyond static illustrations. Many of these have concentrated on the effectiveness of visual aids such as animations and models, or even non-traditional visual aid activities like role-playing. This study focuses on the effectiveness of three different types of visual aids: models, animation, and a role-playing activity. Students used a modeling kit made of Styrofoam balls and toothpicks to construct nucleotides and then bond nucleotides together to form DNA. Next, students created their own animation to depict the processes of DNA replication, transcription, and translation. Finally, students worked in teams to build proteins while acting out the process of translation. Students were given a pre- and post-test that measured their knowledge and comprehension of the four topics mentioned above. Results show that there was a significant gain in the post-test scores when compared to the pre-test scores. This indicates that the incorporated visual aids were effective methods for teaching DNA structure and processes.
Visual landmark-directed scatter-hoarding of Siberian chipmunks Tamias sibiricus.
Zhang, Dongyuan; Li, Jia; Wang, Zhenyu; Yi, Xianfeng
2016-05-01
Spatial memory of cached food items plays an important role in cache recovery by scatter-hoarding animals. However, whether scatter-hoarding animals intentionally select cache sites with respect to visual landmarks in the environment and then rely on them to recover their cached seeds for later use has not been extensively explored. Furthermore, there is a lack of evidence on whether there are sex differences in visual landmark-based food-hoarding behaviors in small rodents, even though male and female animals exhibit different spatial abilities. In the present study, we used a scatter-hoarding animal, the Siberian chipmunk, Tamias sibiricus, to explore these questions in semi-natural enclosures. Our results showed that T. sibiricus preferred to establish caches in the shallow pits labeled with visual landmarks (branches of Pinus sylvestris, leaves of Athyrium brevifrons and PVC tubes). In addition, visual landmarks of P. sylvestris facilitated cache recovery by T. sibiricus. We also found significant sex differences in visual landmark-based food-hoarding strategies in Siberian chipmunks. Male, rather than female, chipmunks tended to establish their caches with respect to the visual landmarks. Our studies show that T. sibiricus rely on visual landmarks to establish and recover their caches, and that sex differences exist in visual landmark-based food hoarding in Siberian chipmunks. © 2015 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
Teaching AI Search Algorithms in a Web-Based Educational System
ERIC Educational Resources Information Center
Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis
2013-01-01
In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…
Virtual reality and 3D animation in forensic visualization.
Ma, Minhua; Zheng, Huiru; Lallie, Harjinder
2010-09-01
Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.
Characteristics of visual fatigue under the effect of 3D animation.
Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng
2015-01-01
Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2-D and 3-D animations may differ, but have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed for accommodation and vergence parameters again, directed to watch a 5-min 3-D video program, and then assessed once more. The results indicate that 3-D animations produced visual fatigue characteristics similar in some specific aspects to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion of both the ciliary and extra-ocular muscles, and these differential effects were more evident under high near-vision demand. The current results indicate that a set of indexes could be proposed for the design of 3-D displays and equipment.
User Centered, Application Independent Visualization of National Airspace Data
NASA Technical Reports Server (NTRS)
Murphy, James R.; Hinton, Susan E.
2011-01-01
This paper describes an application-independent software tool, IV4D, built to visualize animated and still 3D National Airspace System (NAS) data specifically for aeronautics engineers who research aggregate, as well as single, flight efficiencies and behavior. IV4D was originally developed in a joint effort between the National Aeronautics and Space Administration (NASA) and the Air Force Research Laboratory (AFRL) to support the visualization of air traffic data from the Airspace Concept Evaluation System (ACES) simulation program. The three main challenges tackled by IV4D developers were: 1) determining how to distill multiple NASA data formats into a few minimal dataset types; 2) creating an environment, consisting of a user interface, heuristic algorithms, and retained metadata, that facilitates easy setup and fast visualization; and 3) maximizing the user's ability to utilize the extended range of visualization available with AFRL's existing 3D technologies. IV4D is currently being used by air traffic management researchers at NASA's Ames and Langley Research Centers to support data visualizations.
How, Martin J; Porter, Megan L; Radford, Andrew N; Feller, Kathryn D; Temple, Shelby E; Caldwell, Roy L; Marshall, N Justin; Cronin, Thomas W; Roberts, Nicholas W
2014-10-01
The polarization of light provides information that is used by many animals for a number of different visually guided behaviours. Several marine species, such as stomatopod crustaceans and cephalopod molluscs, communicate using visual signals that contain polarized information, content that is often part of a more complex multi-dimensional visual signal. In this work, we investigate the evolution of polarized signals in species of Haptosquilla, a widespread genus of stomatopod, as well as related protosquillids. We present evidence for a pre-existing bias towards horizontally polarized signal content and demonstrate that the properties of the polarization vision system in these animals increase the signal-to-noise ratio of the signal. Combining these results with the increase in efficacy that polarization provides over intensity and hue in a shallow marine environment, we propose a joint framework for the evolution of the polarized form of these complex signals based on both efficacy-driven (proximate) and content-driven (ultimate) selection pressures. © 2014. Published by The Company of Biologists Ltd.
Marcelli, Fabienne; Escher, Pascal; Schorderet, Daniel F
2012-09-01
The mouse has emerged as an animal model for many diseases. At IRO, we have used this animal to understand the development of many eye diseases and the treatment of some of them. Precise evaluation of vision is a prerequisite for both of these approaches. In this unit we describe three ways to measure vision: testing the optokinetic response, and evaluating the fundus by direct observation and by fluorescein angiography. Curr. Protoc. Mouse Biol. 2:207-218. © 2012 by John Wiley & Sons, Inc.
Two takes on the social brain: a comparison of theory of mind tasks.
Gobbini, Maria Ida; Koralek, Aaron C; Bryan, Ronald E; Montgomery, Kimberly J; Haxby, James V
2007-11-01
We compared two tasks that are widely used in research on mentalizing--false belief stories and animations of rigid geometric shapes that depict social interactions--to investigate whether the neural systems that mediate the representation of others' mental states are consistent across these tasks. Whereas false belief stories activated primarily the anterior paracingulate cortex (APC), the posterior cingulate cortex/precuneus (PCC/PC), and the temporo-parietal junction (TPJ)--components of the distributed neural system for theory of mind (ToM)--the social animations activated an extensive region along nearly the full extent of the superior temporal sulcus, including a locus in the posterior superior temporal sulcus (pSTS), as well as the frontal operculum and inferior parietal lobule (IPL)--components of the distributed neural system for action understanding--and the fusiform gyrus. These results suggest that the representation of covert mental states that may predict behavior and the representation of intentions that are implied by perceived actions involve distinct neural systems. These results show that the TPJ and the pSTS play dissociable roles in mentalizing and are parts of different distributed neural systems. Because the social animations do not depict articulated body movements, these results also highlight that the perception of the kinematics of actions is not necessary to activate the mirror neuron system, suggesting that this system plays a general role in the representation of intentions and goals of actions. Furthermore, these results suggest that the fusiform gyrus plays a general role in the representation of visual stimuli that signify agency, independent of visual form.
Fleishman, Leo J.; Loew, Ellis R.; Whiting, Martin J.
2011-01-01
Progress in developing animal communication theory is frequently constrained by a poor understanding of sensory systems. For example, while lizards have been the focus of numerous studies in visual signalling, we only have data on the spectral sensitivities of a few species clustered in two major clades (Iguania and Gekkota). Using electroretinography and microspectrophotometry, we studied the visual system of the cordylid lizard Platysaurus broadleyi because it represents an unstudied clade (Scinciformata) with respect to visual systems and because UV signals feature prominently in its social behaviour. The retina possessed four classes of single and one class of double cones. Sensitivity in the ultraviolet region (UV) was approximately three times higher than previously reported for other lizards. We found more colourless oil droplets (associated with UV-sensitive (UVS) and short wavelength-sensitive (SWS) photoreceptors), suggesting that the increased sensitivity was owing to the presence of more UVS photoreceptors. Using the Vorobyev–Osorio colour discrimination model, we demonstrated that an increase in the number of UVS photoreceptors significantly enhances a lizard's ability to discriminate conspecific male throat colours. Visual systems in diurnal lizards appear to be broadly conserved, but data from additional clades are needed to confirm this. PMID:21389031
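The abstract above applies the Vorobyev-Osorio colour discrimination model to throat colours. Below is a hedged sketch of the standard trichromatic receptor-noise form of that model, where the chromatic distance between two stimuli depends on log quantum-catch ratios weighted by receptor noise; the quantum-catch and noise values in the test are made up for illustration, and the lizard analysis in the paper used its measured photoreceptor data, not these numbers.

```python
import math

# Trichromatic Vorobyev-Osorio receptor-noise model (illustrative sketch).
# qa, qb: quantum catches of the 3 receptor classes for stimuli A and B.
# noise:  receptor noise values (e1, e2, e3).
# delta_f_i = ln(q_iA / q_iB); distances are in "just noticeable
# difference" (JND) units, with ~1 JND at discrimination threshold.

def delta_s(qa, qb, noise):
    """Chromatic distance between stimuli A and B for a trichromat."""
    f = [math.log(a / b) for a, b in zip(qa, qb)]
    e1, e2, e3 = noise
    num = ((e1 * (f[1] - f[2])) ** 2
           + (e2 * (f[0] - f[2])) ** 2
           + (e3 * (f[0] - f[1])) ** 2)
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return math.sqrt(num / den)
```

Because the model is built on differences of log ratios, identical stimuli score zero and a uniform brightness change of one stimulus leaves the chromatic distance unchanged, which is the sense in which the model separates colour from brightness.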
Colour processing in complex environments: insights from the visual system of bees
Dyer, Adrian G.; Paulk, Angelique C.; Reser, David H.
2011-01-01
Colour vision enables animals to detect and discriminate differences in chromatic cues independent of brightness. How the bee visual system manages this task is of interest for understanding information processing in miniaturized systems, as well as the relationship between bee pollinators and flowering plants. Bees can quickly discriminate dissimilar colours, but can also slowly learn to discriminate very similar colours, raising the question as to how the visual system can support this, or whether it is simply a learning and memory operation. We discuss the detailed neuroanatomical layout of the brain, identify probable brain areas for colour processing, and suggest that there may be multiple systems in the bee brain that mediate either coarse or fine colour discrimination ability in a manner dependent upon individual experience. These multiple colour pathways have been identified along both functional and anatomical lines in the bee brain, providing us with some insights into how the brain may operate to support complex colour discrimination behaviours. PMID:21147796
Microstimulation with Chronically Implanted Intracortical Electrodes
NASA Astrophysics Data System (ADS)
McCreery, Douglas
Stimulating microelectrodes that penetrate into the brain afford a means of accessing the basic functional units of the central nervous system. Microstimulation in the regions of the cerebral cortex that subserve vision may be an alternative, or an adjunct, to a retinal prosthesis, and may be particularly attractive as a means of restoring a semblance of high-resolution central vision. There also is the intriguing possibility that such a prosthesis could convey higher order visual percepts, many of which are mediated by neural circuits in the secondary or "extra-striate" visual areas that surround the primary visual cortex. The technologies of intracortical stimulating microelectrodes and investigations of the effects of microstimulation on neural tissue have advanced to the point where a cortical-level prosthesis is at least feasible. The imperative of protecting neural tissue from stimulation-induced damage imposes constraints on the selection of stimulus parameters, as does the requirement that the stimulation not greatly affect the electrical excitability of the neurons that are to be activated. The latter is especially likely to occur when many adjacent microelectrodes are pulsed, as will be necessary in a visual prosthesis. However, data from animal studies indicate that these restrictions on stimulus parameters are compatible with those that can evoke visual percepts in humans and in experimental animals. These findings give cause to be optimistic about the prospects for realizing a visual prosthesis utilizing intracortical microstimulation.
Burmann, Britta; Dehnhardt, Guido; Mauck, Björn
2005-01-01
Mental rotation is a widely accepted concept indicating an image-like mental representation of visual information and an analogue mode of information processing in certain visuospatial tasks. In the task of discriminating between image and mirror-image of rotated figures, human reaction times increase with the angular disparity between the figures. In animals, tests of this kind yield inconsistent results. Pigeons were found to use a time-independent rotational invariance, possibly indicating a non-analogue information processing system that evolved in response to the horizontal plane of reference birds perceive during flight. Despite similar ecological demands concerning the visual reference plane, a sea lion was found to use mental rotation in similar tasks, but its processing speed while rotating three-dimensional stimuli seemed to depend on the axis of rotation in a different way than found for humans in similar tasks. If ecological demands influence the way information processing systems evolve, hominids might have secondarily lost the ability of rotational invariance while retreating from arboreal living and evolving an upright gait in which the vertical reference plane is more important. We therefore conducted mental rotation experiments with an arboreal living primate species, the lion-tailed macaque. Performing a two-alternative matching-to-sample procedure, the animal had to decide between rotated figures representing image and mirror-image of a previously shown upright sample. Although non-rotated stimuli were recognized faster than rotated ones, the animal's mean reaction times did not clearly increase with the angle of rotation. These results are inconsistent with the mental rotation concept but also cannot be explained assuming a mere rotational invariance. Our study thus seems to support the idea of information processing systems evolving gradually in response to specific ecological demands.
How semantic category modulates preschool children's visual memory.
Giganti, Fiorenza; Viggiano, Maria Pia
2015-01-01
The dynamic interplay between perception and memory has been explored in preschool children by presenting filtered stimuli depicting animals and artifacts. The identification of filtered images was markedly influenced by both prior exposure and the semantic nature of the stimuli. The identification of animals required less physical information than that of artifacts. Our results corroborate the notion that the human attention system has evolved to reliably develop definite category-specific selection criteria by which living entities are monitored in different ways.
Colour, vision and coevolution in avian brood parasitism.
Stoddard, Mary Caswell; Hauber, Mark E
2017-07-05
The coevolutionary interactions between avian brood parasites and their hosts provide a powerful system for investigating the diversity of animal coloration. Specifically, reciprocal selection pressure applied by hosts and brood parasites can give rise to novel forms and functions of animal coloration, which largely differ from those that arise when selection is imposed by predators or mates. In the study of animal colours, avian brood parasite-host dynamics therefore invite special consideration. Rapid advances across disciplines have paved the way for an integrative study of colour and vision in brood parasite-host systems. We now know that visually driven host defences and host life history have selected for a suite of phenotypic adaptations in parasites, including mimicry, crypsis and supernormal stimuli. This sometimes leads to vision-based host counter-adaptations and increased parasite trickery. Here, we review vision-based adaptations that arise in parasite-host interactions, emphasizing that these adaptations can be visual/sensory, cognitive or phenotypic in nature. We highlight recent breakthroughs in chemistry, genomics, neuroscience and computer vision, and we conclude by identifying important future directions. Moving forward, it will be essential to identify the genetic and neural bases of adaptation and to compare vision-based adaptations to those arising in other sensory modalities. This article is part of the themed issue 'Animal coloration: production, perception, function and application'. © 2017 The Author(s).
Applying Strategic Visualization (Registered Trademark) to Lunar and Planetary Mission Design
NASA Technical Reports Server (NTRS)
Frassanito, John R.; Cooke, D. R.
2002-01-01
NASA teams, such as the NASA Exploration Team (NEXT), utilize advanced computational visualization processes to develop mission designs and architectures for lunar and planetary missions. One such process, Strategic Visualization (trademark), is a tool used extensively to help mission designers visualize various design alternatives and present them to other participants of their team. The participants, which may include NASA, industry, and the academic community, are distributed within a virtual network. Consequently, computer animation and other digital techniques provide an efficient means to communicate top-level technical information among team members. Today, Strategic Visualization (trademark) is used extensively both in the mission design process within the technical community and to communicate the value of space exploration to the general public. Movies and digital images have been generated and shown on nationally broadcast television and the Internet, as well as in magazines and digital media. In our presentation we will show excerpts of a computer-generated animation depicting the reference Earth/Moon L1 Libration Point Gateway architecture. The Gateway serves as a staging corridor for human expeditions to the lunar poles and other surface locations. Also shown are crew transfer systems and current reference lunar excursion vehicles, as well as the human and robotic construction of an inflatable telescope array for deployment to the Sun/Earth Libration Point.
Fluoxetine increases plasticity and modulates the proteomic profile in the adult mouse visual cortex
Ruiz-Perera, L.; Muniz, M.; Vierci, G.; Bornia, N.; Baroncelli, L.; Sale, A.; Rossi, F.M.
2015-01-01
The scarce functional recovery of the adult CNS following injuries or diseases is largely due to its reduced potential for plasticity, the ability to reorganize neural connections as a function of experience. Recently, some new strategies restoring high levels of plasticity in the adult brain have been identified, especially in the paradigmatic model of the visual system. A chronic treatment with the anti-depressant fluoxetine reinstates plasticity in the adult rat primary visual cortex, inducing recovery of vision in amblyopic animals. The molecular mechanisms underlying this effect remain largely unknown. Here, we explored fluoxetine effects on mouse visual cortical plasticity, and exploited a proteomic approach to identify possible candidates mediating the outcome of the antidepressant treatment on adult cortical plasticity. We showed that fluoxetine restores ocular dominance plasticity in the adult mouse visual cortex, and identified 31 differentially expressed protein spots in fluoxetine-treated animals vs. controls. MALDI-TOF/TOF mass spectrometry identification followed by bioinformatics analysis revealed that these proteins are involved in the control of cytoskeleton organization, endocytosis, molecular transport, intracellular signaling, redox cellular state, metabolism and protein degradation. Altogether, these results indicate a complex effect of fluoxetine on neuronal signaling mechanisms potentially involved in restoring plasticity in the adult brain. PMID:26205348
Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi
2016-10-12
Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal's retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.
Can, Wang; Zhuoran, Zhao; Zheng, Jin
2017-04-01
In the past 10 years, thousands of people have claimed to be affected by trypophobia, which is the fear of objects with small holes. Recent research suggests that people do not fear the holes; rather, images of clustered holes, which share basic visual characteristics with venomous organisms, lead to nonconscious fear. In the present study, both self-reported measures and the Preschool Single Category Implicit Association Test were adapted for use with preschoolers to investigate whether discomfort related to trypophobic stimuli was grounded in their visual features or based on a nonconsciously associated fear of venomous animals. The results indicated that trypophobic stimuli were associated with discomfort in children. This discomfort seemed to be related to the typical visual characteristics and pattern properties of trypophobic stimuli rather than to nonconscious associations with venomous animals. The association between trypophobic stimuli and venomous animals vanished when the typical visual characteristics of trypophobic features were removed from colored photos of venomous animals. Thus, the discomfort felt toward trypophobic images might be an instinctive response to their visual characteristics rather than the result of a learned but nonconscious association with venomous animals. Therefore, it is questionable whether it is justified to legitimize trypophobia.
High fidelity simulations of infrared imagery with animated characters
NASA Astrophysics Data System (ADS)
Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.
2012-06-01
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics techniques can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high-level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton-based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA-enabled sensor system simulation framework.
Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.
2014-01-01
The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267
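The systems-identification step this abstract describes, recovering a behavioral impulse response from stimulus and response traces, can be sketched by least-squares deconvolution of a finite impulse response; each row of a STAF is one such estimate for a given azimuthal location. This is a generic linear-systems sketch under an assumed FIR model, not the authors' analysis pipeline, and the names are illustrative:

```python
import numpy as np

def impulse_response(stimulus, response, length):
    """Estimate a linear impulse response of `length` samples by
    least-squares deconvolution: model response[t] as the convolution
    sum over h[k] * stimulus[t - k]."""
    n = len(response)
    # Design matrix whose k-th column is the stimulus delayed by k samples.
    X = np.column_stack(
        [np.concatenate([np.zeros(k), stimulus[: n - k]]) for k in range(length)]
    )
    h, *_ = np.linalg.lstsq(X, response, rcond=None)
    return h
```

With a white-noise (or impulse-rich) stimulus this recovers the kernel exactly for a truly linear system, which is why velocity impulses are a convenient probe.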
JWFront: Wavefronts and Light Cones for Kerr Spacetimes
NASA Astrophysics Data System (ADS)
Frutos Alfaro, Francisco; Grave, Frank; Müller, Thomas; Adis, Daria
2015-04-01
JWFront visualizes wavefronts and light cones in general relativity. The interactive front-end allows users to enter the initial position values and choose the values for mass and angular momentum per unit mass. The wavefront animations are available in 2D and 3D; the light cones are visualized using the coordinate systems (t, x, y) or (t, z, x). JWFront can be easily modified to simulate wavefronts and light cones for other spacetimes by providing the Christoffel symbols in the program.
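The extension mechanism described here, supplying the Christoffel symbols and letting the program integrate null geodesics to draw the wavefront, can be sketched language-agnostically (JWFront itself is Java; this Python sketch and its names are illustrative, not JWFront's API). Each wavefront is a fan of rays advanced by the geodesic equation d²xᵘ/dλ² = −Γᵘ_αβ (dxᵅ/dλ)(dxᵝ/dλ):

```python
import numpy as np

def geodesic_step(christoffel, x, u, dl):
    """One RK4 step of the geodesic equation for position x and
    4-velocity u, with christoffel(x) returning G[mu, a, b]."""
    def accel(x, u):
        G = christoffel(x)  # shape (4, 4, 4)
        return -np.einsum('mab,a,b->m', G, u, u)
    k1x, k1u = u, accel(x, u)
    k2x, k2u = u + 0.5 * dl * k1u, accel(x + 0.5 * dl * k1x, u + 0.5 * dl * k1u)
    k3x, k3u = u + 0.5 * dl * k2u, accel(x + 0.5 * dl * k2x, u + 0.5 * dl * k2u)
    k4x, k4u = u + dl * k3u, accel(x + dl * k3x, u + dl * k3u)
    x_new = x + dl * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    u_new = u + dl * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
    return x_new, u_new

def flat_christoffel(x):
    return np.zeros((4, 4, 4))  # Minkowski space: all symbols vanish

def wavefront(christoffel, origin, n_rays=8, steps=100, dl=0.01):
    """Propagate a fan of null rays from `origin` in the x-y plane;
    returns the final (x, y) points, i.e. one wavefront snapshot."""
    front = []
    for phi in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        x = np.array(origin, dtype=float)
        u = np.array([1.0, np.cos(phi), np.sin(phi), 0.0])  # null in flat space
        for _ in range(steps):
            x, u = geodesic_step(christoffel, x, u, dl)
        front.append((x[1], x[2]))
    return front
```

Swapping `flat_christoffel` for the Schwarzschild or Kerr symbols is exactly the kind of substitution the abstract says JWFront supports; in flat spacetime the wavefront is simply an expanding circle.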
Sontag, Jennah M; Barnes, Spencer R
2017-09-26
Visual framing can improve health-message effectiveness. Narrative structure provides a template for determining how to frame visuals to maximise message effectiveness. Participants (N = 190) were assigned to a message condition determined by segment (establisher, initial, peak), graphic (static, animated) and cancer (lung, melanoma). ANOVAs revealed that melanoma messages were more believable than lung cancer messages with static graphics at the establisher and peak segments; narratives were more believable with animated graphics at the peak segment; melanoma elicited greater positive attitudes; and graphics at the peak segment produced the greatest behavioural intentions. Animated graphics visually framed to emphasise information at the establisher and peak segments therefore appear most effective.
Sensory system plasticity in a visually specialized, nocturnal spider.
Stafstrom, Jay A; Michalik, Peter; Hebets, Eileen A
2017-04-21
The interplay between an animal's environmental niche and its behavior can influence the evolutionary form and function of its sensory systems. While intraspecific variation in sensory systems has been documented across distant taxa, fewer studies have investigated how changes in behavior might relate to plasticity in sensory systems across developmental time. To investigate the relationships among behavior, peripheral sensory structures, and central processing regions in the brain, we take advantage of a dramatic within-species shift of behavior in a nocturnal, net-casting spider (Deinopis spinosa), where males cease visually-mediated foraging upon maturation. We compared eye diameters and brain region volumes across sex and life stage, the latter through micro-computed X-ray tomography. We show that mature males possess altered peripheral visual morphology when compared to their juvenile counterparts, as well as juvenile and mature females. Matching peripheral sensory structure modifications, we uncovered differences in relative investment in both lower-order and higher-order processing regions in the brain responsible for visual processing. Our study provides evidence for sensory system plasticity when individuals dramatically change behavior across life stages, uncovering new avenues of inquiry focusing on altered reliance of specific sensory information when entering a new behavioral niche.
Vugler, Anthony A; Coffey, Peter J
2003-11-01
The retinae of dystrophic Royal College of Surgeons (RCS) rats exhibit progressive photoreceptor degeneration accompanied by pathology of ganglion cells. To date, little work has examined the consequences of retinal degeneration for central visual structures in dystrophic rats. Here, we use immunohistochemistry for calretinin (CR) to label retinal afferents in the superior colliculus (SC), lateral geniculate nucleus, and olivary pretectal nucleus of RCS rats aged between 2 and 26 months. Early indications of fiber loss in the medial dystrophic SC were apparent between 9 and 13 months. Quantitative methods reveal a significant reduction in the level of CR immunoreactivity in visual layers of the medial dystrophic SC at 13 months (P < 0.02). In dystrophic animals aged 19-26 months the loss of CR fibers in SC was dramatic, with well-defined patches of fiber degeneration predominating in medial aspects of the structure. This fiber degeneration in SC was accompanied by increased detection of cells immunoreactive for CR. In several animals, regions of fiber loss were also found to contain strongly parvalbumin-immunoreactive cells. Loss of CR fibers was also observed in the lateral geniculate nucleus and olivary pretectal nucleus. Patterns of fiber loss in the dystrophic SC complement reports of ganglion cell degeneration in these animals, and the response of collicular neurons to degeneration is discussed in terms of plasticity of the dystrophic visual system and properties of calcium binding proteins.
Direct visualization of hemolymph flow in the heart of a grasshopper (Schistocerca americana)
Lee, Wah-Keat; Socha, John J
2009-01-01
Background Hemolymph flow patterns in opaque insects have never been directly visualized due to the lack of an appropriate imaging technique. The required spatial and temporal resolutions, together with the lack of contrast between the hemolymph and the surrounding soft tissue, are major challenges. Previously, indirect techniques have been used to infer insect heart motion and hemolymph flow, but such methods fail to reveal fine-scale kinematics of heartbeat and details of intra-heart flow patterns. Results With the use of microbubbles as high contrast tracer particles, we directly visualized hemolymph flow in a grasshopper (Schistocerca americana) using synchrotron x-ray phase-contrast imaging. In-vivo intra-heart flow patterns and the relationship between respiratory (tracheae and air sacs) and circulatory (heart) systems were directly observed for the first time. Conclusion Synchrotron x-ray phase contrast imaging is the only generally applicable technique that has the necessary spatial, temporal resolutions and sensitivity to directly visualize heart dynamics and flow patterns inside opaque animals. This technique has the potential to illuminate many long-standing questions regarding small animal circulation, encompassing topics such as retrograde heart flow in some insects and the development of flow in embryonic vertebrates. PMID:19272159
Balzarini, Valentina; Taborsky, Michael; Villa, Fabienne; Frommen, Joachim G.
2017-01-01
Visual signals, including changes in coloration and color patterns, are frequently used by animals to convey information. During contests, body coloration and its changes can be used to assess an opponent's state or motivation. Communication of aggressive propensity is particularly important in group-living animals with a stable dominance hierarchy, as the outcome of aggressive interactions determines the social rank of group members. Neolamprologus pulcher is a cooperatively breeding cichlid showing frequent within-group aggression. Both sexes exhibit two vertical black stripes on the operculum that vary naturally in shape and darkness. During frontal threat displays these patterns are actively exposed to the opponent, suggesting a signaling function. To investigate the role of operculum stripes during contests we manipulated their darkness in computer animated pictures of the fish. We recorded the responses in behavior and stripe darkness of test subjects to which these animated pictures were presented. Individuals with initially darker stripes were more aggressive against the animations and showed more operculum threat displays. Operculum stripes of test subjects became darker after exposure to an animation exhibiting a pale operculum than after exposure to a dark operculum animation, highlighting the role of the darkness of this color pattern in opponent assessment. We conclude that (i) the black stripes on the operculum of N. pulcher are a reliable signal of aggression and dominance, (ii) these markings play an important role in opponent assessment, and (iii) 2D computer animations are well suited to elicit biologically meaningful short-term aggressive responses in this widely used model system of social evolution. PMID:29491962
Michael H. McClellan
2004-01-01
In the old-growth temperate rainforests of southeast Alaska, concerns over clearcutting effects on habitat, visual quality, slope stability, and biodiversity have created a demand for the use of other silvicultural systems. The forest vegetation and animal taxa of southeast Alaska appear to be well adapted to frequent, widespread, small-scale disturbance, suggesting...
Using web-based animations to teach histology.
Brisbourne, Marc A S; Chin, Susan S-L; Melnyk, Erica; Begg, David A
2002-02-15
We have been experimenting with the use of animations to teach histology as part of an interactive multimedia program we are developing to replace the traditional lecture/laboratory-based histology course in our medical and dental curricula. This program, called HistoQuest, uses animations to illustrate basic histologic principles, explain dynamic processes, integrate histologic structure with physiological function, and assist students in forming mental models with which to organize and integrate new information into their learning. With this article, we first briefly discuss the theory of mental modeling, principles of visual presentation, and how mental modeling and visual presentation can be integrated to create effective animations. We then discuss the major Web-based animation technologies that are currently available and their suitability for different visual styles and navigational structures. Finally, we describe the process we use to produce animations for our program. The approach described in this study can be used by other developers to create animations for delivery over the Internet for the teaching of histology.
The human mirror neuron system: A link between action observation and social skills
Pineda, Jaime A.; Ramachandran, Vilayanur S.
2007-01-01
The discovery of the mirror neuron system (MNS) has led researchers to speculate that this system evolved from an embodied visual recognition apparatus in monkey to a system critical for social skills in humans. It is accepted that the MNS is specialized for processing animate stimuli, although the degree to which social interaction modulates the firing of mirror neurons has not been investigated. In the current study, EEG mu wave suppression was used as an index of MNS activity. Data were collected while subjects viewed four videos: (1) Visual White Noise: baseline, (2) Non-interacting: three individuals tossed a ball up in the air to themselves, (3) Social Action, Spectator: three individuals tossed a ball to each other and (4) Social Action, Interactive: similar to video 3 except occasionally the ball would be thrown off the screen toward the viewer. The mu wave was modulated by the degree of social interaction, with the Non-interacting condition showing the least suppression, followed by the Social Action, Spectator condition and the Social Action, Interactive condition showing the most suppression. These data suggest that the human MNS is specialized not only for processing animate stimuli, but specifically stimuli with social relevance. PMID:18985120
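The mu-suppression index used in this study is conventionally computed as the log ratio of mu-band (roughly 8-13 Hz) EEG power in a condition relative to baseline, with more negative values indicating stronger suppression. A minimal sketch with a naive FFT power estimate; the band edges are an assumption and the lab's actual spectral estimation was likely more elaborate:

```python
import numpy as np

def mu_suppression(baseline, condition, fs, band=(8.0, 13.0)):
    """Mu-suppression index: log ratio of mu-band power in an
    experimental condition relative to baseline (negative = suppression)."""
    def band_power(x):
        f = np.fft.rfftfreq(len(x), 1.0 / fs)
        p = np.abs(np.fft.rfft(x)) ** 2
        sel = (f >= band[0]) & (f <= band[1])
        return p[sel].sum()
    return float(np.log(band_power(condition) / band_power(baseline)))
```

Ranking the four video conditions by this index is then a direct readout of the ordering reported in the abstract (Non-interacting least suppressed, Social Action Interactive most suppressed).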
Instrumentation in molecular imaging.
Wells, R Glenn
2016-12-01
In vivo molecular imaging is a challenging task and no single type of imaging system provides an ideal solution. Nuclear medicine techniques like SPECT and PET provide excellent sensitivity but have poor spatial resolution. Optical imaging has excellent sensitivity and spatial resolution, but light photons interact strongly with tissues and so only small animals and targets near the surface can be accurately visualized. CT and MRI have exquisite spatial resolution, but greatly reduced sensitivity. To overcome the limitations of individual modalities, molecular imaging systems often combine individual cameras together, for example, merging nuclear medicine cameras with CT or MRI to allow the visualization of molecular processes with both high sensitivity and high spatial resolution.
Woo, Kevin L; Rieucau, Guillaume; Burke, Darren
2017-02-01
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
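The random-dot kinematograms described above have a standard construction: on each frame a chosen fraction of dots (the coherence) steps in the signal direction at the set speed, while the remainder step in random directions. A sketch under that usual rule; the names, the unit wrap-around field, and the per-frame random reassignment of signal dots are illustrative assumptions, not the authors' stimulus code:

```python
import numpy as np

def rdk_step(pos, speed, coherence, direction, size=1.0, rng=None):
    """Advance a random-dot kinematogram one frame: a `coherence`
    fraction of dots steps along `direction` (radians), the rest step
    in random directions; positions wrap at the field edges."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pos)
    signal = rng.random(n) < coherence          # which dots carry the signal
    angles = np.where(signal, direction, rng.uniform(0, 2 * np.pi, n))
    step = speed * np.column_stack([np.cos(angles), np.sin(angles)])
    return (pos + step) % size                   # wrap at the edges
```

Sweeping `speed` and `coherence` over a grid of such stimuli is what produces the speed-by-coherence interaction the discrimination trials measured.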
Induction of social behavior in zebrafish: live versus computer animated fish as stimuli.
Qin, Meiying; Wong, Albert; Seguin, Diane; Gerlai, Robert
2014-06-01
The zebrafish offers an excellent compromise between system complexity and practical simplicity and has been suggested as a translational research tool for the analysis of human brain disorders associated with abnormalities of social behavior. Unlike laboratory rodents zebrafish are diurnal, thus visual cues may be easily utilized in the analysis of their behavior and brain function. Visual cues, including the sight of conspecifics, have been employed to induce social behavior in zebrafish. However, the method of presentation of these cues and the question of whether computer animated images versus live stimulus fish have differential effects have not been systematically analyzed. Here, we compare the effects of five stimulus presentation types: live conspecifics in the experimental tank or outside the tank, playback of video-recorded live conspecifics, computer animated images of conspecifics presented by two software applications, the previously employed General Fish Animator, and a new application Zebrafish Presenter. We report that all stimuli were equally effective and induced a robust social response (shoaling) manifesting as reduced distance between stimulus and experimental fish. We conclude that presentation of live stimulus fish, or 3D images, is not required and 2D computer animated images are sufficient to induce robust and consistent social behavioral responses in zebrafish.
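The shoaling response reported above is quantified as a reduced distance between the stimulus and the experimental fish. A minimal sketch of such a metric over tracked positions is shown below; the function name and the synthetic coordinates are hypothetical illustrations, not part of the study's software.

```python
import numpy as np

def mean_stimulus_distance(test_xy, stimulus_xy):
    """Mean Euclidean distance over frames between a tracked test fish and
    the stimulus position; lower values indicate a stronger shoaling response."""
    test_xy = np.asarray(test_xy, dtype=float)
    stimulus_xy = np.asarray(stimulus_xy, dtype=float)
    return float(np.mean(np.linalg.norm(test_xy - stimulus_xy, axis=1)))

# Hypothetical two-frame tracks (cm): the fish approaches after stimulus onset.
before = mean_stimulus_distance([[30.0, 0.0], [28.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]])
after = mean_stimulus_distance([[6.0, 0.0], [4.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]])
assert after < before  # reduced distance indicates shoaling
```

Comparing this statistic between stimulus types is one simple way the equal effectiveness of live, video, and 2D animated stimuli could be assessed.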
Moy, Kyle; Li, Weiyu; Tran, Huu Phuoc; Simonis, Valerie; Story, Evan; Brandon, Christopher; Furst, Jacob; Raicu, Daniela; Kim, Hongkyun
2015-01-01
The nematode Caenorhabditis elegans provides a unique opportunity to interrogate the neural basis of behavior at single neuron resolution. In C. elegans, neural circuits that control behaviors can be formulated based on its complete neural connection map, and easily assessed by applying advanced genetic tools that allow for modulation in the activity of specific neurons. Importantly, C. elegans exhibits several elaborate behaviors that can be empirically quantified and analyzed, thus providing a means to assess the contribution of specific neural circuits to behavioral output. Particularly, locomotory behavior can be recorded and analyzed with computational and mathematical tools. Here, we describe a robust single worm-tracking system, which is based on the open-source Python programming language, and an analysis system, which implements path-related algorithms. Our tracking system was designed to accommodate worms that explore a large area with frequent turns and reversals at high speeds. As a proof of principle, we used our tracker to record the movements of wild-type animals that were freshly removed from abundant bacterial food, and determined how wild-type animals change locomotory behavior over a long period of time. Consistent with previous findings, we observed that wild-type animals show a transition from area-restricted local search to global search over time. Intriguingly, we found that wild-type animals initially exhibit short, random movements interrupted by infrequent long trajectories. This movement pattern often coincides with local/global search behavior, and visually resembles Lévy flight search, a search behavior conserved across species. Our mathematical analysis showed that while most of the animals exhibited Brownian walks, approximately 20% of the animals exhibited Lévy flights, indicating that C. elegans can use Lévy flights for efficient food search. 
In summary, our tracker and analysis software will help analyze the neural basis of the alteration and transition of C. elegans locomotory behavior in a food-deprived condition. PMID:26713869
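The Lévy-versus-Brownian distinction mentioned above can be sketched by comparing maximum-likelihood power-law and exponential fits to a track's step-length distribution: a heavy-tailed (power-law) winner suggests Lévy-flight-like search. This is a rough illustration of the idea only, not the paper's actual analysis pipeline; `levy_like` and its decision rule are hypothetical.

```python
import numpy as np

def step_lengths(path):
    """Euclidean step lengths from an (N, 2) array of tracked positions."""
    return np.linalg.norm(np.diff(np.asarray(path, dtype=float), axis=0), axis=1)

def levy_like(steps, xmin=None):
    """Fit a power law and a shifted exponential to step lengths by maximum
    likelihood and report whether the power law (heavy-tailed, Levy-like)
    fits better. Hypothetical helper, not the study's code."""
    s = np.asarray(steps, dtype=float)
    s = s[s > 0]
    xmin = s.min() if xmin is None else xmin
    s = s[s >= xmin]
    n = len(s)
    logs = np.sum(np.log(s / xmin))
    alpha = 1.0 + n / logs                # Hill estimator of the power-law exponent
    lam = 1.0 / (s.mean() - xmin)         # MLE rate of a shifted exponential
    ll_pow = n * np.log((alpha - 1.0) / xmin) - alpha * logs
    ll_exp = n * np.log(lam) - lam * np.sum(s - xmin)
    return ll_pow > ll_exp
```

For Pareto-distributed step lengths the power law wins; for Rayleigh-distributed steps typical of Brownian motion, the exponential wins. A careful analysis would also vet the power-law fit itself (e.g., a likelihood-ratio test with goodness-of-fit), which this sketch omits.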
How Animals Understand the Meaning of Indefinite Information from Environments?
NASA Astrophysics Data System (ADS)
Shimizu, H.; Yamaguchi, Y.
Animals, including human beings, have the ability to understand the meaning of indefinite information from their environments. Thanks to this ability, animals retain flexibility in their behaviors in the face of environmental changes. Starting from the hypothesis that understanding of input (Shannonian) information is based on the self-organization of a neuronal representation, that is, a spatio-temporal pattern in which coherent activities of neurons encoding a ``figure'' are separated from a ``background'' encoded by incoherent activities, the conditions necessary for the understanding of indefinite information were discussed. The crucial conditions revealed are, first, that the neuronal system is incomplete or indefinite, in the sense that its rules for the self-organization of neuronal activities are completed only after the input of environmental information, and second, that it has an additional system, named the ``self'', to relevantly self-organize dynamical ``constraints'' or ``boundary conditions'' for the self-organization of the representation. For the simultaneous self-organization of the relevant constraints and the representation, a global circulation of activities must be self-organized between these two kinds of neuronal systems. Moreover, for the performance of these functions, a specific kind of synergetic element, the ``holon element'', is also necessary. By means of a neuronal model, the visual perception of indefinite input signals is demonstrated. The results obtained are consistent with those recently observed in the visual cortex of cats.
Ngo, Kathy T.; Andrade, Ingrid; Hartenstein, Volker
2018-01-01
Visual information processing in animals with large image-forming eyes is carried out in highly structured, retinotopically ordered neuropils. Visual neuropils in Drosophila form the optic lobe, which consists of four serially arranged major subdivisions: the lamina, medulla, lobula and lobula plate; the latter three of these are further subdivided into multiple layers. The visual neuropils are formed by more than 100 different cell types, distributed and interconnected in an invariant, highly regular pattern. This pattern relies on a protracted sequence of developmental steps, whereby different cell types are born at specific time points and nerve connections are formed in a tightly controlled sequence that has to be coordinated among the different visual neuropils. The developing fly visual system has become a highly regarded and widely studied paradigm to investigate the genetic mechanisms that control the formation of neural circuits. However, these studies are often made difficult by the complex and shifting patterns in which different types of neurons and their connections are distributed throughout development. In the present paper we have reconstructed the three-dimensional architecture of the Drosophila optic lobe from the early larva to the adult. Based on specific markers, we were able to distinguish the populations of progenitors of the four optic neuropils and map the neurons and their connections. Our paper presents sets of annotated confocal z-projections and animated 3D digital models of these structures for representative stages. The data reveal the temporally coordinated growth of the optic neuropils, and clarify how the position and orientation of the neuropils and interconnecting tracts (inner and outer optic chiasm) changes over time. 
Finally, we have analyzed the emergence of the discrete layers of the medulla and lobula complex using the same markers (DN-cadherin, Brp) employed to systematically explore the structure and development of the central brain neuropil. Our work will facilitate experimental studies of the molecular mechanisms regulating neuronal fate and connectivity in the fly visual system, which bears many fundamental similarities with the retina of vertebrates. PMID:28533086
Surprising characteristics of visual systems of invertebrates.
González-Martín-Moro, J; Hernández-Verdejo, J L; Jiménez-Gahete, A E
2017-01-01
To communicate relevant and striking aspects about the visual system of some close invertebrates. Review of the related literature. The capacity of snails to regenerate a complete eye, the benefit of the oval shape of the compound eye of many flying insects as a way of stabilising the image during flight, the potential advantages related to the extreme refractive error that characterises the ocelli of many insects, as well as the ability to detect polarised light as a navigation system, are some of the surprising capabilities present in the small invertebrate eyes that are described in this work. The invertebrate eyes have capabilities and sensorial modalities that are not present in the human eye. The study of the eyes of these animals can help us to improve our understanding of our visual system, and inspire the development of optical devices. Copyright © 2016 Sociedad Española de Oftalmología. Publicado por Elsevier España, S.L.U. All rights reserved.
1986-06-01
Experiments - The Animal Model. Plasticity in animals during a "critical period" has been well demonstrated by Hubel and Wiesel and many other authors. (23...the cortical cells are "utterly plastic". Hubel and Wiesel (1970) suggested an analogous critical period for man which could be significantly longer...in their juvenile macaque monkeys, Hubel, Wiesel, and LeVay (1977) noted a significant change in the ocular dominance columns in layer IVC of
Outsourcing Systems Development for e-Learning Applications
ERIC Educational Resources Information Center
Brodahl, Cornelia; Oftedahl, Heidi
2012-01-01
This study investigated outsourcing of the development of visual, animated and interactive learning objects for mathematics education by a Norwegian university to software vendors in China. It sought to understand the challenges in this outsourcing engagement and competences needed to meet the challenges. The authors tested outsourcing strategies…
Supranormal orientation selectivity of visual neurons in orientation-restricted animals.
Sasaki, Kota S; Kimura, Rui; Ninomiya, Taihei; Tabuchi, Yuka; Tanaka, Hiroki; Fukui, Masayuki; Asada, Yusuke C; Arai, Toshiya; Inagaki, Mikio; Nakazono, Takayuki; Baba, Mika; Kato, Daisuke; Nishimoto, Shinji; Sanada, Takahisa M; Tani, Toshiki; Imamura, Kazuyuki; Tanaka, Shigeru; Ohzawa, Izumi
2015-11-16
Altered sensory experience in early life often leads to remarkable adaptations so that humans and animals can make the best use of the available information in a particular environment. By restricting visual input to a limited range of orientations in young animals, this investigation shows that stimulus selectivity, e.g., the sharpness of tuning of single neurons in the primary visual cortex, is modified to match a particular environment. Specifically, neurons tuned to an experienced orientation in orientation-restricted animals show sharper orientation tuning than neurons in normal animals, whereas the opposite was true for neurons tuned to non-experienced orientations. This sharpened tuning appears to be due to elongated receptive fields. Our results demonstrate that restricted sensory experiences can sculpt the supranormal functions of single neurons tailored for a particular environment. The above findings, in addition to the minimal population response to orientations close to the experienced one, agree with the predictions of a sparse coding hypothesis in which information is represented efficiently by a small number of activated neurons. This suggests that early brain areas adopt an efficient strategy for coding information even when animals are raised in a severely limited visual environment where sensory inputs have an unnatural statistical structure.
Flat lizard female mimics use sexual deception in visual but not chemical signals
Whiting, Martin J.; Webb, Jonathan K.; Keogh, J. Scott
2009-01-01
Understanding what constrains signalling and maintains signal honesty is a central theme in animal communication. Clear cases of dishonest signalling, and the conditions under which they are used, represent an important avenue for improved understanding of animal communication systems. Female mimicry, when certain males take on the appearance of females, is most commonly a male alternative reproductive tactic that is condition-dependent. A number of adaptive explanations for female mimicry have been proposed including avoiding the costs of aggression, gaining an advantage in combat, sneaking copulations with females on the territories of other males, gaining physiological benefits and minimizing the risk of predation. Previous studies of female mimicry have focused on a single mode of communication, although most animals communicate using multiple signals. Male Augrabies flat lizards adopt alternative reproductive tactics in which some males (she-males) mimic the visual appearance of females. We experimentally tested in a wild population whether she-males are able to mimic females using both visual and chemical signals. We tested chemical recognition in the field by removing scent and relabelling females and she-males with either male or female scent. At a distance, typical males (he-males) could not distinguish she-males from females using visual signals, but during close encounters, he-males correctly determined the gender of she-males using chemical signals. She-males are therefore able to deceive he-males using visual but not chemical signals. To effectively deceive he-males, she-males avoid close contact with he-males during which chemical cues would reveal their deceit. This strategy is probably adaptive, because he-males are aggressive and territorial; by mimicking females, she-males are able to move about freely and gain access to females on the territories of resident males. PMID:19324828
Visualizing complex (hydrological) systems with correlation matrices
NASA Astrophysics Data System (ADS)
Haas, J. C.
2016-12-01
Understanding or visualizing the connections between different aspects of a complex system often requires deep prior understanding to start with or - in the case of geo data - complicated GIS software. To our knowledge, correlation matrices have rarely been used in hydrology (e.g. Stoll et al., 2011; van Loon and Laaha, 2015), yet they provide an interesting option for data visualization and analysis. We present a simple, Python-based way - using a river catchment as an example - to visualize correlations and similarities in an easy and colorful way. We apply existing, easy-to-use Python packages from various disciplines not necessarily linked to the Earth sciences, and can thus quickly show how different aquifers work or react and identify outliers, enabling this system to also be used for quality control of large datasets. Going beyond earlier work, we add a temporal and spatial element, enabling us to visualize how a system reacts to local phenomena such as a river, or changes over time, by visualizing the passing of time in an animated movie. References: van Loon, A.F., Laaha, G.: Hydrological drought severity explained by climate and catchment characteristics, Journal of Hydrology 526, 3-14, 2015. Stoll, S., Hendricks Franssen, H. J., Barthel, R., Kinzelbach, W.: What can we learn from long-term groundwater data to improve climate change impact studies?, Hydrology and Earth System Sciences 15(12), 3861-3875, 2011.
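A minimal sketch of the kind of correlation-matrix view this abstract describes, using NumPy for the computation; the well names and synthetic series are invented for illustration, and rendering (e.g. with matplotlib's imshow) is left as a comment rather than asserted as the authors' implementation.

```python
import numpy as np

def correlation_matrix(series):
    """Pearson correlation matrix for a list of equal-length time series."""
    return np.corrcoef(np.asarray(series, dtype=float))

# Synthetic groundwater-level series for three hypothetical wells:
# two respond to the same seasonal signal, one is unrelated.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 200)
wells = [np.sin(t) + 0.1 * rng.standard_normal(t.size),
         np.sin(t + 0.2) + 0.1 * rng.standard_normal(t.size),
         rng.standard_normal(t.size)]
C = correlation_matrix(wells)
# A colored matrix view could then be rendered with, e.g.,
# matplotlib.pyplot.imshow(C, vmin=-1, vmax=1, cmap="RdBu_r");
# recomputing C over sliding time windows and stitching the frames
# together adds the temporal (animated) element described above.
```

Similar aquifers show up as bright blocks of high correlation, while outliers (such as the unrelated third series here) stand out as rows of near-zero values.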
An autism-associated serotonin transporter variant disrupts multisensory processing.
Siemann, J K; Muller, C L; Forsberg, C G; Blakely, R D; Veenstra-VanderWeele, J; Wallace, M T
2017-03-21
Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.
Design and outcomes of an acoustic data visualization seminar.
Robinson, Philip W; Pätynen, Jukka; Haapaniemi, Aki; Kuusinen, Antti; Leskinen, Petri; Zan-Bi, Morley; Lokki, Tapio
2014-01-01
Recently, the Department of Media Technology at Aalto University offered a seminar entitled Applied Data Analysis and Visualization. The course used spatial impulse response measurements from concert halls as the context to explore high-dimensional data visualization methods. Students were encouraged to represent source and receiver positions, spatial aspects, and temporal development of sound fields, frequency characteristics, and comparisons between halls, using animations and interactive graphics. The primary learning objectives were for the students to translate their skills across disciplines and gain a working understanding of high-dimensional data visualization techniques. Accompanying files present examples of student-generated, animated and interactive visualizations.
Ishiwata, Ryosuke R; Morioka, Masaki S; Ogishima, Soichi; Tanaka, Hiroshi
2009-02-15
BioCichlid is a 3D visualization system for time-course microarray data on molecular networks, aiming at the interpretation of gene expression data through transcriptional relationships based on the central dogma, together with physical and genetic interactions. BioCichlid visualizes both physical (protein) and genetic (regulatory) network layers, and provides animation of time-course gene expression data on the genetic network layer. Transcriptional regulations are represented so as to bridge the physical network (transcription factors) and genetic network (regulated genes) layers, thus integrating promoter analysis into the pathway mapping. BioCichlid enhances the interpretation of microarray data and allows for revealing the underlying mechanisms causing differential gene expression. BioCichlid is freely available and can be accessed at http://newton.tmd.ac.jp/. Source code for both the BioCichlid server and client is also available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerr, J.; Jones, G.L.
1996-01-01
Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.
Eye closure in darkness animates olfactory and gustatory cortical areas.
Wiesmann, M; Kopietz, R; Albrecht, J; Linn, J; Reime, U; Kara, E; Pollatos, O; Sakar, V; Anzinger, A; Fesl, G; Brückmann, H; Kobal, G; Stephan, T
2006-08-01
In two previous fMRI studies, it was reported that eyes-open and eyes-closed conditions in darkness had differential effects on brain activity, and typical patterns of cortical activity were identified. Without external stimulation, ocular motor and attentional systems were activated when the eyes were open. In contrast, the visual, somatosensory, vestibular, and auditory systems were activated when the eyes were closed. In this study, we investigated whether cortical areas related to the olfactory and gustatory system are also animated by eye closure without any other external stimulation. In a first fMRI experiment (n = 22), we identified cortical areas, including the piriform cortex, activated by olfactory stimulation. In a second experiment (n = 12), subjects lying in darkness in the MRI scanner alternately opened and closed their eyes. In accordance with previous studies, we found activation clusters bilaterally in visual, somatosensory, vestibular and auditory cortical areas for the contrast eyes-closed vs. eyes-open. In addition, we were able to show that cortical areas related to the olfactory and gustatory system were also animated by eye closure. These results support the hypothesis that there are two different states of mental activity: with the eyes closed, an "interoceptive" state characterized by imagination and multisensory activity, and with the eyes open, an "exteroceptive" state characterized by attention and ocular motor activity. Our study also suggests that the chosen baseline condition may have a considerable impact on activation patterns and on the interpretation of brain activation studies. This needs to be considered for studies of the olfactory and gustatory system.
GVS - GENERAL VISUALIZATION SYSTEM
NASA Technical Reports Server (NTRS)
Keith, S. R.
1994-01-01
The primary purpose of GVS (General Visualization System) is to support scientific visualization of data output by the panel method PMARC_12 (inventory number ARC-13362) on the Silicon Graphics Iris computer. GVS allows the user to view PMARC geometries and wakes as wire frames or as light shaded objects. Additionally, geometries can be color shaded according to phenomena such as pressure coefficient or velocity. Screen objects can be interactively translated and/or rotated to permit easy viewing. Keyframe animation is also available for studying unsteady cases. The purpose of scientific visualization is to allow the investigator to gain insight into the phenomena they are examining, therefore GVS emphasizes analysis, not artistic quality. GVS uses existing IRIX 4.0 image processing tools to allow for conversion of SGI RGB files to other formats. GVS is a self-contained program which contains all the necessary interfaces to control interaction with PMARC data. This includes 1) the GVS Tool Box, which supports color histogram analysis, lighting control, rendering control, animation, and positioning, 2) GVS on-line help, which allows the user to access control elements and get information about each control simultaneously, and 3) a limited set of basic GVS data conversion filters, which allows for the display of data requiring simpler data formats. Specialized controls for handling PMARC data include animation and wakes, and visualization of off-body scan volumes. GVS is written in C-language for use on SGI Iris series computers running IRIX. It requires 28Mb of RAM for execution. Two separate hardcopy documents are available for GVS. The basic document price for ARC-13361 includes only the GVS User's Manual, which outlines major features of the program and provides a tutorial on using GVS with PMARC_12 data. 
Programmers interested in modifying GVS for use with data in formats other than PMARC_12 format may purchase a copy of the draft GVS 3.1 Software Maintenance Manual separately, if desired, for $26. An electronic copy of the User's Manual, in Macintosh Word format, is included on the distribution media. Purchasers of GVS are advised that changes and extensions to GVS are made at their own risk. In addition, GVS includes an on-line help system and sample input files. The standard distribution medium for GVS is a .25 inch streaming magnetic tape cartridge in IRIX tar format. GVS was developed in 1992.
Chlorophyll derivatives enhance invertebrate red-light and ultraviolet phototaxis.
Degl'Innocenti, Andrea; Rossi, Leonardo; Salvetti, Alessandra; Marino, Attilio; Meloni, Gabriella; Mazzolai, Barbara; Ciofani, Gianni
2017-06-13
Chlorophyll derivatives are known to enhance vision in vertebrates. They are thought to bind visual pigments (i.e., opsin apoproteins bound to retinal chromophores) directly within the retina. Consistent with previous findings in vertebrates, here we show that chlorin e6, a chlorophyll derivative, enhances photophobicity in a flatworm (Dugesia japonica), specifically when exposed to UV radiation (λ = 405 nm) or red light (λ = 660 nm). This is the first report of chlorophyll derivatives acting as modulators of invertebrate phototaxis, and in general the first account demonstrating that they can artificially alter animal response to light at a behavioral level. Our findings show that the interaction between chlorophyll derivatives and opsins virtually concerns the vast majority of bilaterian animals, and also occurs in visual systems based on rhabdomeric (rather than ciliary) opsins.
NASA Astrophysics Data System (ADS)
Ariffin, A.; Samsudin, M. A.; Zain, A. N. Md.; Hamzah, N.; Ismail, M. E.
2017-05-01
The Engineering Drawing subject develops geometry drawing skills to a more professional level. To grasp concepts in Engineering Drawing, students need good visualization skills. Visualization helps students get started before translating a concept into a drawing. Therefore, Problem Based Learning (PBL) using an animation mode (PBL-A) and a graphics mode (PBL-G) was implemented in class. A repeated problem-solving process helps students interpret engineering drawing work steps correctly and accurately. This study examined the effects of online PBL-A and online PBL-G on the visualization skills of students in polytechnics. Sixty-eight mechanical engineering students were involved in this study. The visualization test adapted from Bennett, Seashore and Wesman was used. Results showed significant differences in post-test mean scores of visualization skills between the students enrolled in online PBL-G and those who attended online PBL-A, after controlling for the pre-test mean score. The animation mode therefore has a positive impact on increasing students' visualization skills.
Perceptual learning in a non-human primate model of artificial vision
Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.
2016-01-01
Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058
Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss
Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde
2015-01-01
The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high-quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, comparing early postnatal stages up to adulthood. The structural and physiological consequences of this type of extensive sensory loss, as documented and studied in several animal species and human patients, will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788
The Effects of Attention Cueing on Visualizers' Multimedia Learning
ERIC Educational Resources Information Center
Yang, Hui-Yu
2016-01-01
The present study examines how various types of attention cueing and cognitive preference affect learners' comprehension of a cardiovascular system and cognitive load. EFL learners were randomly assigned to one of four conditions: non-signal, static-blood-signal, static-blood-static-arrow-signal, and animation-signal. The results indicated that…
Image-based red cell counting for wild animals blood.
Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia
2010-01-01
An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua); the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a fully automatic counting tool in laboratories for wild animal blood analysis or as a first counting stage in a semi-automatic counting tool.
Prentice Award Lecture 2011: Removing the Brakes on Plasticity in the Amblyopic Brain
Levi, Dennis M.
2012-01-01
Experience-dependent plasticity is closely linked with the development of sensory function. Beyond this sensitive period, developmental plasticity is actively limited; however, new studies provide growing evidence for plasticity in the adult visual system. The amblyopic visual system is an excellent model for examining the “brakes” that limit recovery of function beyond the critical period. While amblyopia can often be reversed when treated early, conventional treatment is generally not undertaken in older children and adults. However new clinical and experimental studies in both animals and humans provide evidence for neural plasticity beyond the critical period. The results suggest that perceptual learning and video game play may be effective in improving a range of visual performance measures and importantly the improvements may transfer to better visual acuity and stereopsis. These findings, along with the results of new clinical trials, suggest that it might be time to re-consider our notions about neural plasticity in amblyopia. PMID:22581119
A magnetic tether system to investigate visual and olfactory mediated flight control in Drosophila.
Duistermars, Brian J; Frye, Mark
2008-11-21
It has been clear for many years that insects use visual cues to stabilize their heading in a wind stream. Many animals track odors carried in the wind. As such, visual stabilization of upwind tracking directly aids in odor tracking. But do olfactory signals directly influence visual tracking behavior independently from wind cues? Also, the recent deluge of research on the neurophysiology and neurobehavioral genetics of olfaction in Drosophila has motivated ever more technically sophisticated and quantitative behavioral assays. Here, we modified a magnetic tether system originally devised for vision experiments by equipping the arena with narrow laminar flow odor plumes. A fly is glued to a small steel pin and suspended in a magnetic field that enables it to yaw freely. Small diameter food odor plumes are directed downward over the fly's head, eliciting stable tracking by a hungry fly. Here we focus on the critical mechanics of tethering, aligning the magnets, devising the odor plume, and confirming stable odor tracking.
A Prototype Visualization of Real-time River Drainage Network Response to Rainfall
NASA Astrophysics Data System (ADS)
Demir, I.; Krajewski, W. F.
2011-12-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. The locations of the communities, those near streams and rivers, define basin boundaries. The IFIS streams rainfall data from NEXRAD radar and provides three interfaces, including animations of rainfall intensity, daily rainfall totals, and rainfall accumulations over the past 14 days for Iowa. A real-time interactive visualization interface was developed using past rainfall intensity data. The interface creates community-based rainfall products on demand, using the watershed boundary of each community as a mask. Each individual rainfall pixel is tracked along the drainage network, and those that drain to the same pixel location are accumulated. The interface loads recent rainfall data at five-minute intervals and combines them with current values. The latest web technologies, including HTML5 Canvas and JavaScript, were used to develop the interface, and its performance is optimized to run smoothly on modern web browsers. The interface controls allow users to change internal parameters of the system and the operating conditions of the animation. The interface will help communities understand the effects of rainfall on water transport in stream and river networks and make better-informed decisions regarding the threat of floods. This presentation provides an overview of a unique visualization interface and discusses future plans for real-time dynamic presentations of streamflow forecasting.
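The pixel-routing step this abstract describes (tracking each rainfall pixel along the drainage network and accumulating those that drain to the same location) can be sketched as follows. This is a minimal illustration under stated assumptions: the cell names, rainfall values, and flow-direction mapping are hypothetical, and the actual IFIS interface is implemented in JavaScript on HTML5 Canvas, not in this form.

```python
# Sketch of routing rainfall along a drainage network (hypothetical data,
# not the IFIS implementation). Each cell points to its downstream neighbor;
# a cell's accumulated total is its own rainfall plus everything that drains
# through it from upstream.

def accumulate_rainfall(rain, downstream):
    """Accumulate rainfall along a flow-direction network.

    rain:       dict mapping cell -> local rainfall depth at that cell
    downstream: dict mapping cell -> its downstream cell (None at the outlet)
    Returns a dict mapping cell -> total rainfall draining through it.
    """
    total = dict(rain)  # start every cell with its local rainfall
    for cell, depth in rain.items():
        nxt = downstream.get(cell)
        while nxt is not None:          # walk this pixel's flow path to the outlet
            total[nxt] = total.get(nxt, 0.0) + depth
            nxt = downstream.get(nxt)
    return total

# Tiny example network: a -> b -> c (outlet)
rain = {"a": 2.0, "b": 1.0, "c": 0.5}
downstream = {"a": "b", "b": "c", "c": None}
print(accumulate_rainfall(rain, downstream))
```

With this toy network the outlet cell "c" accumulates all upstream rainfall (2.0 + 1.0 + 0.5 = 3.5), mirroring how a community-scale view would sum every pixel draining to a point of interest.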
A Web-based Data Intensive Visualization of Real-time River Drainage Network Response to Rainfall
NASA Astrophysics Data System (ADS)
Demir, I.; Krajewski, W. F.
2012-04-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. The locations of the communities, those near streams and rivers, define basin boundaries. The IFIS streams rainfall data from NEXRAD radar and provides three interfaces, including animations of rainfall intensity, daily rainfall totals, and rainfall accumulations over the past 14 days for Iowa. A real-time interactive visualization interface was developed using past rainfall intensity data. The interface creates community-based rainfall products on demand, using the watershed boundary of each community as a mask. Each individual rainfall pixel is tracked along the drainage network, and those that drain to the same pixel location are accumulated. The interface loads recent rainfall data at five-minute intervals and combines them with current values. The latest web technologies, including HTML5 Canvas and JavaScript, were used to develop the interface, and its performance is optimized to run smoothly on modern web browsers. The interface controls allow users to change internal parameters of the system and the operating conditions of the animation. The interface will help communities understand the effects of rainfall on water transport in stream and river networks and make better-informed decisions regarding the threat of floods. This presentation provides an overview of a unique visualization interface and discusses future plans for real-time dynamic presentations of streamflow forecasting.
Students' Understanding of Salt Dissolution: Visualizing Animation in the Chemistry Classroom
NASA Astrophysics Data System (ADS)
Malkoc, Ummuhan
The present study explored the effect of animation implementation in learning a chemistry topic. A total of 135 high school students taking a chemistry class were selected for this study (quasi-experimental groups = 67 and control groups = 68). Independent-samples t-tests were run to compare the animation and control groups between and within the schools. The overarching finding of this research indicated that when science teachers used animations while teaching salt dissolution phenomena, students benefited from the application of animations. In addition, the findings informed the TPACK framework regarding the idea that visual tools are important in students' understanding of salt dissolution concepts.
Ultrasonographic anatomy of bearded dragons (Pogona vitticeps).
Bucy, Daniel S; Guzman, David Sanchez-Migallon; Zwingenberger, Allison L
2015-04-15
To determine which organs can be reliably visualized ultrasonographically in bearded dragons (Pogona vitticeps), describe their normal ultrasonographic appearance, and describe an ultrasonographic technique for use with this species. Cross-sectional study. 14 healthy bearded dragons (6 females and 8 males). Bearded dragons were manually restrained in dorsal and sternal recumbency, and coelomic organs were evaluated by use of linear 7- to 15-MHz and microconvex 5- to 8-MHz transducers. Visibility, size, echogenicity, and ultrasound transducer position were assessed for each organ. Coelomic ultrasonography with both microconvex and linear ultrasound transducers allowed for visualization of the heart, pleural surface of the lungs, liver, caudal vena cava, aorta, ventral abdominal vein, gallbladder, fat bodies, gastric fundus, cecum, colon, cloaca, kidneys, and testes or ovaries in all animals. The pylorus was visualized in 12 of 14 animals. The small intestinal loops were visualized in 12 of 14 animals with the linear transducer, but could not be reliably identified with the microconvex transducer. The hemipenes were visualized in 7 of 8 males. The adrenal glands and spleen were not identified in any animal. Anechoic free coelomic fluid was present in 11 of 14 animals. Heart width, heart length, ventricular wall thickness, gastric fundus wall thickness, and height of the caudal poles of the kidneys were positively associated with body weight. Testis width was negatively associated with body weight in males. Results indicated coelomic ultrasonography is a potentially valuable imaging modality for assessment of most organs in bearded dragons and can be performed in unsedated animals.
Vision for perception and vision for action in the primate brain.
Goodale, M A
1998-01-01
Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer to the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on perceptual representations of the world. The two streams of visual processing that have been identified in the primate cerebral cortex are a reflection of these two functions of vision. The dorsal 'action' stream projecting from primary visual cortex to the posterior parietal cortex provides flexible control of more ancient subcortical visuomotor modules for the production of motor acts. The ventral 'perceptual' stream projecting from the primary visual cortex to the temporal lobe provides the rich and detailed representation of the world required for cognitive operations. Both streams process information about the structure of objects and about their spatial locations--and both are subject to the modulatory influences of attention. Each stream, however, uses visual information in different ways. Transformations carried out in the ventral stream permit the formation of perceptual representations that embody the enduring characteristics of objects and their relations; those carried out in the dorsal stream which utilize moment-to-moment information about objects within egocentric frames of reference, mediate the control of skilled actions. Both streams work together in the production of goal-directed behaviour.
Gordo, D G M; Espigolan, R; Tonussi, R L; Júnior, G A F; Bresolin, T; Magalhães, A F Braga; Feitosa, F L; Baldi, F; Carvalheiro, R; Tonhati, H; de Oliveira, H N; Chardulo, L A L; de Albuquerque, L G
2016-05-01
The objective of this study was to determine whether visual scores used as selection criteria in Nellore breeding programs are effective indicators of carcass traits measured after slaughter. Additionally, this study evaluated the effect of different structures of the relationship matrix on the estimation of genetic parameters and on the prediction accuracy of breeding values. There were 13,524 animals for visual scores of conformation (CS), finishing precocity (FP), and muscling (MS) and 1,753, 1,747, and 1,564 for LM area (LMA), backfat thickness (BF), and HCW, respectively. Of these, 1,566 animals were genotyped using a high-density panel containing 777,962 SNP. Six analyses were performed using multitrait animal models, each including the 3 visual scores and 1 carcass trait. For the visual scores, the model included direct additive genetic and residual random effects and the fixed effects of contemporary group (defined by year of birth, management group at yearling, and farm) and the linear effect of age of animal at yearling. The same model was used for the carcass traits, replacing the effect of age of animal at yearling with the linear effect of age of animal at slaughter. The variance and covariance components were estimated by the REML method in analyses using the numerator relationship matrix alone or combining the genomic and the numerator relationship matrices. The heritability estimates for the visual scores obtained with the 2 methods were similar and of moderate magnitude (0.23-0.34), indicating that these traits should respond to direct selection. The heritabilities for LMA, BF, and HCW were 0.13, 0.07, and 0.17, respectively, using the numerator relationship matrix and 0.29, 0.16, and 0.23, respectively, using the combined matrix. The genetic correlations between the visual scores and carcass traits were positive, and higher correlations were generally obtained when the combined matrix was used.
Considering the difficulties and cost of measuring carcass traits postmortem, visual scores of CS, FP, and MS could be used as selection criteria to improve HCW, BF, and LMA. The use of genomic information permitted the detection of greater additive genetic variability for LMA and BF. For HCW, the high magnitude of the genetic correlations with visual scores was probably sufficient to recover genetic variability. The methods provided similar breeding value accuracies, especially for the visual scores.
Hermann, M
2002-05-01
The rigorous implementation of clear preoperative information is mandatory for the patient's understanding, acceptance and written informed consent to all diagnostic and surgical procedures. In the present study, I evaluated whether new media are suitable for conveying basic information to patients; I analysed the merits of computerized animation to illustrate a difficult treatment process, i.e., the progressive steps of a thyroid operation, in comparison to the use of conventional flyers. 3D animation software was employed to illustrate the basic anatomy of the thyroid and the larynx; the principle of thyroidectomy was explained by visualizing the surgical procedure step by step. Finally, the possible complications that may result from the intraoperative manipulations were also visually explained. Eighty patients entered a prospective randomisation: on the day before surgery, group 1 watched the computer animation, whereas group 2 was given the identical information in a written text (= standard flyer). The evaluation included a questionnaire with scores of 1-5, rating the patients' understanding, subjective and objective knowledge, emotional factors like anxiety and trust, and the willingness to undergo an operation. Understanding of and subjective knowledge about the surgical procedure and possible complications, the degree of trust in professional treatment, the reduction in anxiety and readiness for the operation were significantly better after watching the computer animation than after reading the text. However, active knowledge did not improve significantly. The interest in the preoperative information was high in both groups. The benefit of computer animation was confirmed in a second inquiry; patients who had only read the text showed a significant improvement in these parameters after an additional exposure to the video animation. Preoperative surgical information can be optimized by presenting the operative procedure via computer animation.
Nowadays, several types of new media such as the world wide web, CD, DVD, and digital TV are readily available and--as shown here--suitable for effective visual explanation. Most patients are familiar with acquiring new information by one of these means. An appropriately designed 3D representation is met with a high level of acceptance, as the present study clearly shows. Modern patient-based information systems are necessary. They can no longer be the sole responsibility of the medical profession, but must be on the agenda of hospital managements and of medical care systems as well.
Cross-modal individual recognition in wild African lions.
Gilfillan, Geoffrey; Vitale, Jessica; McNutt, John Weldon; McComb, Karen
2016-08-01
Individual recognition is considered to have been fundamental in the evolution of complex social systems and is thought to be a widespread ability throughout the animal kingdom. Although robust evidence for individual recognition remains limited, recent experimental paradigms that examine cross-modal processing have demonstrated individual recognition in a range of captive non-human animals. It is now highly relevant to test whether cross-modal individual recognition exists within wild populations and thus examine how it is employed during natural social interactions. We address this question by testing audio-visual cross-modal individual recognition in wild African lions (Panthera leo) using an expectancy-violation paradigm. When presented with a scenario where the playback of a loud-call (roaring) broadcast from behind a visual block is incongruent with the conspecific previously seen there, subjects responded more strongly than during the congruent scenario where the call and individual matched. These findings suggest that lions are capable of audio-visual cross-modal individual recognition and provide a useful method for studying this ability in wild populations. © 2016 The Author(s).
Eyes Matched to the Prize: The State of Matched Filters in Insect Visual Circuits.
Kohn, Jessica R; Heath, Sarah L; Behnia, Rudy
2018-01-01
Confronted with an ever-changing visual landscape, animals must be able to detect relevant stimuli and translate this information into behavioral output. A visual scene contains an abundance of information: to interpret the entirety of it would be uneconomical. To optimally perform this task, neural mechanisms exist to enhance the detection of important features of the sensory environment while simultaneously filtering out irrelevant information. This can be accomplished by using a circuit design that implements specific "matched filters" that are tuned to relevant stimuli. Following this rule, the well-characterized visual systems of insects have evolved to streamline feature extraction on both a structural and functional level. Here, we review examples of specialized visual microcircuits for vital behaviors across insect species, including feature detection, escape, and estimation of self-motion. Additionally, we discuss how these microcircuits are modulated to weigh relevant input with respect to different internal and behavioral states.
Epidemiology and quality assurance: applications at farm level.
Noordhuizen, J P; Frankena, K
1999-03-29
Animal production is relevant with respect to farm income and the position of the sector in the market, but also with respect to the quality and safety of products of animal origin, as related to public health. Animal production is part of a chain of food production. Therefore, producers have to take consumer expectations and demands in the domains of animal health, welfare and environment into account. A different attitude towards production has to be adopted; this attitude can be expressed in good farming practice (GFP) codes. Farmers who focus on quality in its broadest sense need a system supporting them in their management and control of quality risks. Generally speaking, there are three systems for that purpose: GFP, ISO and HACCP. If animal health is regarded as a feature of quality, as welfare and environmental issues may likewise be, then animal health care can be executed following quality control principles. The HACCP concept is well suited for quality control at farm level, involving risk identification and risk management. The on-farm monitoring and surveillance system of critical control points in the animal production process is the most important tool in this procedure. Principles for HACCP application as well as the certification fitness of HACCP are elaborated upon. They are illustrated using salmonellosis in meat-pig farms as the objective of an HACCP approach. It is further discussed that, in addition to animal health and quality, animal welfare and environmental issues could also be covered by an HACCP-like system in an integrated manner. Ultimately, the HACCP modules could end up in an overall ISO certification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McAllister, R.S.
Contents: Visual Acquisition Functions in Operational Environments; Investigation of Causes of Military Aircraft Accidents Involving Pilot Vertigo/Disorientation; Long Term Pulmonary Effects of Repeated Use of 100% Oxygen; Effects of Microwave Radiation on Naval Personnel; Effects of Extremely Low Frequency Radiation on Man; Behavioral Characteristics of Monkeys and Rats Irradiated with Microwaves; Evaluation of the Squirrel Monkey (Saimiri sciureus) as an Experimental Animal Model for Dysbaric Osteonecrosis; Oculovestibular Effects on Visual Performance in Moving Military Systems; Chronic Exposure of Mammals to Non-ionizing Electric and Magnetic Fields--Physiological and Psychophysiological Effects; and Open Literature Publications by Staff Members.
Single and Multiple Visual Systems in Arthropods
Wald, George
1968-01-01
Extraction of two visual pigments from crayfish eyes prompted an electrophysiological examination of the role of visual pigments in the compound eyes of six arthropods. The intact animals were used; in crayfishes, isolated eyestalks were used as well. Thresholds were measured in terms of the absolute or relative numbers of photons per flash at various wavelengths needed to evoke a constant amplitude of electroretinogram, usually 50 µV. Two species of crayfish, as well as the green crab, possess blue- and red-sensitive receptors apparently arranged for color discrimination. In the northern crayfish, Orconectes virilis, the spectral sensitivity of the dark-adapted eye is maximal at about 550 mµ, and on adaptation to bright red or blue lights breaks into two functions with λmax respectively at about 435 and 565 mµ, apparently emanating from different receptors. The swamp crayfish, Procambarus clarkii, displays a maximum sensitivity when dark-adapted at about 570 mµ that breaks on color adaptation into blue- and red-sensitive functions with λmax at about 450 and 575 mµ, again involving different receptors. Similarly the green crab, Carcinides maenas, presents a dark-adapted sensitivity maximal at about 510 mµ that divides on color adaptation into sensitivity curves maximal near 425 and 565 mµ. Each of these organisms thus possesses an apparatus adequate for at least two-color vision, resembling that of human green-blinds (deuteranopes). The visual pigments of the red-sensitive systems have been extracted from the crayfish eyes. The horseshoe crab, Limulus, and the lobster each possesses a single visual system, with λmax respectively at 520 and 525 mµ. Each of these is invariant with color adaptation. In each case the visual pigment had already been identified in extracts. The spider crab, Libinia emarginata, presents another variation. 
It possesses two visual systems apparently differentiated, not for color discrimination but for use in dim and bright light, like vertebrate rods and cones. The spectral sensitivity of the dark-adapted eye is maximal at about 490 mµ and on light adaptation, whether to blue, red, or white light, is displaced toward shorter wavelengths in what is essentially a reverse Purkinje shift. In all these animals dark adaptation appears to involve two phases: a rapid, hyperbolic fall of log threshold associated probably with visual pigment regeneration, followed by a slow, almost linear fall of log threshold that may be associated with pigment migration. PMID:5641632
NASA Astrophysics Data System (ADS)
Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.
2014-02-01
Animal models of human diseases play an important role in studying and advancing our understanding of these conditions, allowing molecular level studies of pathogenesis as well as testing of new therapies. Recently several non-invasive imaging modalities including Fundus Camera, Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT) have been successfully applied to monitor changes in the retinas of the living animals in experiments in which a single animal is followed over a portion of its lifespan. Here we evaluate the capabilities and limitations of these three imaging modalities for visualization of specific structures in the mouse eye. Example images acquired from different types of mice are presented. Future directions of development for these instruments and potential advantages of multi-modal imaging systems are discussed as well.
Muellner, Ulrich J; Vial, Flavie; Wohlfender, Franziska; Hadorn, Daniela; Reist, Martin; Muellner, Petra
2015-01-01
The reporting of outputs from health surveillance systems should be done in a near real-time and interactive manner in order to provide decision makers with powerful means to identify, assess, and manage health hazards as early and efficiently as possible. While this is currently rarely the case in veterinary public health surveillance, reporting tools do exist for the visual exploration and interactive interrogation of health data. In this work, we used tools freely available from the Google Maps and Charts libraries to develop a web application reporting health-related data derived from slaughterhouse surveillance and from a newly established web-based equine surveillance system in Switzerland. Both sets of tools allowed entry-level usage with no or minimal programming skills while being flexible enough to cater for more complex scenarios for users with greater programming skills. In particular, interfaces linking statistical software and Google tools provide additional analytical functionality (such as algorithms for the detection of unusually high case occurrences) for inclusion in the reporting process. We show that such powerful approaches could improve the timely dissemination and communication of technical information to decision makers and other stakeholders and could foster the early-warning capacity of animal health surveillance systems.
Visualizing topography: Effects of presentation strategy, gender, and spatial ability
NASA Astrophysics Data System (ADS)
McAuliffe, Carla
2003-10-01
This study investigated the effect of different presentation strategies (2-D static visuals, 3-D animated visuals, and 3-D interactive, animated visuals) and gender on achievement, time spent on the visual treatment, and attitude during a computer-based science lesson about reading and interpreting topographic maps. The study also examined the relationship of spatial ability and prior knowledge to gender, achievement, and time spent on the visual treatment. Students enrolled in high school chemistry-physics were pretested and given two spatial ability tests. They were blocked by gender and randomly assigned to one of three levels of presentation strategy or to the control group. After controlling for the effects of spatial ability and prior knowledge with analysis of covariance, three significant differences were found between the versions: (a) the 2-D static treatment group scored significantly higher on the posttest than the control group; (b) the 3-D animated treatment group scored significantly higher on the posttest than the control group; and (c) the 2-D static treatment group scored significantly higher on the posttest than the 3-D interactive animated treatment group. Furthermore, the 3-D interactive animated treatment group spent significantly more time on the visual screens than the 2-D static treatment group. Analyses of student attitudes revealed that most students felt the landform visuals in the computer-based program helped them learn, but not in a way they would describe as fun. Significant differences in attitude were found by treatment and by gender. In contrast to findings from other studies, no gender differences were found on either of the two spatial tests given in this study. Cognitive load, cognitive involvement, and solution strategy are offered as three key factors that may help explain the results of this study. 
Implications for instructional design include suggestions about the use of 2-D static, 3-D animated and 3-D interactive animations as well as a recommendation about the inclusion of pretests in similar instructional programs. Areas for future research include investigating the effects of combinations of presentation strategies, continuing to examine the role of spatial ability in science achievement, and gaining cognitive insights about what it is that students do when learning to read and interpret topographic maps.
Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour.
Liu, Bao-Hua; Huberman, Andrew D; Scanziani, Massimo
2016-10-20
The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement, mediated by the brainstem accessory optic system, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei, cortical lesions have suggested that the visual cortex might also be involved. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment, is diminished by silencing the visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area principally studied for its sensory processing function, to plastically adapt the execution of innate motor behaviours.
Li, Lei; Sahi, Sunil K; Peng, Mingying; Lee, Eric B; Ma, Lun; Wojtowicz, Jennifer L; Malin, John H; Chen, Wei
2016-02-10
We developed new optic devices - singly-doped luminescence glasses and nanoparticle-coated lenses that convert UV light to visible light - for improvement of visual system functions. Tb(3+)- or Eu(3+)-singly-doped borate glasses and CdS-quantum-dot (CdS-QD) coated lenses efficiently convert UV light to narrow-band green (542 nm) or red (613 nm) light, or to wide-spectrum white light, and thereby provide extra visible light to the eye. In zebrafish (wild-type larvae and adult control animals, retinal degeneration mutants, and light-induced photoreceptor cell degeneration models), the use of Tb(3+)- or Eu(3+)-doped luminescence glass or CdS-QD coated glass lenses provides additional visible light to the rod and cone photoreceptor cells, and thereby improves visual system functions. The data provide proof-of-concept for the future development of optic devices to improve visual system function in patients who suffer from photoreceptor cell degeneration or related retinal diseases.
Visualizing Dispersion Interactions
ERIC Educational Resources Information Center
Gottschalk, Elinor; Venkataraman, Bhawani
2014-01-01
An animation and accompanying activity has been developed to help students visualize how dispersion interactions arise. The animation uses the gecko's ability to walk on vertical surfaces to illustrate how dispersion interactions play a role in macroscale outcomes. Assessment of student learning reveals that students were able to develop…
Riazi, Mariam; Marcario, Joanne K; Samson, Frank K.; Kenjale, Himanshu; Adany, Istvan; Staggs, Vincent; Ledford, Emily; Marquis, Janet; Narayan, Opendra; Cheney, Paul D.
2013-01-01
Our work characterizes the effects of opiate (morphine) dependence on auditory brainstem and visual evoked responses in a rhesus macaque model of neuro-AIDS utilizing a chronic continuous drug delivery paradigm. The goal of this study was to clarify whether morphine is protective, or whether it exacerbates simian immunodeficiency virus (SIV) related systemic and neurological disease. Our model employs a macrophage-tropic CD4/CCR5 co-receptor virus, SIVmac239 (R71/E17), which crosses the blood-brain barrier shortly after inoculation and closely mimics the natural disease course of human immunodeficiency virus (HIV) infection. The cohort was divided into 3 groups: morphine only, SIV only, and SIV + morphine. Evoked potential (EP) abnormalities in sub-clinically infected macaques were evident as early as eight weeks post-inoculation. Prolongations in EP latencies were observed in SIV-infected macaques across all modalities. Animals with the highest CSF viral loads and clinical disease showed more abnormalities than those with sub-clinical disease, confirming our previous work (Raymond et al., 1998, 1999, 2000). Although some differences were observed in auditory and visual evoked potentials in morphine-treated compared to untreated SIV-infected animals, the effects were relatively small and not consistent across evoked potential types. However, morphine-treated animals with subclinical disease had a clear tendency toward higher virus loads in peripheral and CNS tissues (Marcario et al., 2008), suggesting that, had it been possible to follow all animals to end-stage disease, a clearer pattern of evoked potential abnormality might have emerged. PMID:19283490
Rieucau, Guillaume; Burke, Darren
2017-01-01
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors, on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species. PMID:29491965
ParaView visualization of Abaqus output on the mechanical deformation of complex microstructures
NASA Astrophysics Data System (ADS)
Liu, Qingbin; Li, Jiang; Liu, Jie
2017-02-01
Abaqus® is a popular software suite for finite element analysis. It delivers linear and nonlinear analyses of mechanical and fluid dynamics problems, including multi-body systems and multi-physics coupling. However, the visualization capability of Abaqus through its CAE module is limited. Models from microtomography have extremely complicated structures, and Abaqus output datasets are huge, requiring a visualization tool more powerful than Abaqus/CAE. We convert Abaqus output into the XML-based VTK format with a Python script and then use ParaView to visualize the results. Capabilities such as volume rendering, tensor glyphs, superior animation and other filters allow ParaView to produce excellent visualizations. ParaView's parallel visualization makes it possible to visualize very large datasets. To support fully parallel visualization, the Python script partitions the data by reorganizing all nodes, elements and the corresponding results on those nodes and elements. The partitioning scheme minimizes data redundancy and works efficiently. Given its good readability and extensibility, the script can be extended to process other kinds of Abaqus problems. We share the script with Abaqus users on GitHub.
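The authors' GitHub script parses Abaqus output and partitions it for parallel ParaView use; its serialization step can be sketched as below. This is a minimal illustration under assumed inputs (node coordinates, tetrahedral connectivity, one nodal scalar field); the function name and data layout are invented for the sketch, not taken from the paper:

```python
import xml.etree.ElementTree as ET

def mesh_to_vtu(nodes, elements, scalars, name="U"):
    """Serialize a tetrahedral mesh with one nodal scalar field to the
    VTK XML UnstructuredGrid (.vtu) text format that ParaView reads.
    Hypothetical stand-in for the authors' Abaqus-to-VTK converter."""
    root = ET.Element("VTKFile", type="UnstructuredGrid", version="0.1")
    grid = ET.SubElement(root, "UnstructuredGrid")
    piece = ET.SubElement(grid, "Piece",
                          NumberOfPoints=str(len(nodes)),
                          NumberOfCells=str(len(elements)))
    # Nodal coordinates, flattened to "x y z x y z ..."
    pts = ET.SubElement(ET.SubElement(piece, "Points"), "DataArray",
                        type="Float64", NumberOfComponents="3",
                        format="ascii")
    pts.text = " ".join(f"{c:g}" for xyz in nodes for c in xyz)
    # Connectivity, offsets, and cell types (VTK cell type 10 = tetra)
    cells = ET.SubElement(piece, "Cells")
    conn = ET.SubElement(cells, "DataArray", type="Int64",
                         Name="connectivity", format="ascii")
    conn.text = " ".join(str(i) for elem in elements for i in elem)
    offs = ET.SubElement(cells, "DataArray", type="Int64",
                         Name="offsets", format="ascii")
    offs.text = " ".join(str(4 * (k + 1)) for k in range(len(elements)))
    kinds = ET.SubElement(cells, "DataArray", type="UInt8",
                          Name="types", format="ascii")
    kinds.text = " ".join("10" for _ in elements)
    # One nodal result field (e.g. a displacement magnitude)
    pdata = ET.SubElement(piece, "PointData", Scalars=name)
    arr = ET.SubElement(pdata, "DataArray", type="Float64",
                        Name=name, format="ascii")
    arr.text = " ".join(f"{v:g}" for v in scalars)
    return ET.tostring(root, encoding="unicode")
```

Writing the returned string to a `.vtu` file yields something ParaView can open directly; partitioning for parallel rendering would split `nodes`/`elements` into per-rank pieces before serializing each one.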
NASA Astrophysics Data System (ADS)
Moon, Hye Sun
Visuals are extensively used as instructional tools in education to present spatially based information. Recent computer technology allows the generation of 3D animated visuals to extend the presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cues. In this study, three questions are explored: (1) how 3D graphics affect student learning and attitude, in comparison with 2D graphics; (2) how animated graphics affect student learning and attitude, in comparison with static graphics; and (3) whether the use of 3D graphics, when supported by interactive animation, is the most effective visual cue for improving learning and developing positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instructions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest scores of the 3D and 2D graphic conditions. However, students in the 3D graphic condition took less time for information retrieval on the posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest scores of the animated and static graphic conditions. However, students in the animated graphic condition took less time for information retrieval on the posttest than those in the static graphic condition.
(3) Students in the 3D animated graphic condition exhibited more positive attitudes toward instruction than those in other treatment conditions (2D static, 2D animated, and 3D static conditions). No group differences were found in the posttest scores among four treatment conditions. However, students in the 3D animated condition took less time for information retrieval on posttest than those in other treatment conditions.
People can understand descriptions of motion without activating visual motion brain regions
Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina
2013-01-01
What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592
Reduced opsin gene expression in a cave-dwelling fish
Tobler, Michael; Coleman, Seth W.; Perkins, Brian D.; Rosenthal, Gil G.
2010-01-01
Regressive evolution of structures associated with vision in cave-dwelling organisms is the focus of intense research. Most work has focused on differences between extreme visual phenotypes: sighted, surface animals and their completely blind, cave-dwelling counterparts. We suggest that troglodytic systems, comprising multiple populations that vary along a gradient of visual function, may prove critical in understanding the mechanisms underlying initial regression in visual pathways. Gene expression assays of natural and laboratory-reared populations of the Atlantic molly (Poecilia mexicana) revealed reduced opsin expression in cave-dwelling populations compared with surface-dwelling conspecifics. Our results suggest that the reduction in opsin expression in cave-dwelling populations is not phenotypically plastic but reflects a hardwired system not rescued by exposure to light during retinal ontogeny. Changes in opsin gene expression may consequently represent a first evolutionary step in the regression of eyes in cave organisms. PMID:19740890
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist of using the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, or aerial robots, but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected as well as possible from the available image measurements, allowing control of the desired degrees of freedom. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties with respect to stability, robustness to noise or calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
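The classical image-based control law from the visual servoing literature, v = -λ L⁺ (s - s*), can be illustrated in a reduced setting. The sketch below (hypothetical names, not code from the talk) controls only the camera's x/y translation for a single point feature at depth Z, for which the interaction matrix reduces to L = -(1/Z)·I and the law to v = λ·Z·(s - s*):

```python
def ibvs_step(s, s_star, Z, lam=0.5):
    """One iteration of image-based visual servoing for a single point
    feature, controlling only camera x/y translation. With the reduced
    interaction matrix L = -(1/Z) * I, the general law
    v = -lam * pinv(L) @ (s - s*) simplifies to v = lam * Z * (s - s*).
    All numbers here are illustrative."""
    ex, ey = s[0] - s_star[0], s[1] - s_star[1]
    return (lam * Z * ex, lam * Z * ey)

def simulate(s0, s_star, Z, lam=0.5, steps=20, dt=1.0):
    """Iterate the law; the feature kinematics are s_dot = L v = -v/Z,
    so the image error should decay geometrically toward zero."""
    s = list(s0)
    for _ in range(steps):
        vx, vy = ibvs_step(s, s_star, Z, lam)
        s[0] += -vx / Z * dt
        s[1] += -vy / Z * dt
    return s
```

With lam=0.5 the image error is halved at every step, which is the exponential decrease of the error that the control law is designed to enforce.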
Sound imaging of nocturnal animal calls in their natural habitat.
Mizumoto, Takeshi; Aihara, Ikkyu; Otsuka, Takuma; Takeda, Ryu; Aihara, Kazuyuki; Okuno, Hiroshi G
2011-09-01
We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because of the need to distinguish many animals' calls in noisy environments without being able to see them. Our method visualizes the spatial and temporal dynamics using dozens of sound-to-light conversion devices (called "Fireflies") and an off-the-shelf video camera. Each Firefly, which consists of a microphone and a light-emitting diode, emits light when it captures nearby sound. Deploying dozens of Fireflies in a target area, we record the calls of multiple individuals through the video camera. We conducted two experiments, one indoors and the other in the field, using Japanese tree frogs (Hyla japonica). The indoor experiment demonstrates that our method correctly visualizes the frogs' calling behavior, confirming the known pattern: two frogs call either synchronously or in anti-phase. The field experiment (in a rice paddy where Japanese tree frogs live) visualizes the same calling behavior, confirming anti-phase synchronization in the field. These results confirm that our method can visualize the calling behavior of nocturnal animals in their natural habitat.
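Once each Firefly's LED state has been extracted from the video frames, testing for anti-phase calling reduces to comparing two binary call trains. A minimal sketch (a hypothetical helper, not the authors' analysis code):

```python
def call_overlap_fraction(a, b):
    """Fraction of video frames in which both LEDs are lit. For two
    individuals calling in anti-phase (alternating), the value is near
    zero; for synchronous calling it approaches each caller's own duty
    cycle. Inputs are equal-length sequences of 0/1 frame states."""
    assert len(a) == len(b)
    both = sum(1 for x, y in zip(a, b) if x and y)
    return both / len(a)
```

Comparing the measured overlap against the product of the two individual call rates (the independence baseline) distinguishes anti-phase synchrony from mere non-overlap by chance.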
Higher-order neural processing tunes motion neurons to visual ecology in three species of hawkmoths.
Stöckl, A L; O'Carroll, D; Warrant, E J
2017-06-28
To sample information optimally, sensory systems must adapt to the ecological demands of each animal species. These adaptations can occur peripherally, in the anatomical structures of sensory organs and their receptors; and centrally, as higher-order neural processing in the brain. While a rich body of investigations has focused on peripheral adaptations, our understanding is sparse when it comes to central mechanisms. We quantified how peripheral adaptations in the eyes, and central adaptations in the wide-field motion vision system, set the trade-off between resolution and sensitivity in three species of hawkmoths active at very different light levels: nocturnal Deilephila elpenor, crepuscular Manduca sexta, and diurnal Macroglossum stellatarum. Using optical measurements and physiological recordings from the photoreceptors and wide-field motion neurons in the lobula complex, we demonstrate that all three species use spatial and temporal summation to improve visual performance in dim light. The diurnal Macroglossum relies least on summation, but can only see at brighter intensities. Manduca, with large sensitive eyes, relies less on neural summation than the smaller-eyed Deilephila, but both species attain similar visual performance at nocturnal light levels. Our results reveal how the visual systems of these three hawkmoth species are intimately matched to their visual ecologies. © 2017 The Author(s).
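The benefit of summation can be seen with a toy shot-noise model (illustrative only, not the paper's analysis): pooling photoreceptors and lengthening integration time multiplies the photon catch, so the signal-to-noise ratio grows as the square root of the pooled catch, at the cost of spatial and temporal resolution:

```python
import math

def summed_snr(photons_per_receptor, n_spatial, t_ratio):
    """Shot-noise-limited SNR after summation. Pooling n_spatial
    receptors and lengthening integration time by t_ratio multiplies
    the photon catch; for Poisson photon noise, SNR = sqrt(catch).
    Parameter names are invented for this toy model."""
    catch = photons_per_receptor * n_spatial * t_ratio
    return math.sqrt(catch)
```

For example, pooling 4 receptors and integrating 4 times longer quadruples the SNR relative to a single receptor at the original integration time, while coarsening spatial resolution 4-fold and temporal resolution 4-fold.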
Cortesi, Fabio; Musilová, Zuzana; Stieb, Sara M; Hart, Nathan S; Siebeck, Ulrike E; Cheney, Karen L; Salzburger, Walter; Marshall, N Justin
2016-08-15
Animals often change their habitat throughout ontogeny; yet, the triggers for habitat transitions and how these correlate with developmental changes - e.g. physiological, morphological and behavioural - remain largely unknown. Here, we investigated how ontogenetic changes in body coloration and of the visual system relate to habitat transitions in a coral reef fish. Adult dusky dottybacks, Pseudochromis fuscus, are aggressive mimics that change colour to imitate various fishes in their surroundings; however, little is known about the early life stages of this fish. Using a developmental time series in combination with the examination of wild-caught specimens, we revealed that dottybacks change colour twice during development: (i) nearly translucent cryptic pelagic larvae change to a grey camouflage coloration when settling on coral reefs; and (ii) juveniles change to mimic yellow- or brown-coloured fishes when reaching a size capable of consuming juvenile fish prey. Moreover, microspectrophotometric (MSP) and quantitative real-time PCR (qRT-PCR) experiments show developmental changes of the dottyback visual system, including the use of a novel adult-specific visual gene (RH2 opsin). This gene is likely to be co-expressed with other visual pigments to form broad spectral sensitivities that cover the medium-wavelength part of the visible spectrum. Surprisingly, the visual modifications precede changes in habitat and colour, possibly because dottybacks need to first acquire the appropriate visual performance before transitioning into novel life stages. © 2016. Published by The Company of Biologists Ltd.
Intuitive representation of surface properties of biomolecules using BioBlender.
Andrei, Raluca Mihaela; Callieri, Marco; Zini, Maria Francesca; Loni, Tiziana; Maraziti, Giuseppe; Pan, Mike Chen; Zoppè, Monica
2012-03-28
In living cells, proteins are in continuous motion and interaction with the surrounding medium and/or other proteins and ligands. These interactions are mediated by protein features such as electrostatic and lipophilic potentials. The availability of protein structures enables the study of their surfaces and surface characteristics, based on atomic contributions. Traditionally, these properties are calculated by physico-chemical programs and visualized as a range of colors that varies according to the tool used, imposing the need for a legend to decode it. Using color to encode both characteristics makes simultaneous visualization almost impossible, so these features are usually shown in separate images. In this work, we describe a novel and intuitive code for the simultaneous visualization of these properties. Recent advances in 3D animation and rendering software have not yet been exploited for the representation of biomolecules in an intuitive, animated form. For our purpose we use Blender, an open-source, free, cross-platform application used professionally for 3D work. On the basis of Blender, we developed BioBlender, dedicated to biological work: the elaboration of protein motion with simultaneous visualization of chemical and physical features. Electrostatic and lipophilic potentials are calculated using physico-chemical software and scripts, organized and accessed through the BioBlender interface. A new visual code is introduced for molecular lipophilic potential: a range of optical features going from smooth-shiny for hydrophobic regions to rough-dull for hydrophilic ones. Electrostatic potential is represented as animated line particles that flow along field lines, proportional to the total charge of the protein. Our system permits visualization of molecular features and, in the case of moving proteins, their continuous perception, calculated for each conformation during motion.
Using real-world tactile and visual cues, the nanoscale world of proteins becomes more understandable and familiar to everyday life, making it easier to introduce "un-seen" phenomena (concepts) such as hydropathy or charges. Moreover, this representation helps the viewer gain insight into molecular function by drawing attention to the most active regions of the protein. The program, available for Windows, Linux and MacOS, can be downloaded freely from the dedicated website http://www.bioblender.eu.
Ibáñez, Alejandro; Polo-Cavia, Nuria; López, Pilar; Martín, José
2014-10-01
Sexual signals can be evolutionarily stable if they are honest and condition-dependent or costly to the signaler. One possible cost is the existence of a trade-off between maintaining the immune system and the elaboration of ornaments. This hypothesis has been experimentally tested in some groups of animals but not in others, such as turtles. We experimentally challenged the immune system of female red-eared sliders, Trachemys scripta elegans, with a bacterial antigen (lipopolysaccharide, LPS) without pathogenic effects to explore whether the immune activation affected visual colorful ornaments of the head. The LPS injection altered the reflectance patterns of color ornaments. In comparison to the control animals, the yellow chin stripes of injected animals exhibited (1) reduced brightness, (2) lower long-wavelength (>470 nm) reflectance, and (3) lower values for carotenoid chroma. The postorbital patches of injected individuals also showed reduced very-long-wavelength (>570 nm) reflectance but did not change in carotenoid chroma. Thus, experimental turtles showed darker and less "yellowish" chin stripes and less "reddish" postorbital patches at the end of the experiment, whereas control turtles did not change their coloration. This is the first experimental evidence supporting the existence of a trade-off between the immune system and the expression of visual ornaments in turtles. We suggest that this trade-off may allow turtles to honestly signal individual quality via characteristics of coloration, which may have an important role in intersexual selection processes.
Neuronal connectome of a sensory-motor circuit for visual navigation
Randel, Nadine; Asadulina, Albina; Bezares-Calderón, Luis A; Verasztó, Csaba; Williams, Elizabeth A; Conzelmann, Markus; Shahidi, Réza; Jékely, Gáspár
2014-01-01
Animals use spatial differences in environmental light levels for visual navigation; however, how light inputs are translated into coordinated motor outputs remains poorly understood. Here we reconstruct the neuronal connectome of a four-eye visual circuit in the larva of the annelid Platynereis using serial-section transmission electron microscopy. In this 71-neuron circuit, photoreceptors connect via three layers of interneurons to motorneurons, which innervate trunk muscles. By combining eye ablations with behavioral experiments, we show that the circuit compares light on either side of the body and stimulates body bending upon left-right light imbalance during visual phototaxis. We also identified an interneuron motif that enhances sensitivity to different light intensity contrasts. The Platynereis eye circuit has the hallmarks of a visual system, including spatial light detection and contrast modulation, illustrating how image-forming eyes may have evolved via intermediate stages contrasting only a light and a dark field during a simple visual task. DOI: http://dx.doi.org/10.7554/eLife.02730.001 PMID:24867217
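The left-right comparison the circuit performs can be caricatured as a simple controller (a hypothetical toy model, not the authors' connectome reconstruction): steering is proportional to the normalized difference between the light intensities reaching the two sides of the body:

```python
def phototaxis_turn(left, right, gain=1.0):
    """Toy left-right light comparator. Returns a signed turn command:
    positive when the left eye receives more light, negative when the
    right does, zero for balanced (or absent) illumination. The gain
    parameter is illustrative, not a measured circuit property."""
    total = left + right
    if total == 0:
        return 0.0
    return gain * (left - right) / total
```

Normalizing by total intensity makes the command depend on contrast rather than absolute brightness, echoing the contrast-modulation motif identified in the circuit.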
Multi-camera real-time three-dimensional tracking of multiple flying animals
Straw, Andrew D.; Branson, Kristin; Neumann, Titus R.; Dickinson, Michael H.
2011-01-01
Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data. The additional capability of tracking in real time—with minimal latency—opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behaviour. Here, we describe a system capable of tracking the three-dimensional position and body orientation of animals such as flies and birds. The system operates with less than 40 ms latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the extended Kalman filter and the nearest neighbour standard filter data association algorithm. In one implementation, an 11-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behaviour of freely flying animals. If combined with other techniques, such as ‘virtual reality’-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals. PMID:20630879
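The paper's tracker couples an extended Kalman filter per target with nearest-neighbour data association. The sketch below substitutes a fixed-gain alpha-beta filter for the EKF to keep the example short and self-contained; all names, gains, and the gating radius are illustrative, not the authors' values:

```python
import math

def nearest_neighbour_assign(predictions, detections, gate=5.0):
    """Greedy nearest-neighbour data association: each predicted track
    position claims its closest unclaimed detection within `gate`.
    Returns a {track_index: detection_index} mapping."""
    pairs = sorted(
        (math.dist(p, d), ti, di)
        for ti, p in enumerate(predictions)
        for di, d in enumerate(detections)
    )
    assigned, used = {}, set()
    for dist, ti, di in pairs:
        if dist <= gate and ti not in assigned and di not in used:
            assigned[ti] = di
            used.add(di)
    return assigned

class AlphaBetaTrack:
    """Fixed-gain position/velocity filter: a simplified stand-in for
    the extended Kalman filter described in the paper."""
    def __init__(self, pos, alpha=0.85, beta=0.3):
        self.pos = list(pos)
        self.vel = [0.0] * len(pos)
        self.alpha, self.beta = alpha, beta

    def predict(self, dt=1.0):
        # Constant-velocity prediction of the next position.
        return [p + v * dt for p, v in zip(self.pos, self.vel)]

    def update(self, measurement, dt=1.0):
        pred = self.predict(dt)
        for i, (pr, m) in enumerate(zip(pred, measurement)):
            r = m - pr                      # innovation (residual)
            self.pos[i] = pr + self.alpha * r
            self.vel[i] += self.beta * r / dt
```

Each frame, tracks are predicted forward, detections are associated to predictions, and matched tracks are updated; the real system does this per camera before triangulating 3D positions.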
Conditioning laboratory cats to handling and transport.
Gruen, Margaret E; Thomson, Andrea E; Clary, Gillian P; Hamilton, Alexandra K; Hudson, Lola C; Meeker, Rick B; Sherman, Barbara L
2013-10-01
As research subjects, cats have contributed substantially to our understanding of biological systems, from the development of mammalian visual pathways to the pathophysiology of feline immunodeficiency virus as a model for human immunodeficiency virus. Few studies have evaluated humane methods for managing cats in laboratory animal facilities, however, in order to reduce fear responses and improve their welfare. The authors describe a behavioral protocol used in their laboratory to condition cats to handling and transport. Such behavioral conditioning benefits the welfare of the cats, the safety of animal technicians and the quality of feline research data.
The Effect of Animated Banner Advertisements on a Visual Search Task
2001-01-01
experimental result calls into question previous advertising tips suggested by WebWeek, cited in [17]. In 1996, the online magazine recommended that site...prone in the presence of animated banners. Keywords Animation, visual search, banner advertisements , flashing INTRODUCTION As processor and Internet...is the best way to represent the selection tool in a toolbar, where each icon must fit in a small area? Photoshop and other popular painting programs
ePMV embeds molecular modeling into professional animation software environments.
Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J
2011-03-09
Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers, we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved. PMID:21397181
Molecular and Cellular Biology Animations: Development and Impact on Student Learning
ERIC Educational Resources Information Center
McClean, Phillip; Johnson, Christina; Rogers, Roxanne; Daniels, Lisa; Reber, John; Slator, Brian M.; Terpstra, Jeff; White, Alan
2005-01-01
Educators often struggle when teaching cellular and molecular processes because typically they have only two-dimensional tools to teach something that plays out in four dimensions. Learning research has demonstrated that visualizing processes in three dimensions aids learning, and animations are effective visualization tools for novice learners…
Code of Federal Regulations, 2010 CFR
2010-01-01
.... Visual study of the physical appearance, physical condition, and behavior of animals (singly or in groups... other than Category II animals, e.g., cats and dogs. Category II animals. Food and fiber animal species...
A simple integrated system for electrophysiologic recordings in animals
Slater, Bernard J.; Miller, Neil R.; Bernstein, Steven L.; Flower, Robert W.
2009-01-01
This technical note describes a modification to a fundus camera that permits simultaneous recording of pattern electroretinograms (pERGs) and pattern visual evoked potentials (pVEPs). The modification consists of placing an organic light-emitting diode (OLED) in the split-viewer pathway of a fundus camera, in a plane conjugate to the subject’s pupil. In this way, a focused image of the OLED can be delivered to a precisely known location on the retina. The advantage of using an OLED is that it can achieve high luminance while maintaining high contrast, and with minimal degradation over time. This system is particularly useful for animal studies, especially when precise retinal positioning is required. PMID:19137347
COMICS: Cartoon Visualization of Omics Data in Spatial Context Using Anatomical Ontologies
Travin, Dmitrii; Popov, Iaroslav; Guler, Arzu Tugce; Medvedev, Dmitry; van der Plas-Duivesteijn, Suzanne; Varela, Monica; Kolder, Iris C R M; Meijer, Annemarie H; Spaink, Herman P; Palmblad, Magnus
2018-01-05
COMICS is an interactive and open-access web platform for integration and visualization of molecular expression data in anatomograms of zebrafish, carp, and mouse model systems. Anatomical ontologies are used to map omics data across experiments and between an experiment and a particular visualization in a data-dependent manner. COMICS is built on top of several existing resources. Zebrafish and mouse anatomical ontologies with their controlled vocabulary (CV) and defined hierarchy are used with the ontoCAT R package to aggregate data for comparison and visualization. Libraries from the QGIS geographical information system are used with the R packages "maps" and "maptools" to visualize and interact with molecular expression data in anatomical drawings of the model systems. COMICS allows users to upload their own data from omics experiments, using any gene or protein nomenclature they wish, as long as CV terms are used to define anatomical regions or developmental stages. Common nomenclatures such as ZFIN gene names and UniProt accessions receive additional support. COMICS can be used to generate publication-quality visualizations of gene and protein expression across experiments. Unlike previous tools that have used anatomical ontologies to interpret imaging data in several animal models, including zebrafish, COMICS is designed to take spatially resolved data generated by dissection or fractionation and display this data in visually clear anatomical representations rather than large data tables. COMICS is optimized for ease-of-use, with a minimalistic web interface and automatic selection of the appropriate visual representation depending on the input data. PMID:29083911
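The ontology-driven aggregation that COMICS performs can be illustrated with a minimal sketch. The anatomy hierarchy, CV terms, and expression values below are invented for illustration; this is not the platform's actual code, which uses the ontoCAT R package:

```python
# Toy version of CV-term roll-up: expression values tagged with
# controlled-vocabulary (CV) anatomy terms are aggregated up the ontology
# hierarchy so that experiments can be compared at a shared anatomical level.
PARENT = {                      # child CV term -> parent CV term (invented)
    "dorsal fin": "fin",
    "caudal fin": "fin",
    "fin": "whole organism",
    "liver": "whole organism",
}

def ancestors(term):
    """Yield the term and every ancestor up to the root."""
    while term is not None:
        yield term
        term = PARENT.get(term)

def aggregate(measurements):
    """Sum expression values onto each CV term on the path to the root."""
    totals = {}
    for term, value in measurements:
        for t in ancestors(term):
            totals[t] = totals.get(t, 0.0) + value
    return totals

# One experiment's abundances, keyed by dissected anatomical region
data = [("dorsal fin", 2.0), ("caudal fin", 1.0), ("liver", 5.0)]
totals = aggregate(data)
print(totals["fin"], totals["whole organism"])
```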
Self-development of visual space perception by learning from the hand
NASA Astrophysics Data System (ADS)
Chung, Jae-Moon; Ohnishi, Noboru
1998-10-01
Animals are thought to gradually develop the ability to interpret images captured on their retinas from birth onward, without an external supervisor. We propose that this visual function is acquired together with the development of hand reaching and grasping operations, which are executed through active interaction with the environment. From the viewpoint that the hand teaches the eye, this paper shows how visual space perception develops in a simulated robot. The robot has a simplified human-like structure used for hand-eye coordination. The experimental results may help validate this description of how the visual space perception of biological systems develops. The description also suggests a way for an intelligent robot to self-calibrate its vision in a learning-by-doing manner, without external supervision.
Temporal Processing in the Olfactory System: Can We See a Smell?
Gire, David H.; Restrepo, Diego; Sejnowski, Terrence J.; Greer, Charles; De Carlos, Juan A.; Lopez-Mascaraque, Laura
2013-01-01
Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear on the surface to be similar temporal coding strategies to convey information to higher areas in the brain. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by the neuroanatomical studies of Ramón y Cajal over a century ago. Consideration of differences in neural activity across sensory systems helps generate new approaches to understanding signal processing. PMID:23664611
Shankar, S; Ellard, C
2000-02-01
Past research has indicated that many species use the time-to-collision variable, but little is known about its neural underpinnings in rodents. In a set of three experiments we set out to replicate and extend the findings of Sun et al. (Sun H-J, Carey DP, Goodale MA. Exp Brain Res 1992;91:171-175) in a visually guided task in Mongolian gerbils, and then investigated the effects of lesions to different cortical areas. We trained Mongolian gerbils to run in the dark toward a target on a computer screen. In some trials the target changed in size as the animal ran toward it, in such a way as to produce 'virtual targets' if the animals were using time-to-collision or contact information. In experiment 1 we confirmed that gerbils use time-to-contact information to modulate their speed of running toward a target. In experiment 2 we established that visual cortex lesions attenuate the ability of lesioned animals to use information from the visual target to guide their run, while frontal cortex lesioned animals are not as severely affected. In experiment 3 we found that small radio-frequency lesions of either area V1 or the lateral extrastriate regions of the visual cortex also affected the use of information from the target to modulate locomotion.
Correction of Refractive Errors in Rhesus Macaques (Macaca mulatta) Involved in Visual Research
Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias
2014-08-01
Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals. PMID:25427343
Code of Federal Regulations, 2012 CFR
2012-01-01
..., inspection. Visual study of the physical appearance, physical condition, and behavior of animals (singly or... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Definitions. 160.1 Section 160.1 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE...
Code of Federal Regulations, 2013 CFR
2013-01-01
..., inspection. Visual study of the physical appearance, physical condition, and behavior of animals (singly or... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Definitions. 160.1 Section 160.1 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE...
Code of Federal Regulations, 2011 CFR
2011-01-01
..., inspection. Visual study of the physical appearance, physical condition, and behavior of animals (singly or... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false Definitions. 160.1 Section 160.1 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE...
Kawai, Nobuyuki; Koda, Hiroki
2016-08-01
Humans quickly detect the presence of evolutionary threats through visual perception. Many theorists have considered humans to be predisposed to respond to both snakes and spiders as evolutionarily fear-relevant stimuli. Evidence supports that human adults, children, and snake-naive monkeys all detect pictures of snakes among pictures of flowers more quickly than vice versa, but recent neurophysiological and behavioral studies suggest that spiders may, in fact, be processed similarly to nonthreat animals. The evidence of quick detection and rapid fear learning by primates is limited to snakes, and no such evidence exists for spiders, suggesting qualitative differences between fear of snakes and fear of spiders. Here, we show that snake-naive Japanese monkeys detect a single snake picture among 8 nonthreat animal pictures (koala) more quickly than vice versa; however, no such difference in detection was observed between spiders and pleasant animals. These robust differences between snakes and spiders are the most convincing evidence that the primate visual system is predisposed to pay attention to snakes but not spiders. These findings suggest that attentional bias toward snakes has an evolutionary basis but that bias toward spiders is more due to top-down, conceptually driven effects of emotion on attention capture. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
IGF-1 Restores Visual Cortex Plasticity in Adult Life by Reducing Local GABA Levels
Maya-Vetencourt, José Fernando; Baroncelli, Laura; Viegi, Alessandro; Tiraboschi, Ettore; Castren, Eero; Cattaneo, Antonino; Maffei, Lamberto
2012-01-01
The central nervous system architecture is markedly modified by sensory experience during early life, but a decline of plasticity occurs with age. Recent studies have challenged this dogma, providing evidence that both pharmacological treatments and paradigms based on the manipulation of environmental stimulation levels can be successfully employed as strategies for enhancing plasticity in the adult nervous system. Insulin-like growth factor 1 (IGF-1) is a peptide implicated in prenatal and postnatal phases of brain development such as neurogenesis, neuronal differentiation, synaptogenesis, and experience-dependent plasticity. Here, using the visual system as a paradigmatic model, we report that IGF-1 reactivates neural plasticity in the adult brain. Exogenous administration of IGF-1 in the adult visual cortex, indeed, restores the susceptibility of cortical neurons to monocular deprivation and promotes the recovery of normal visual functions in adult amblyopic animals. These effects were accompanied by a marked reduction of intracortical GABA levels. Moreover, we show that a transitory increase of IGF-1 expression is associated with the plasticity reinstatement induced by environmental enrichment (EE) and that blocking IGF-1 action by means of the IGF-1 receptor antagonist JB1 prevents EE effects on plasticity processes. PMID:22720172
Visually based path-planning by Japanese monkeys.
Mushiake, H; Saito, N; Sakamoto, K; Sato, Y; Tanji, J
2001-03-01
To construct an animal model of strategy formation, we designed a maze path-finding task. First, we asked monkeys to capture a goal in the maze by moving a cursor on the screen. Cursor movement was linked to movements of each wrist. When the animals learned the association between cursor movement and wrist movement, we established a start and a goal in the maze, and asked them to find a path between them. We found that the animals took the shortest pathway, rather than approaching the goal randomly. We further found that the animals adopted a strategy of selecting a fixed intermediate point in the visually presented maze to select one of the shortest pathways, suggesting a visually based path planning. To examine their capacity to use that strategy flexibly, we transformed the task by blocking pathways in the maze, providing a problem to solve. The animals then developed a strategy of solving the problem by planning a novel shortest path from the start to the goal and rerouting the path to bypass the obstacle.
Wensveen, Paul J; Thomas, Len; Miller, Patrick J O
2015-01-01
Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. 
By systematically accounting for the observation errors in the position fixes, our model provides a quantitative estimate of location uncertainty that can be appropriately incorporated into analyses of animal movement. This generic method has potential application for a wide range of marine animal species and data recording systems.
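The benefit of combining dead-reckoning with sparse Fastloc-GPS fixes, as studied above, can be illustrated with a deliberately simplified one-dimensional example. The speed bias, fix interval, and blend factor below are assumptions for illustration, not the authors' Bayesian state-space model:

```python
# Toy 1-D illustration of why sparse position fixes matter: dead-reckoning
# integrates speed, so a small sensor bias grows into unbounded position
# error, while an occasional GPS fix pulls the estimated track back.
DT = 1.0
TRUE_SPEED, BIASED_SPEED = 1.0, 1.1   # sensor overestimates speed by 10%

true_pos, dr_pos, corrected = 0.0, 0.0, 0.0
for step in range(1, 101):
    true_pos += TRUE_SPEED * DT
    dr_pos += BIASED_SPEED * DT       # pure dead-reckoning: error grows
    corrected += BIASED_SPEED * DT
    if step % 20 == 0:                # a position fix every 20 steps
        # simple blend toward the fix (noise-free here, for clarity)
        corrected += 0.8 * (true_pos - corrected)

print(abs(dr_pos - true_pos), abs(corrected - true_pos))
```

The full method instead estimates the whole track and its uncertainty jointly, with measured error models for both GPS and visual fixes, but the qualitative effect is the same: bounded rather than growing positional error.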
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that perceptual load powerfully affects. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the greatest impact of perceptual load. Evolutionary relevance may thus importantly affect everyday visual information processing.
Correlative visualization techniques for multidimensional data
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Goettsche, Craig
1989-01-01
Critical to the understanding of data is the ability to provide pictorial or visual representation of those data, particularly in support of correlative data analysis. Despite the advancement of visualization techniques for scientific data over the last several years, there are still significant problems in bringing today's hardware and software technology into the hands of the typical scientist. For example, computer science domains outside of computer graphics, such as data management, are also required to make visualization effective. Well-defined, flexible mechanisms for data access and management must be combined with rendering algorithms, data transformation, etc., to form a generic visualization pipeline. A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences. Different classes of data representation techniques must be used within such a framework, which can range from simple, static two- and three-dimensional line plots to animation, surface rendering, and volumetric imaging. Static examples of actual data analyses illustrate the importance of an effective pipeline in a data visualization system.
Are visual peripheries forever young?
Burnat, Kalina
2015-01-01
The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.
Stanger-Hall, Kathrin F; Lloyd, James E
2015-03-01
Animal communication is an intriguing topic in evolutionary biology. In this comprehensive study of visual signal evolution, we used a phylogenetic approach to study the evolution of the flash communication system of North American fireflies. The North American firefly genus Photinus contains 35 described species with simple ON-OFF visual signals, and information on habitat types, sympatric congeners, and predators. This makes them an ideal study system to test hypotheses on the evolution of male and female visual signal traits. Our analysis of 34 Photinus species suggests two temporal pattern generators: one for flash duration and one for flash intervals. Reproductive character displacement was a main factor for signal divergence in male flash duration among sympatric Photinus species. Male flash pattern intervals (i.e., the duration of the dark periods between signals) were positively correlated with the number of sympatric Photuris fireflies, which include predators of Photinus. Females of different Photinus species differ in their response preferences to male traits. As in other communication systems, firefly male sexual signals seem to be a compromise between optimizing mating success (sexual selection) and minimizing predation risk (natural selection). An integrative model for Photinus signal evolution is proposed. © 2015 The Author(s).
Visualizing the spinal neuronal dynamics of locomotion
NASA Astrophysics Data System (ADS)
Subramanian, Kalpathi R.; Bashor, D. P.; Miller, M. T.; Foster, J. A.
2004-06-01
Modern imaging and simulation techniques have enhanced system-level understanding of neural function. In this article, we present an application of interactive visualization to understanding neuronal dynamics causing locomotion of a single hip joint, based on pattern generator output of the spinal cord. Our earlier work visualized cell-level responses of multiple neuronal populations. However, the spatial relationships were abstract, making communication with colleagues difficult. We propose two approaches to overcome this: (1) building a 3D anatomical model of the spinal cord with neurons distributed inside, animated by the simulation and (2) adding limb movements predicted by neuronal activity. The new system was tested using a cat walking central pattern generator driving a pair of opposed spinal motoneuron pools. Output of opposing motoneuron pools was combined into a single metric, called "Net Neural Drive", which generated angular limb movement in proportion to its magnitude. Net neural drive constitutes a new description of limb movement control. The combination of spatial and temporal information in the visualizations elegantly conveys the neural activity of the output elements (motoneurons), as well as the resulting movement. The new system encompasses five biological levels of organization from ion channels to observed behavior. The system is easily scalable, and provides an efficient interactive platform for rapid hypothesis testing.
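The "Net Neural Drive" metric described above can be sketched as follows. This is an illustrative toy model, not the authors' simulation: the pool activities are idealized half-wave-rectified sinusoids standing in for central pattern generator output, and the gain is assumed:

```python
import math

# Toy model of "Net Neural Drive": the outputs of two opposed motoneuron
# pools are combined into a single signed metric, and the hip angle moves
# in proportion to its magnitude.
GAIN = 0.5          # degrees of hip movement per unit net drive (assumed)
DT = 0.01           # simulation step, s

def pool_rate(t, phase):
    """Idealized motoneuron pool firing rate (half-wave-rectified sine)."""
    return max(0.0, math.sin(2 * math.pi * t + phase))

angle = 0.0
angles = []
for i in range(200):                  # two 1 s locomotor cycles
    t = i * DT
    flexor = pool_rate(t, 0.0)
    extensor = pool_rate(t, math.pi)  # anti-phase opposing pool
    net_drive = flexor - extensor     # single signed metric
    angle += GAIN * net_drive * DT    # angular movement ∝ net drive
    angles.append(angle)

print(min(angles), max(angles))
```

Because the pools alternate in anti-phase, the net drive is a signed oscillation and the joint angle swings rhythmically, returning to its starting position at each cycle boundary.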
ERIC Educational Resources Information Center
Lin, Huifen
2011-01-01
The purpose of this study was to investigate the relative effectiveness of different types of visuals (static and animated) and instructional strategies (no strategy, questions, and questions plus feedback) used to complement visualized materials on students' learning of different educational objectives in a computer-based instructional (CBI)…
How the Human Brain Represents Perceived Dangerousness or “Predacity” of Animals
Sha, Long; Guntupalli, J. Swaroop; Oosterhof, Nikolaas; Halchenko, Yaroslav O.; Nastase, Samuel A.; di Oleggio Castello, Matteo Visconti; Abdi, Hervé; Jobst, Barbara C.; Gobbini, M. Ida; Haxby, James V.
2016-01-01
Common or folk knowledge about animals is dominated by three dimensions: (1) level of cognitive complexity or “animacy;” (2) dangerousness or “predacity;” and (3) size. We investigated the neural basis of the perceived dangerousness or aggressiveness of animals, which we refer to more generally as “perception of threat.” Using functional magnetic resonance imaging (fMRI), we analyzed neural activity evoked by viewing images of animal categories that spanned the dissociable semantic dimensions of threat and taxonomic class. The results reveal a distributed network for perception of threat extending along the right superior temporal sulcus. We compared neural representational spaces with target representational spaces based on behavioral judgments and a computational model of early vision and found a processing pathway in which perceived threat emerges as a dominant dimension: whereas visual features predominate in early visual cortex and taxonomy in lateral occipital and ventral temporal cortices, these dimensions fall away progressively from posterior to anterior temporal cortices, leaving threat as the dominant explanatory variable. Our results suggest that the perception of threat in the human brain is associated with neural structures that underlie perception and cognition of social actions and intentions, suggesting a broader role for these regions than has been thought previously, one that includes the perception of potential threat from agents independent of their biological class. SIGNIFICANCE STATEMENT For centuries, philosophers have wondered how the human mind organizes the world into meaningful categories and concepts. Today this question is at the core of cognitive science, but our focus has shifted to understanding how knowledge manifests in dynamic activity of neural systems in the human brain. 
This study advances the young field of empirical neuroepistemology by characterizing the neural systems engaged by an important dimension in our cognitive representation of the animal kingdom ontological subdomain: how the brain represents the perceived threat, dangerousness, or “predacity” of animals. Our findings reveal how activity for domain-specific knowledge of animals overlaps the social perception networks of the brain, suggesting domain-general mechanisms underlying the representation of conspecifics and other animals. PMID:27170133
The Franco-American macaque experiment [bone demineralization of monkeys on Space Shuttle]
NASA Technical Reports Server (NTRS)
Cipriano, Leonard F.; Ballard, Rodney W.
1988-01-01
The details of studies to be carried out jointly by French and American teams on two rhesus monkeys prepared for future experiments aboard the Space Shuttle are discussed together with the equipment involved. Seven science discipline teams were formed, which will study the effects of flight and/or weightlessness on the bone and calcium metabolism, the behavior, the cardiovascular system, the fluid balance and electrolytes, the muscle system, the neurovestibular interactions, and the sleep/biorhythm cycles. New behavioral training techniques were developed, in which the animals were trained to respond to behavioral tasks in order to measure the parameters involving eye/hand coordination, the response time to target tracking, visual discrimination, and muscle forces used by the animals. A large data set will be obtained from different animals on the two to three Space Shuttle flights; the hardware technologies developed for these experiments will be applied for primate experiments on the Space Station.
Predicting Lameness in Sheep Activity Using Tri-Axial Acceleration Signals
Barwick, Jamie; Lamb, David; Dobos, Robin; Schneider, Derek; Welch, Mitchell; Trotter, Mark
2018-01-01
Simple Summary Monitoring livestock farmed under extensive conditions is challenging, particularly when observing animal behaviour at an individual level. Lameness is a disease symptom that has traditionally relied on visual inspection to detect those animals with an abnormal walking pattern. More recently, accelerometer sensors have been used in other livestock industries to detect lame animals. These devices are able to record changes in activity intensity, allowing us to differentiate between a grazing, walking, and resting animal. Using these on-animal sensors, grazing, standing, walking, and lame walking were accurately detected from an ear-attached sensor. With further development, this classification algorithm could be linked with an automatic livestock monitoring system to provide real-time information on individual health status, something that is not practical under current extensive livestock production systems. Abstract Lameness is a clinical symptom associated with a number of sheep diseases around the world, having adverse effects on weight gain, fertility, and lamb birth weight, and increasing the risk of secondary diseases. Current methods to identify lame animals rely on labour-intensive visual inspection. The aim of the current study was to determine the ability of a collar-, leg-, and ear-attached tri-axial accelerometer to discriminate between sound and lame gait movement in sheep. Data were separated into 10 s mutually exclusive behaviour epochs and subjected to Quadratic Discriminant Analysis (QDA). Initial analysis showed the high misclassification of lame grazing events with sound grazing and standing from all deployment modes. The final classification model, which included lame walking and all sound activity classes, yielded a prediction accuracy for lame locomotion of 82%, 35%, and 87% for the ear, collar, and leg deployments, respectively. 
Misclassification of sound walking with lame walking within the leg accelerometer dataset highlights the superiority of an ear mode of attachment for the classification of lame gait characteristics based on time series accelerometer data. PMID:29324700
Takacs, Zsofia K.; Bus, Adriana G.
2016-01-01
The present study provides experimental evidence regarding 4–6-year-old children’s visual processing of animated versus static illustrations in storybooks. Thirty nine participants listened to an animated and a static book, both three times, while eye movements were registered with an eye-tracker. Outcomes corroborate the hypothesis that specifically motion is what attracts children’s attention while looking at illustrations. It is proposed that animated illustrations that are well matched to the text of the story guide children to those parts of the illustration that are important for understanding the story. This may explain why animated books resulted in better comprehension than static books. PMID:27790183
Jellema, Tjeerd; Maassen, Gerard; Perrett, David I
2004-07-01
This study investigated the cellular mechanisms in the anterior part of the superior temporal sulcus (STSa) that underlie the integration of different features of the same visually perceived animate object. Three visual features were systematically manipulated: form, motion and location. In 58% of a population of cells selectively responsive to the sight of a walking agent, the location of the agent significantly influenced the cell's response. The influence of position was often evident in intricate two- and three-way interactions with the factors form and/or motion. For only one of the 31 cells tested, the response could be explained by just a single factor. For all other cells at least two factors, and for half of the cells (52%) all three factors, played a significant role in controlling responses. Our findings support a reformulation of the Ungerleider and Mishkin model, which envisages a subdivision of the visual processing into a ventral 'what' and a dorsal 'where' stream. We demonstrated that at least part of the temporal cortex ('what' stream) makes ample use of visual spatial information. Our findings open up the prospect of a much more elaborate integration of visual properties of animate objects at the single cell level. Such integration may support the comprehension of animals and their actions.
Program Aids Visualization Of Data
NASA Technical Reports Server (NTRS)
Truong, L. V.
1995-01-01
The Living Color Frame System (LCFS) is a computer program developed to solve problems that arise in the generation of real-time graphical displays of numerical data and of system statuses. The need for a program like LCFS arises because computer graphics are often applied for better understanding and interpretation of data under observation, and these graphics become more complicated when animation is required during run time. LCFS eliminates the need for custom graphical-display software for application programs. Written in Turbo C++.
A framework for visualization of battlefield network behavior
NASA Astrophysics Data System (ADS)
Perzov, Yury; Yurcik, William
2006-05-01
An extensible network simulation application was developed to study wireless battlefield communications. The application monitors node mobility and depicts broadcast and unicast traffic as expanding rings and directed links. The network simulation was specially designed to support fault injection to show the impact of air strikes on disabling nodes. The application takes standard ns-2 trace files as an input and provides for performance data output in different graphical forms (histograms and x/y plots). Network visualization via animation of simulation output can be saved in AVI format that may serve as a basis for a real-time battlefield awareness system.
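The abstract above notes that the tool takes standard ns-2 trace files as input. As a rough illustration of what consuming such traces involves, the sketch below parses the leading fields of a classic ns-2 wired-format trace line (event, time, source node, destination node, packet type, size). This is a hypothetical minimal parser, not the application's actual code; verify the field layout against your own ns-2 configuration, since trace formats vary by scenario.

```python
# Hypothetical minimal parser for the leading fields of a classic
# ns-2 wired-format trace line. Field order assumed here:
#   event time src dst pkt-type size ...
# (remaining fields such as flags, flow id, and sequence number are ignored).
from dataclasses import dataclass

@dataclass
class TraceEvent:
    event: str      # '+' enqueue, '-' dequeue, 'r' receive, 'd' drop
    time: float     # simulation time in seconds
    src: int        # source node id
    dst: int        # destination node id
    pkt_type: str   # e.g. 'cbr', 'tcp', 'ack'
    size: int       # packet size in bytes

def parse_line(line: str) -> TraceEvent:
    f = line.split()
    return TraceEvent(event=f[0], time=float(f[1]),
                      src=int(f[2]), dst=int(f[3]),
                      pkt_type=f[4], size=int(f[5]))

ev = parse_line("r 1.84375 0 2 cbr 210 ------- 1 0.0 3.1 0 0")
```

A visualizer would iterate over such events, drawing an expanding ring or directed link per received packet at the corresponding simulation time.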
New NASA 3D Animation Shows Seven Days of Simulated Earth Weather
2014-08-11
This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5). This particular run, called Nature Run 2, was executed on a supercomputer, spanned 2 years of simulation time at 30-minute intervals, and produced petabytes of output. The visualization spans a little more than 7 days of simulation time, which is 354 time steps. The time period was chosen because a simulated category-4 typhoon developed off the coast of China. The 7-day period is repeated several times during the course of the visualization. Credit: NASA's Scientific Visualization Studio. Read more or download here: svs.gsfc.nasa.gov/goto?4180
Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour
Liu, Bao-hua; Huberman, Andrew D.; Scanziani, Massimo
2017-01-01
The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections1. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood1–4. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement mediated by the brainstem accessory optic system3,5,6, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision5. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life7–11. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei10–13, cortical lesions have suggested that the visual cortex might also be involved9,14,15. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment11,16–18, is diminished by silencing visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area that has been principally studied for its sensory processing function19, to plastically adapt the execution of innate motor behaviours. PMID:27732573
A simpler primate brain: the visual system of the marmoset monkey
Solomon, Samuel G.; Rosa, Marcello G. P.
2014-01-01
Humans are diurnal primates with high visual acuity at the center of gaze. Although primates share many similarities in the organization of their visual centers with other mammals, and even other species of vertebrates, their visual pathways also show unique features, particularly with respect to the organization of the cerebral cortex. Therefore, in order to understand some aspects of human visual function, we need to study non-human primate brains. Which species is the most appropriate model? Macaque monkeys, the most widely used non-human primates, are not an optimal choice in many practical respects. For example, much of the macaque cerebral cortex is buried within sulci, and is therefore inaccessible to many imaging techniques, and the postnatal development and lifespan of macaques are prohibitively long for many studies of brain maturation, plasticity, and aging. In these and several other respects the marmoset, a small New World monkey, represents a more appropriate choice. Here we review the visual pathways of the marmoset, highlighting recent work that brings these advantages into focus, and identify where additional work needs to be done to link marmoset brain organization to that of macaques and humans. We will argue that the marmoset monkey provides a good subject for studies of a complex visual system, which will likely allow an important bridge linking experiments in animal models to humans. PMID:25152716
Advances and limitations of visual conditioning protocols in harnessed bees.
Avarguès-Weber, Aurore; Mota, Theo
2016-10-01
Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has been traditionally studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees could associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving visual learning performances of harnessed bees led many authors to adopt distinct visual conditioning protocols, altering parameters like harnessing method, nature and duration of visual stimulation, number of trials, inter-trial intervals, among others. As a result, the literature provides data hardly comparable and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on the comparison of diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these diverse conditioning parameters could modulate visual learning performances of harnessed bees. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ashtari, Manzar; Zhang, Hui; Cook, Philip A.; Cyckowski, Laura L.; Shindler, Kenneth S.; Marshall, Kathleen A.; Aravand, Puya; Vossough, Arastoo; Gee, James C.; Maguire, Albert M.; Baker, Chris I.; Bennett, Jean
2015-01-01
Much of our knowledge of the mechanisms underlying plasticity in the visual cortex in response to visual impairment, vision restoration, and environmental interactions comes from animal studies. We evaluated human brain plasticity in a group of patients with Leber’s congenital amaurosis (LCA), who regained vision through gene therapy. Using non-invasive multimodal neuroimaging methods, we demonstrated that reversing blindness with gene therapy promoted long-term structural plasticity in the visual pathways emanating from the treated retina of LCA patients. The data revealed improvements and normalization along the visual fibers corresponding to the site of retinal injection of the gene therapy vector carrying the therapeutic gene in the treated eye compared to the visual pathway for the untreated eye of LCA patients. After gene therapy, the primary visual pathways (for example, geniculostriate fibers) in the treated retina were similar to those of sighted control subjects, whereas the primary visual pathways of the untreated retina continued to deteriorate. Our results suggest that visual experience, enhanced by gene therapy, may be responsible for the reorganization and maturation of synaptic connectivity in the visual pathways of the treated eye in LCA patients. The interactions between the eye and the brain enabled improved and sustained long-term visual function in patients with LCA after gene therapy. PMID:26180100
Learning Protein Structure with Peers in an AR-Enhanced Learning Environment
ERIC Educational Resources Information Center
Chen, Yu-Chien
2013-01-01
Augmented reality (AR) is an interactive system that allows users to interact with virtual objects and the real world at the same time. The purpose of this dissertation was to explore how AR, as a new visualization tool, that can demonstrate spatial relationships by representing three dimensional objects and animations, facilitates students to…
Scene perception and the visual control of travel direction in navigating wood ants
Collett, Thomas S.; Lent, David D.; Graham, Paul
2014-01-01
This review reflects a few of Mike Land's many and varied contributions to visual science. In it, we show for wood ants, as Mike has done for a variety of animals, including readers of this piece, what can be learnt from a detailed analysis of an animal's visually guided eye, head or body movements. In the case of wood ants, close examination of their body movements, as they follow visually guided routes, is starting to reveal how they perceive and respond to their visual world and negotiate a path within it. We describe first some of the mechanisms that underlie the visual control of their paths, emphasizing that vision is not the ant's only sense. In the second part, we discuss how remembered local shape-dependent and global shape-independent features of a visual scene may interact in guiding the ant's path. PMID:24395962
Exploratory visualization of earth science data in a Semantic Web context
NASA Astrophysics Data System (ADS)
Ma, X.; Fox, P. A.
2012-12-01
Earth science data are increasingly unlocked from their local 'safes' and shared online with the global science community as well as the average citizen. The European Union (EU)-funded project OneGeology-Europe (1G-E, www.onegeology-europe.eu) is a typical project that promotes works in that direction. The 1G-E web portal provides easy access to distributed geological data resources across participating EU member states. Similar projects can also be found in other countries or regions, such as the geoscience information network USGIN (www.usgin.org) in the United States, the groundwater information network GIN-RIES (www.gw-info.net) in Canada and the earth science infrastructure AuScope (www.auscope.org.au) in Australia. While data are increasingly made available online, we currently face a shortage of tools and services that support information and knowledge discovery with such data. One reason is that earth science data are recorded in professional language and terms, and people without background knowledge cannot understand their meanings well. The Semantic Web provides a new context to help computers as well as users to better understand the meaning of data and to build applications on it. In this study we aim to chain together Semantic Web technologies (e.g., vocabularies/ontologies and reasoning), data visualization (e.g., an animation underpinned by an ontology) and online earth science data (e.g., available as Web Map Service) to develop functions for information and knowledge discovery. We carried out a case study with data from the 1G-E project. We set up an ontology of the geological time scale using the encoding languages of SKOS (Simple Knowledge Organization System) and OWL (Web Ontology Language) from W3C (World Wide Web Consortium, www.w3.org). Then we developed a Flash animation of the geological time scale using the ActionScript language. 
The animation is underpinned by the ontology and the interrelationships between concepts of geological time scale are visualized in the animation. We linked the animation and the ontology to the online geological data of 1G-E project and developed interactive applications. The animation was used to show legends of rock age layers in geological maps dynamically. In turn, these legends were used as control panels to filter out and generalize geospatial features of certain rock ages on map layers. We tested the functions with maps of various EU member states. As a part of the initial results, legends for rock age layers of EU individual national maps were generated respectively, and the functions for filtering and generalization were examined with the map of United Kingdom. Though new challenges are rising in the tests, like those caused by synonyms (e.g., 'Lower Cambrian' and 'Terreneuvian'), the initial results achieved the designed goals of information and knowledge discovery by using the ontology-underpinned animation. This study shows that (1) visualization lowers the barrier of ontologies, (2) integrating ontologies and visualization adds value to online earth science data services, and (3) exploratory visualization supports the procedure of data processing as well as the display of results.
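The core idea of an SKOS-encoded geological time scale, as described above, is a set of concepts linked by broader/narrower relations that an animation can traverse. The stdlib sketch below models this with plain RDF-style triples; the `example.org` namespace and the two concepts are illustrative assumptions, not the 1G-E project's actual ontology (the abstract's own synonym example, 'Terreneuvian' as a subdivision of the Cambrian, is used for flavor).

```python
# Illustrative sketch: SKOS-style broader/narrower relations for
# geological time-scale concepts, stored as plain (s, p, o) triples.
# Namespace URI and concept choices are assumptions for demonstration.
SKOS = "http://www.w3.org/2004/02/skos/core#"
GTS = "http://example.org/geologic-time#"   # hypothetical namespace

triples = {
    (GTS + "Cambrian", SKOS + "prefLabel", "Cambrian"),
    (GTS + "Terreneuvian", SKOS + "prefLabel", "Terreneuvian"),
    # Terreneuvian is an epoch within the Cambrian period
    (GTS + "Terreneuvian", SKOS + "broader", GTS + "Cambrian"),
}

def narrower_of(concept):
    """Concepts whose skos:broader relation points at `concept`."""
    return [s for (s, p, o) in triples
            if p == SKOS + "broader" and o == concept]

subs = narrower_of(GTS + "Cambrian")
```

An animation underpinned by such a structure can, for instance, expand a clicked period into its epochs, or resolve a synonym query by walking the label and broader links.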
Animating streamlines with repeated asymmetric patterns for steady flow visualization
NASA Astrophysics Data System (ADS)
Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee
2012-01-01
Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.
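The essence of the repeated asymmetric pattern described above can be caricatured as a cyclic luminance ramp advected along each streamline, with advection speed tied to local velocity magnitude and the ramp's asymmetry providing the directional cue. The sketch below is an assumption-laden simplification of that idea, not the paper's exact formulation; `period` and `asym` are invented parameters.

```python
# Simplified sketch (not the paper's exact model) of a cyclic,
# variable-speed asymmetric luminance pattern along a streamline.
# The sawtooth's unequal rise/fall makes flow direction readable.
def rap_luminance(arc_len, t, velocity_mag, period=1.0, asym=0.8):
    """Luminance in [0, 1] at arc length `arc_len` and time `t`.
    `asym` in (0, 1) is the fraction of each cycle spent rising;
    the pattern advects at a rate proportional to velocity magnitude."""
    phase = (arc_len / period - velocity_mag * t) % 1.0
    if phase < asym:
        return phase / asym                  # slow rise over most of the cycle
    return (1.0 - phase) / (1.0 - asym)      # fast fall: the asymmetry cue

# Sample the pattern along one streamline at t = 0.
samples = [rap_luminance(s * 0.1, t=0.0, velocity_mag=0.5) for s in range(10)]
```

In a full implementation this luminance channel would be combined with the paper's inter-streamline phase synchronization and orthogonal hue streaks in HSL space; the sketch covers only the per-streamline temporal pattern.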
Do cattle (Bos taurus) retain an association of a visual cue with a food reward for a year?
Hirata, Masahiko; Takeno, Nozomi
2014-06-01
Use of visual cues to locate specific food resources from a distance is a critical ability of animals foraging in a spatially heterogeneous environment. However, relatively little is known about how long animals can retain the learned cue-reward association without reinforcement. We compared feeding behavior of experienced and naive Japanese Black cows (Bos taurus) in discovering food locations in a pasture. Experienced animals had been trained to respond to a visual cue (plastic washtub) for a preferred food (grain-based concentrate) 1 year prior to the experiment, while naive animals had no exposure to the cue. Cows were tested individually in a test arena including tubs filled with the concentrate on three successive days (Days 1-3). Experienced cows located the first tub more quickly and visited more tubs than naive cows on Day 1 (usually P < 0.05), but these differences disappeared on Days 2 and 3. The performance of experienced cows tended to increase from Day 1 to Day 2 and level off thereafter. Our results suggest that Japanese Black cows can associate a visual cue with a food reward within a day and retain the association for 1 year despite a slight decay. © 2014 Japanese Society of Animal Science.
Solar System Symphony: Combining astronomy with live classical music
NASA Astrophysics Data System (ADS)
Kremer, Kyle; WorldWide Telescope
2017-01-01
Solar System Symphony is an educational outreach show which combines astronomy visualizations and live classical music. As musicians perform excerpts from Holst’s “The Planets” and other orchestral works, visualizations developed using WorldWide Telescope and NASA images and animations are projected on-stage. Between each movement of music, a narrator guides the audience through scientific highlights of the solar system. The content of Solar System Symphony is geared toward a general audience, particularly targeting K-12 students. The hour-long show not only presents a new medium for exposing a broad audience to astronomy, but also provides universities an effective tool for facilitating interdisciplinary collaboration between two divergent fields. The show was premiered at Northwestern University in May 2016 in partnership with Northwestern’s Bienen School of Music and was recently performed at the Colburn Conservatory of Music in November 2016.
Numerical cognition is resilient to dramatic changes in early sensory experience.
Kanjlia, Shipra; Feigenson, Lisa; Bedny, Marina
2018-06-20
Humans and non-human animals can approximate large visual quantities without counting. The approximate number representations underlying this ability are noisy, with the amount of noise proportional to the quantity being represented. Numerate humans also have access to a separate system for representing exact quantities using number symbols and words; it is this second, exact system that supports most of formal mathematics. Although numerical approximation abilities and symbolic number abilities are distinct in representational format and in their phylogenetic and ontogenetic histories, they appear to be linked throughout development--individuals who can more precisely discriminate quantities without counting are better at math. The origins of this relationship are debated. On the one hand, symbolic number abilities may be directly linked to, perhaps even rooted in, numerical approximation abilities. On the other hand, the relationship between the two systems may simply reflect their independent relationships with visual abilities. To test this possibility, we asked whether approximate number and symbolic math abilities are linked in congenitally blind individuals who have never experienced visual sets or used visual strategies to learn math. Congenitally blind and blind-folded sighted participants completed an auditory numerical approximation task, as well as a symbolic arithmetic task and non-math control tasks. We found that the precision of approximate number representations was identical across congenitally blind and sighted groups, suggesting that the development of the Approximate Number System (ANS) does not depend on visual experience. Crucially, the relationship between numerical approximation and symbolic math abilities is preserved in congenitally blind individuals. These data support the idea that the Approximate Number System and symbolic number abilities are intrinsically linked, rather than indirectly linked through visual abilities. 
Copyright © 2018. Published by Elsevier B.V.
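The "noisy, with noise proportional to quantity" property of approximate number representations described above is the standard scalar-variability model, and its behavioral signature (ratio-dependent discrimination) can be simulated in a few lines. The sketch below is an illustrative model under an assumed Weber fraction of 0.2, not the paper's analysis.

```python
# Illustrative scalar-variability model of the Approximate Number System:
# a percept of quantity n is Gaussian with SD proportional to n
# (Weber fraction w = 0.2 is an assumed, typical-adult value).
import random

def ans_estimate(n, w, rng):
    """Noisy percept of quantity n under Weber fraction w."""
    return rng.gauss(n, w * n)

def discriminates(n1, n2, w=0.2, trials=10_000, seed=0):
    """Proportion of trials on which the larger set is judged larger."""
    rng = random.Random(seed)
    lo, hi = min(n1, n2), max(n1, n2)
    hits = sum(ans_estimate(hi, w, rng) > ans_estimate(lo, w, rng)
               for _ in range(trials))
    return hits / trials

easy = discriminates(10, 20)   # 1:2 ratio -> near-ceiling accuracy
hard = discriminates(10, 11)   # 10:11 ratio -> close to chance
```

Because accuracy depends only on the ratio of the two quantities, this model reproduces the ratio-dependent performance that tasks like the auditory approximation task above measure, independent of the sensory modality in which the sets are presented.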
The effect of animation on learning action symbols by individuals with intellectual disabilities.
Fujisawa, Kazuko; Inoue, Tomoyoshi; Yamana, Yuko; Hayashi, Humirhiro
2011-03-01
The purpose of the present study was to investigate whether participants with intellectual impairments could benefit from the movement associated with animated pictures while they were learning symbol names. Sixteen school students, whose linguistic-developmental age ranged from 38 to 91 months, participated in the experiment. They were taught 16 static visual symbols and the corresponding action words (naming task) in two sessions conducted one week apart. In the experimental condition, animation was employed to facilitate comprehension, whereas no animation was used in the control condition. Enhancement of learning was shown in the experimental condition, suggesting that the participants benefited from animated symbols. Furthermore, it was found that the lower the linguistic developmental age, the more effective the animated cue was in learning static visual symbols.
Classical Cosmology Through Animation Stories
NASA Astrophysics Data System (ADS)
Mijic, Milan; Kang, E. Y. E.; Longson, T.; Cal State LA SciVi Project
2010-05-01
Computer animations are a powerful tool for explanation and communication of ideas, especially to a younger generation. Our team completed a three part sequence of short, computer animated stories about the insight and discoveries that lead to the understanding of the overall structure of the universe. Our principal characters are Immanuel Kant, Henrietta Leavitt, and Edwin Hubble. We utilized animations to model and visualize the physical concepts behind each discovery and to recreate the characters, locations, and flavor of the time. The animations vary in length from 6 to 11 minutes. The instructors or presenters may wish to utilize them separately or together. The animations may be used for learning classical cosmology in a visual way in GE astronomy courses, in pre-college science classes, or in public science education setting.
Valerio, Stephane; Clark, Benjamin J.; Chan, Jeremy H. M.; Frost, Carlton P.; Harris, Mark J.; Taube, Jeffrey S.
2010-01-01
Previous studies have identified neurons throughout the rat limbic system that fire as a function of the animal's head direction (HD). This HD signal is particularly robust when rats locomote in the horizontal and vertical planes, but is severely attenuated when locomoting upside-down (Calton & Taube, 2005). Given the hypothesis that the HD signal represents an animal's sense of its directional heading, we evaluated whether rats could accurately navigate in an inverted (upside-down) orientation. The task required the animals to find an escape hole while locomoting inverted on a circular platform suspended from the ceiling. In experiment 1, Long-Evans rats were trained to navigate to the escape hole by locomoting from either one or four start points. Interestingly, no animals from the 4-start point group reached criterion, even after 30 days of training. Animals in the 1-start point group reached criterion after about 6 training sessions. In Experiment 2, probe tests revealed that animals navigating from either 1- or 2-start points utilized distal visual landmarks for accurate orientation. However, subsequent probe tests revealed that their performance was markedly attenuated when required to navigate to the escape hole from a novel starting point. This absence of flexibility while navigating upside-down was confirmed in experiment 3 where we show that the rats do not learn to reach a place, but instead learn separate trajectories to the target hole(s). Based on these results we argue that inverted navigation primarily involves a simple directional strategy based on visual landmarks. PMID:20109566
Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.
Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James
2016-03-21
Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
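The statistical intuition behind the spatial and temporal summation described above is that pooling k independent photon-noise-limited samples shrinks the noise by roughly the square root of k, at the cost of spatial and temporal resolution. The sketch below illustrates that trade-off with a toy Gaussian approximation to photon noise; it is a didactic model, not the paper's neural measurements, and the pool size of 16 is an arbitrary choice.

```python
# Toy model of summation in dim light: averaging k photon-noise-limited
# samples improves SNR ~sqrt(k). Gaussian approximation to Poisson noise
# (SD = sqrt(mean)) is assumed for brevity.
import random
import statistics

def pooled_response(mean_rate, k, rng):
    """Average of k noisy samples of a signal with Poisson-like noise."""
    return statistics.fmean([rng.gauss(mean_rate, mean_rate ** 0.5)
                             for _ in range(k)])

rng = random.Random(1)
no_sum = [pooled_response(4.0, 1, rng) for _ in range(2000)]    # no pooling
summed = [pooled_response(4.0, 16, rng) for _ in range(2000)]   # pool 16 samples

# Pooling 16 samples should reduce the response noise by about 4x.
ratio = statistics.stdev(no_sum) / statistics.stdev(summed)
```

The paper's key finding goes beyond this baseline: spatial and temporal summation in the hawkmoth combine supralinearly, i.e. better than this independent-sample model would predict.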
It’s Not Easy Being Blue: Are There Olfactory and Visual Trade-Offs in Plant Signalling?
Valenta, Kim; Brown, Kevin A.; Melin, Amanda D.; Monckton, Spencer K.; Styler, Sarah A.; Jackson, Derek A.; Chapman, Colin A.
2015-01-01
Understanding the signals used by plants to attract seed dispersers is a pervasive quest in evolutionary and sensory biology. Fruit size, colour, and odour variation have long been discussed in the controversial context of dispersal syndromes targeting olfactory-oriented versus visually-oriented foragers. Trade-offs in signal investment could impose important physiological constraints on plants, yet have been largely ignored. Here, we measure the reflectance and volatile organic compounds of a community of Malagasy plants, and our results indicate that extant plant signals may represent a trade-off between olfactory and chromatic signals. Blue pigments are the most visually effective: blue is a colour that is visually salient to all known seed-dispersing animals within the study system. Additionally, plants with blue-reflecting fruits are less odiferous than plants that reflect primarily in other regions of the colour spectrum. PMID:26115040
The onset of visual experience gates auditory cortex critical periods
Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.
2016-01-01
Sensory systems influence one another during development and deprivation can lead to cross-modal plasticity. As auditory function begins before vision, we investigate the effect of manipulating visual experience during auditory cortex critical periods (CPs) by assessing the influence of early, normal and delayed eyelid opening on hearing loss-induced changes to membrane and inhibitory synaptic properties. Early eyelid opening closes the auditory cortex CPs precociously and dark rearing prevents this effect. In contrast, delayed eyelid opening extends the auditory cortex CPs by several additional days. The CP for recovery from hearing loss is also closed prematurely by early eyelid opening and extended by delayed eyelid opening. Furthermore, when coupled with transient hearing loss that animals normally fully recover from, very early visual experience leads to inhibitory deficits that persist into adulthood. Finally, we demonstrate a functional projection from the visual to auditory cortex that could mediate these effects. PMID:26786281
ERIC Educational Resources Information Center
Kartiko, Iwan; Kavakli, Manolya; Cheng, Ken
2010-01-01
As the technology in computer graphics advances, Animated-Virtual Actors (AVAs) in Virtual Reality (VR) applications become increasingly rich and complex. Cognitive Theory of Multimedia Learning (CTML) suggests that complex visual materials could hinder novice learners from attending to the lesson properly. On the other hand, previous studies have…
ERIC Educational Resources Information Center
Al-Balushi, Sulaiman M.; Al-Hajri, Sheikha H.
2014-01-01
The purpose of the current study is to explore the impact of associating animations with concrete models on eleventh-grade students' comprehension of different visual representations in organic chemistry. The study used a post-test control group quasi-experimental design. The experimental group (N = 28) used concrete models, submicroscopic…
ERIC Educational Resources Information Center
Penkunas, Michael J.; Coss, Richard G.
2013-01-01
Recent studies indicate that young children preferentially attend to snakes, spiders, and lions compared with nondangerous species, but these results have yet to be replicated in populations that actually experience dangerous animals in nature. This multi-site study investigated the visual-detection biases of southern Indian children towards two…
Donald E. Zimmerman; Carol Akerelrea; Jane Kapler Smith; Garrett J. O'Keefe
2006-01-01
Natural-resource managers have used a variety of computer-mediated presentation methods to communicate management practices to diverse publics. We explored the effects of visualizing and animating predictions from mathematical models in computerized presentations explaining forest succession (forest growth and change through time), fire behavior, and management options...
A Web-Based Visualization and Animation Platform for Digital Logic Design
ERIC Educational Resources Information Center
Shoufan, Abdulhadi; Lu, Zheng; Huss, Sorin A.
2015-01-01
This paper presents a web-based education platform for the visualization and animation of the digital logic design process. This includes the design of combinatorial circuits using logic gates, multiplexers, decoders, and look-up-tables as well as the design of finite state machines. Various configurations of finite state machines can be selected…
Renoult, J P; Thomann, M; Schaefer, H M; Cheptou, P-O
2013-11-01
Even though the importance of selection for trait evolution is well established, we still lack a functional understanding of the mechanisms underlying phenotypic selection. Because animals necessarily use their sensory system to perceive phenotypic traits, the model of sensory bias assumes that sensory systems are the main determinant of signal evolution. Yet, it has remained poorly known how sensory systems contribute to shaping the fitness surface of selected individuals. In a greenhouse experiment, we quantified the strength and direction of selection on floral coloration in a population of cornflowers exposed to bumblebees as sole pollinators for 4 days. We detected significant selection on the chromatic and achromatic (brightness) components of floral coloration. We then studied whether these patterns of selection are explicable by accounting for the visual system of the pollinators. Using data on bumblebee colour vision, we first showed that bumblebees should discriminate among quantitative colour variants. The observed selection was then compared to the selection predicted by psychophysical models of bumblebee colour vision. The achromatic but not the chromatic channel of the bumblebee's visual system could explain the observed pattern of selection. These results highlight that (i) pollinators can select quantitative variation in floral coloration and could thus account for a gradual evolution of flower coloration, and (ii) stimulation of the visual system represents, at least partly, a functional mechanism potentially explaining pollinators' selection on floral colour variants. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.
Vision in Flies: Measuring the Attention Span
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
A visual stimulus at a particular location of the visual field may elicit a behavior while at the same time equally salient stimuli in other parts do not. This property of visual systems is known as selective visual attention (SVA). The animal is said to have a focus of attention (FoA) which it has shifted to a particular location. Visual attention normally involves an attention span at the location to which the FoA has been shifted. Here the attention span is measured in Drosophila. The fly is tethered and hence has its eyes fixed in space. It can shift its FoA internally. This shift is revealed using two simultaneous test stimuli with characteristic responses at their particular locations. In tethered flight a wild-type fly keeps its FoA at a certain location for up to 4 s. Flies with a mutation in the radish gene, which has been suggested to be involved in attention-like mechanisms, display a reduced attention span of only 1 s. PMID:26848852
CTViz: A tool for the visualization of transport in nanocomposites.
Beach, Benjamin; Brown, Joshua; Tarlton, Taylor; Derosa, Pedro A
2016-05-01
A visualization tool (CTViz) for charge transport processes in 3-D hybrid materials (nanocomposites) was developed, inspired by the need for a graphical application to assist in code debugging and data presentation for an existing in-house code. As the simulation code grew, troubleshooting problems grew increasingly difficult without an effective way to visualize 3-D samples and charge transport in those samples. CTViz is able to produce publication- and presentation-quality visuals of the simulation box, as well as static and animated visuals of the paths of individual carriers through the sample. CTViz was designed to provide a high degree of flexibility in the visualization of the data. A feature that characterizes this tool is the use of shade and transparency levels to highlight important details in the morphology or in the transport paths by hiding or dimming elements of little relevance to the current view. This is fundamental for the visualization of 3-D systems with complex structures. The code presented here provides these required capabilities, but has gone beyond the original design and could be used as is, or easily adapted, for the visualization of other particulate transport where transport occurs on discrete paths. Copyright © 2016 Elsevier Inc. All rights reserved.
Neural mechanisms of limb position estimation in the primate brain.
Shi, Ying; Buneo, Christopher A
2011-01-01
Understanding the neural mechanisms of limb position estimation is important both for comprehending the neural control of goal-directed arm movements and for developing neuroprosthetic systems designed to replace lost limb function. Here we examined the role of area 5 of the posterior parietal cortex in estimating limb position based on visual and somatic (proprioceptive, efference copy) signals. Single-unit recordings were obtained as monkeys reached to visual targets presented in a semi-immersive virtual reality environment. On half of the trials, animals were required to maintain their limb position at these targets while receiving both visual and non-visual feedback of their arm position, while on the other trials visual feedback was withheld. When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace, but very few neurons modulated their firing rates based on the presence/absence of visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level.
Strathearn, Lane; Kim, Sohye; Bastian, D Anthony; Jung, Jennifer; Iyengar, Udita; Martinez, Sheila; Goin-Kochel, Robin P; Fonagy, Peter
2018-05-01
Several studies have suggested that the neuropeptide oxytocin may enhance aspects of social communication in autism. Little is known, however, about its effects on nonsocial manifestations, such as restricted interests and repetitive behaviors. In the empathizing-systemizing theory of autism, social deficits are described along the continuum of empathizing ability, whereas nonsocial aspects are characterized in terms of an increased preference for patterned or rule-based systems, called systemizing. We therefore developed an automated eye-tracking task to test whether children and adolescents with autism spectrum disorder (ASD) compared to matched controls display a visual preference for more highly organized and structured (systemized) real-life images. Then, as part of a randomized, double-blind, placebo-controlled crossover study, we examined the effect of intranasal oxytocin on systemizing preferences in 16 male children with ASD, compared with 16 matched controls. Participants viewed 14 slides, each containing four related pictures (e.g., of people, animals, scenes, or objects) that differed primarily on the degree of systemizing. Visual systemizing preference was defined in terms of the fixation time and count for each image. Unlike control subjects who showed no gaze preference, individuals with ASD preferred to fixate on more highly systemized pictures. Intranasal oxytocin eliminated this preference in ASD participants, who now showed a similar response to control subjects on placebo. In contrast, control participants increased their visual preference for more systemized images after receiving oxytocin versus placebo. These results suggest that, in addition to its effects on social communication, oxytocin may play a role in some of the nonsocial manifestations of autism.
Fostering Kinship with Animals: Animal Portraiture in Humane Education
ERIC Educational Resources Information Center
Kalof, Linda; Zammit-Lucia, Joe; Bell, Jessica; Granter, Gina
2016-01-01
Visual depictions of animals can alter human perceptions of, emotional responses to, and attitudes toward animals. Our study addressed the potential of a slideshow designed to activate emotional responses to animals to foster feelings of kinship with them. The personal meaning map measured changes in perceptions of animals. The participants were…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, D.B.; Grace, J.D.
1996-12-31
Petroleum system studies provide an ideal application for the combination of Geographic Information System (GIS) and multimedia technologies. GIS technology is used to build and maintain the spatial and tabular data within the study region. Spatial data may comprise the zones of active source rocks and potential reservoir facies. Similarly, tabular data include the attendant source rock parameters (e.g. pyrolysis results, organic carbon content) and field-level exploration and production histories for the basin. Once the spatial and tabular database has been constructed, GIS technology is useful in finding favorable exploration trends, such as zones of high organic content and mature source rocks in positions adjacent to sealed, high-porosity reservoir facies. Multimedia technology provides powerful visualization tools for petroleum system studies. The components of petroleum system development, most importantly generation, migration and trap development, typically span periods of tens to hundreds of millions of years. The ability to animate spatial data over time provides an insightful alternative for studying the development of processes which are only captured in "snapshots" by static maps. New multimedia-authoring software provides this temporal dimension. The ability to record these data on CD-ROMs and allow user interactivity further leverages the combination of spatial databases, tabular databases and time-based animations. The example used for this study was the Bazhenov-Neocomian petroleum system of West Siberia.
A Novel, Real-Time, In Vivo Mouse Retinal Imaging System.
Butler, Mark C; Sullivan, Jack M
2015-11-01
To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and to assess system capabilities by evaluating various animal models. Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination, due to off-axis illumination and poor optical-train optimization. We therefore designed a paraxial illumination system for a Greenough-type stereo dissecting microscope, incorporating an optimized optical launch and an efficiently coupled fiber-optic delivery system. Excitation and emission filters control spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated for by eye positioning in order to observe the entire retina. The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time intravenous fluorescein angiography in the live mouse has been achieved. A novel device is established for real-time viewing and image capture of the small-animal retina during subretinal injections for preclinical gene therapy studies.
Dual-modality imaging of function and physiology
NASA Astrophysics Data System (ADS)
Hasegawa, Bruce H.; Iwata, Koji; Wong, Kenneth H.; Wu, Max C.; Da Silva, Angela; Tang, Hamilton R.; Barber, William C.; Hwang, Andrew B.; Sakdinawat, Anne E.
2002-04-01
Dual-modality imaging is a technique in which computed tomography or magnetic resonance imaging is combined with positron emission tomography or single-photon emission computed tomography to acquire structural and functional images with an integrated system. The data are acquired during a single procedure with the patient on a table viewed by both detectors, to facilitate correlation between the structural and functional images. The resulting data can be useful for localization, allowing more specific diagnosis of disease. In addition, the anatomical information can be used to compensate the correlated radionuclide data for physical perturbations such as photon attenuation, scattered radiation, and partial-volume errors. Thus, dual-modality imaging provides a priori information that can be used to improve both the visual quality and the quantitative accuracy of the radionuclide images. Dual-modality imaging systems also are being developed for biological research that involves small animals. The small-animal dual-modality systems offer advantages for measurements that currently are performed invasively using autoradiography and tissue sampling. By acquiring the required data noninvasively, dual-modality imaging has the potential to allow serial studies in a single animal, to perform measurements with fewer animals, and to improve the statistical quality of the data.
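The attenuation compensation mentioned above can be sketched in a few lines: a CT-derived attenuation map gives, via the Beer-Lambert law, the fraction of emitted photons that survive the path to the detector, and its reciprocal is a multiplicative correction for the radionuclide counts. A minimal 2-D sketch (the geometry, names, and coefficient values are our own assumptions, not from the abstract):

```python
import numpy as np

# CT-derived attenuation map (mu, in 1/cm); a soft-tissue-like block in the
# middle of an otherwise empty field of view. 0.15 /cm is a rough value for
# soft tissue at ~140 keV (an assumption for illustration).
mu_map = np.zeros((64, 64))
mu_map[16:48, 16:48] = 0.15
dx = 0.1  # pixel size in cm

def attenuation_factor(mu_map, row, dx):
    """Survival probability exp(-sum(mu * dx)) for photons emitted at each
    pixel of `row`, travelling toward a detector at the right-hand edge."""
    mu_row = mu_map[row]
    # cumulative attenuation from each pixel out to the detector (suffix sums)
    path = np.cumsum(mu_row[::-1])[::-1] * dx
    return np.exp(-path)

detected = attenuation_factor(mu_map, row=32, dx=dx)
corrected = 1.0 / detected  # multiplicative correction for emission counts
```

Pixels deeper inside the attenuating block receive a larger correction, which is the essence of using the anatomical image to compensate the radionuclide data.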
Andersen, Douglas C.
1990-01-01
The influence of habitat patchiness and unpalatable plants on the search path of the plains pocket gopher (Geomys bursarius) was examined in outdoor enclosures. Separate experiments were used to evaluate how individual animals explored (by tunnel excavation) enclosures free of plants except for one or more dense patches of a palatable plant (Daucus carota), a dense patch of an unpalatable species (Pastinaca sativa) containing a few palatable plants (D. carota), or a relatively sparse mixture of palatable (D. carota) and unpalatable (Raphanus sativus) species. Only two of eight individuals tested showed the predicted pattern of concentrating search effort in patches of palatable plants. The maintenance of relatively high levels of effort in less profitable sites may reflect the security afforded food resources by the solitary social system and fossorial lifestyle of G. bursarius. Unpalatable plants repelled animals under some conditions, but search paths in the sparsely planted mixed-species treatment suggest animals can use visual or other cues to orient excavations. Evidence supporting area-restricted search was weak. More information about the use of visual cues by G. bursarius and the influence of experience on individual search mode is needed for refining current models of foraging behavior in this species.
A review of the evolution of animal colour vision and visual communication signals.
Osorio, D; Vorobyev, M
2008-09-01
The visual displays of animals and plants are often colourful, and colour vision allows animals to respond to these signals as they forage for food, choose mates and so forth. This article discusses the evolutionary relationship between the photoreceptor spectral sensitivities of four groups of land animals (birds, butterflies, primates, and hymenopteran insects, i.e. bees and wasps), the colour signals that are relevant to them, and how understanding is informed by models of spectral coding and colour vision. Although the spectral sensitivities of photoreceptors are known to vary adaptively under natural selection, there is little evidence that those of hymenopterans, birds and primates are specifically adapted to the reflectance spectra of food plants or animal visual signals. On the other hand, the colours of fruit, flowers and feathers may have evolved to be more discriminable for the colour vision of their natural receivers than for other groups of animals. Butterflies are unusual in that they have enjoyed a major radiation in receptor numbers and spectral sensitivities. The reasons for the radiation and diversity of butterfly colour vision remain unknown, but may include their need to find food plants and to select mates.
Chiba, T; Ohi, R
1998-01-01
Short-gut syndrome is likely to impair enteric fat utilization. This study was undertaken to develop a clinical test of lipid absorption without fecal collection. The absorption of an enterally fed radioactive long-chain fatty acid, beta-methyl-p-(123I)-iodophenylpentadecanoic acid, was investigated with continuous chyle collection in rats. The changes in excretion and time-dependent biodistribution of radioactivity of the enterally fed agent were assessed in normal control animals. Similarly, sequential urinary excretion and biodistribution were studied along with scintigraphy using sham-operated and short-gut animals. Approximately 64% of the enterally fed radioactivity was recovered in the collected chyle (24 hours). A comparison of normal control, sham-operated, and short-gut animals showed significantly less urinary and greater fecal excretion of radioactivity in short-gut animals. With the use of sequential scintigraphy, the small intestine, whole-body soft tissues, and urinary bladder were well visualized in sham-operated animals, whereas the large intestine and feces were demonstrated earlier in short-gut animals. Our results suggest that enteral feeding of the agent might be feasible for determining lipid absorption from the dynamic changes of radioactivity in visualized abdominal organs and in urine.
Applied estimation for hybrid dynamical systems using perceptional information
NASA Astrophysics Data System (ADS)
Plotnik, Aaron M.
This dissertation uses the motivating example of robotic tracking of mobile deep ocean animals to present innovations in robotic perception and estimation for hybrid dynamical systems. An approach to estimation for hybrid systems is presented that utilizes uncertain perceptional information about the system's mode to improve tracking of its mode and continuous states. This results in significant improvements in situations where previously reported methods of estimation for hybrid systems perform poorly due to poor distinguishability of the modes. The specific application that motivates this research is an automatic underwater robotic observation system that follows and films individual deep ocean animals. A first version of such a system has been developed jointly by the Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute (MBARI). This robotic observation system is successfully fielded on MBARI's ROVs, but agile specimens often evade the system. When a human ROV pilot performs this task, one advantage that he has over the robotic observation system in these situations is the ability to use visual perceptional information about the target, immediately recognizing any changes in the specimen's behavior mode. With the approach of the human pilot in mind, a new version of the robotic observation system is proposed which is extended to (a) derive perceptional information (visual cues) about the behavior mode of the tracked specimen, and (b) merge this dissimilar, discrete and uncertain information with more traditional continuous noisy sensor data by extending existing algorithms for hybrid estimation. These performance enhancements are enabled by integrating techniques in hybrid estimation, computer vision and machine learning. First, real-time computer vision and classification algorithms extract a visual observation of the target's behavior mode. 
Existing hybrid estimation algorithms are extended to admit this uncertain but discrete observation, complementing the information available from more traditional sensors. State tracking is achieved using a new form of Rao-Blackwellized particle filter called the mode-observed Gaussian Particle Filter. Performance is demonstrated using data from simulation and data collected on actual specimens in the ocean. The framework for estimation using both traditional and perceptional information is easily extensible to other stochastic hybrid systems with mode-related perceptional observations available.
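The fusion of a discrete, uncertain mode observation with continuous sensor data can be sketched with a toy particle filter in which each particle carries both a continuous state and a behavior mode, and the visual classifier's output reweights particles through its confusion matrix. This is our own simplified illustration, not the dissertation's algorithm (the mode-observed Gaussian Particle Filter additionally Rao-Blackwellizes the continuous state); all dynamics, noise levels, and confusion-matrix values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hybrid system: a target moves in 1-D under one of two behavior modes
# with different nominal velocities (e.g. "drifting" vs. "swimming").
MODES = {0: 0.0, 1: 1.0}   # mode -> nominal velocity
P_SWITCH = 0.05            # per-step mode transition probability
MEAS_STD = 0.5             # continuous position-sensor noise (std dev)
# Confusion matrix of the hypothetical visual mode classifier:
# CONF[i, j] = P(classifier reports j | true mode is i)
CONF = np.array([[0.8, 0.2],
                 [0.2, 0.8]])

def step_particles(x, m, w, z_pos, z_mode):
    """One predict/update/resample cycle of a particle filter whose particles
    carry a continuous state x and a discrete mode m, fusing a continuous
    position measurement z_pos with a discrete mode observation z_mode."""
    # predict: random mode switches, then mode-dependent motion
    switch = rng.random(m.size) < P_SWITCH
    m = np.where(switch, 1 - m, m)
    x = x + np.vectorize(MODES.get)(m) + rng.normal(0, 0.1, x.size)
    # update: continuous (Gaussian) likelihood times discrete
    # (perceptional) likelihood from the classifier's confusion matrix
    w = w * np.exp(-0.5 * ((z_pos - x) / MEAS_STD) ** 2)
    w = w * CONF[m, z_mode]
    w = w / w.sum()
    # resample
    idx = rng.choice(x.size, size=x.size, p=w)
    return x[idx], m[idx], np.full(x.size, 1.0 / x.size)

# One step: the target is truly in mode 1 near x = 1, and the classifier
# also reports mode 1; the particle cloud should concentrate on mode 1.
x = rng.normal(0.0, 0.2, 500)
m = rng.integers(0, 2, 500)
w = np.full(500, 1.0 / 500)
x, m, w = step_particles(x, m, w, z_pos=1.0, z_mode=1)
```

Even with an imperfect classifier, the discrete observation sharpens the mode posterior in exactly the situations the text describes, where the continuous measurements alone distinguish the modes poorly.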
Tracking animals in freshwater with electronic tags: past, present and future
Cooke, Steven J.; Midwood, Jonathan D.; Thiem, Jason D.; Klimley, Peter; Lucas, Martyn C.; Thorstad, Eva B.; Eiler, John; Holbrook, Chris; Ebner, Brendan C.
2013-01-01
Considerable technical developments over the past half century have enabled widespread application of electronic tags to the study of animals in the wild, including in freshwater environments. We review the constraints associated with freshwater telemetry and biologging and the technical developments relevant to their use. Technical constraints for tracking animals are often influenced by the characteristics of the animals being studied and the environment they inhabit. Collectively, they influence which and how technologies can be used and their relative effectiveness. Although radio telemetry has historically been the most commonly used technology in freshwater, passive integrated transponder (PIT) technology, acoustic telemetry and biologgers are becoming more popular. Most telemetry studies have focused on fish, although an increasing number have focused on other taxa, such as turtles, crustaceans and molluscs. Key technical developments for freshwater systems include: miniaturization of tags for tracking small-size life stages and species, fixed stations and coded tags for tracking large samples of animals over long distances and large temporal scales, inexpensive PIT systems that enable mass tagging to yield population- and community-level relevant sample sizes, incorporation of sensors into electronic tags, validation of tag attachment procedures with a focus on maintaining animal welfare, incorporation of different techniques (for example, genetics, stable isotopes) and peripheral technologies (for example, geographic information systems, hydroacoustics), development of novel analytical techniques, and extensive international collaboration. Innovations are still needed in tag miniaturization, data analysis and visualization, and in tracking animals over larger spatial scales (for example, pelagic areas of lakes) and in challenging environments (for example, large dynamic floodplain systems, under ice). 
There seems to be a particular need for adapting various global positioning system and satellite tagging approaches to freshwater. Electronic tagging provides a mechanism to collect detailed information from imperilled animals and species that have no direct economic value. Current and future advances will continue to improve our knowledge of the natural history of aquatic animals and ecological processes in freshwater ecosystems while facilitating evidence-based resource management and conservation.
Nitzsche, Björn; Lobsien, Donald; Seeger, Johannes; Schneider, Holm; Boltze, Johannes
2014-01-01
Cerebrovascular diseases are significant causes of death and disability in humans. Improvements in diagnostic and therapeutic approaches strongly rely on adequate gyrencephalic large animal models, which are in demand for translational research. Ovine stroke models may represent a promising approach but are currently limited by insufficient knowledge regarding the venous system of the cerebral angioarchitecture. The present study was intended to provide a comprehensive anatomical analysis of the intracranial venous system in sheep as a reliable basis for the interpretation of experimental results in such ovine models. We used corrosion casts as well as contrast-enhanced magnetic resonance venography to scrutinize blood drainage from the brain. This combined approach yielded detailed and, to some extent, novel findings. In particular, we provide evidence for chordae Willisii and lateral venous lacunae, and report on connections between the dorsal and ventral sinuses in this species. For the first time, we also describe venous confluences in the deep cerebral venous system and an ‘anterior condylar confluent’ as seen in humans. This report provides a detailed reference for the interpretation of venous diagnostic imaging findings in sheep, including an assessment of structure detectability by in vivo (imaging) versus ex vivo (corrosion cast) visualization methods. Moreover, it features a comprehensive interspecies comparison of the venous cerebral angioarchitecture in man, rodents, canines and sheep as a relevant large animal model species, and describes possible implications for translational cerebrovascular research. PMID:24736654
Ranganathan, Kavitha; Hong, Xiaowei; Cholok, David; Habbouche, Joe; Priest, Caitlin; Breuler, Christopher; Chung, Michael; Li, John; Kaura, Arminder; Hsieh, Hsiao Hsin Sung; Butts, Jonathan; Ucer, Serra; Schwartz, Ean; Buchman, Steven R; Stegemann, Jan P; Deng, Cheri X; Levi, Benjamin
2018-04-01
Early treatment of heterotopic ossification (HO) is currently limited by delayed diagnosis due to limited visualization at early time points. In this study, we validate the use of spectral ultrasound imaging (SUSI) in an animal model to detect HO as early as one week after burn tenotomy. Concurrent SUSI, micro CT, and histology at 1, 2, 4, and 9 weeks post-injury were used to follow the progression of HO after an Achilles tenotomy and 30% total body surface area burn (n=3-5 limbs per time point). To compare the use of SUSI in different types of injury models, mice (n=5 per group) underwent either burn/tenotomy or skin incision injury and were imaged using a 55 MHz probe on a VisualSonics VEVO 770 system at one week post-injury to evaluate the ability of SUSI to distinguish between edema and HO. Average acoustic concentration (AAC) and average scatterer diameter (ASD) were calculated for each ultrasound image frame. Micro CT was used to calculate the total volume of HO. Histology was used to confirm bone formation. Using SUSI, HO was visualized as early as 1 week after injury. HO was visualized earliest by 4 weeks after injury by micro CT. The average acoustic concentration of HO was 33% more than that of the control limb (n=5). Spectroscopic foci of HO present at 1 week that persisted throughout all time points correlated with the HO present at 9 weeks on micro CT imaging. SUSI visualizes HO as early as one week after injury in an animal model. SUSI represents a new imaging modality with promise for early diagnosis of HO. Copyright © 2018 Elsevier Inc. All rights reserved.
PLANETarium - Visualizing Earth Sciences in the Planetarium
NASA Astrophysics Data System (ADS)
Ballmer, M. D.; Wiethoff, T.; Kraupe, T. W.
2013-12-01
In the past decade, projection systems in most planetariums, traditional sites of outreach and public education, have advanced from instruments that visualize the motion of stars as beam spots moving over spherical projection surfaces to systems able to display multicolor, high-resolution, immersive full-dome videos and images. These extraordinary capabilities are ideally suited for visualization of global processes occurring on the surface and within the interior of the Earth, a spherical body just like the dome itself. So far, however, our community has largely ignored this wonderful interface for outreach and education. A few documentaries on, e.g., climate change or volcanic eruptions have been brought to planetariums, but they take little advantage of the true potential of the medium, as they are mostly based on standard two-dimensional videos and cartoon-style animations. Along these lines, we here propose a framework to convey recent scientific results on the origin and evolution of our PLANET to the >100,000,000 per-year worldwide audience of planetariums, making the traditionally astronomy-focused interface a true PLANETarium. To do this most efficiently, we intend to directly show visualizations of scientific datasets or models originally designed for basic research. Such visualizations in solid-Earth, as well as atmospheric and ocean sciences, are expected to be renderable to the dome with little or no effort. For example, showing global geophysical datasets (e.g., surface temperature, gravity, magnetic field), or horizontal slices of seismic-tomography images and of spherical computer simulations (e.g., climate evolution, mantle flow or ocean currents), requires almost no rendering at all. Three-dimensional Cartesian datasets or models can be rendered using standard methods.
With the appropriate audio support, present-day science visualizations are typically as intuitive as cartoon-style animations, yet more visually appealing and clearly more informative, as they reveal the complexity and beauty of our planet. In addition to, e.g., climate change and natural hazards, themes of interest may include the coupled evolution of the Earth's interior and life, from the accretion of our planet to the generation and sustainment of the magnetic field as well as of habitable conditions in the atmosphere and oceans. We believe that high-quality tax-funded science visualizations should not be used exclusively to facilitate communication among scientists, but should also be directly recycled to raise the public's awareness and appreciation of the geosciences.
Live Interrogation and Visualization of Earth Systems (LIVES)
NASA Astrophysics Data System (ADS)
Nunn, J. A.; Anderson, L. C.
2007-12-01
Twenty tablet PCs and associated peripherals acquired through a HP Technology for Teaching grant are being used to redesign two freshman laboratory courses as well as a sophomore geobiology course in Geology and Geophysics at Louisiana State University. The two introductory laboratories serve approximately 750 students per academic year including both majors and non-majors; the geobiology course enrolls about 35 students/year and is required for majors in the department's geology concentration. Limited enrollments and 3-hour labs make it possible to incorporate hands-on visualization, animation, GIS, manipulation of data and images, and access to geological data available online. Goals of the course redesigns include: enhancing visualization of earth materials, physical/chemical/biological processes, and biosphere/geosphere history; strengthening students' ability to acquire, manage, and interpret multifaceted geological information; fostering critical thinking, the scientific method, and an earth-system perspective in ancient and modern environments (such as coastal erosion and restoration in Louisiana or the Snowball Earth hypothesis); improving student communication skills; and increasing the quantity, quality, and diversity of students pursuing Earth Science careers. IT resources available in the laboratory provide students with sophisticated visualization tools, allowing them to switch between 2-D and 3-D reconstructions more seamlessly, and enabling them to manipulate larger integrated datasets, thus permitting more time for critical thinking and hypothesis testing. IT resources also enable faculty and students to simultaneously work with simulation software to animate earth processes such as plate motions or groundwater flow and immediately test hypotheses formulated in the data analysis. Finally, tablet PCs make data gathering and analysis possible outside a formal classroom.
As a result, students will achieve fluency in using visualization and technology for informal and formal scientific communication. The equipment and exercises developed also will be used in additional upper level undergraduate classes and two outreach programs: NSF funded Geoscience Alliance for Enhanced Minority Participation and Shell Foundation funded Shell Undergraduate Recruiting and Geoscience Education.
ERIC Educational Resources Information Center
Lin, Huifen; Chen, Tsuiping; Dwyer, Francis M.
2006-01-01
The purpose of this experimental study was to compare the effects of using static visuals versus computer-generated animation to enhance learners' comprehension and retention of a content-based lesson in a computer-based learning environment for learning English as a foreign language (EFL). Fifty-eight students from two EFL reading sections were…
Learning from Instructional Animations: How Does Prior Knowledge Mediate the Effect of Visual Cues?
ERIC Educational Resources Information Center
Arslan-Ari, I.
2018-01-01
The purpose of this study was to investigate the effects of cueing and prior knowledge on learning and mental effort of students studying an animation with narration. This study employed a 2 (no cueing vs. visual cueing) × 2 (low vs. high prior knowledge) between-subjects factorial design. The results revealed a significant interaction effect…
Visualizing protein interactions and dynamics: evolving a visual language for molecular animation.
Jenkinson, Jodie; McGill, Gaël
2012-01-01
Undergraduate biology education provides students with a number of learning challenges. Subject areas that are particularly difficult to understand include protein conformational change and stability, diffusion and random molecular motion, and molecular crowding. In this study, we examined the relative effectiveness of three-dimensional visualization techniques for learning about protein conformation and molecular motion in association with a ligand-receptor binding event. Increasingly complex versions of the same binding event were depicted in each of four animated treatments. Students (n = 131) were recruited from the undergraduate biology program at University of Toronto, Mississauga. Visualization media were developed in the Center for Molecular and Cellular Dynamics at Harvard Medical School. Stem cell factor ligand and cKit receptor tyrosine kinase were used as a classical example of a ligand-induced receptor dimerization and activation event. Each group completed a pretest, viewed one of four variants of the animation, and completed a posttest and, at 2 wk following the assessment, a delayed posttest. Overall, the most complex animation was the most effective at fostering students' understanding of the events depicted. These results suggest that, in select learning contexts, increasingly complex representations may be more desirable for conveying the dynamic nature of cell binding events.
ERIC Educational Resources Information Center
Huk, Thomas; Steinke, Mattias; Floto, Christian
2010-01-01
Within the framework of cognitive learning theories, instructional design manipulations have primarily been investigated under tightly controlled laboratory conditions. We carried out two experiments, where the first experiment was conducted in a restricted system-paced setting and is therefore in line with the majority of empirical studies in the…
Fornwall, M.; Gisiner, R.; Simmons, S. E.; Moustahfid, Hassan; Canonico, G.; Halpin, P.; Goldstein, P.; Fitch, R.; Weise, M.; Cyr, N.; Palka, D.; Price, J.; Collins, D.
2012-01-01
The US Integrated Ocean Observing System (IOOS) has recently adopted standards for biological core variables in collaboration with the US Geological Survey/Ocean Biogeographic Information System (USGS/OBIS-USA) and other federal and non-federal partners. In this Community White Paper (CWP) we provide a process to bring into IOOS a rich new source of biological observing data, visual line transect surveys, and to establish quality data standards for visual line transect observations, an important source of at-sea bird, turtle and marine mammal observation data. The processes developed through this exercise will be useful for other similar biogeographic observing efforts, such as passive acoustic point and line transect observations, tagged animal data, and mark-recapture (photo-identification) methods. Furthermore, we suggest that the processes developed through this exercise will serve as a catalyst for broadening involvement by the larger marine biological data community within the goals and processes of IOOS.
Predator confusion is sufficient to evolve swarming behaviour
Olson, Randal S.; Hintze, Arend; Dyer, Fred C.; Knoester, David B.; Adami, Christoph
2013-01-01
Swarming behaviours in animals have been extensively studied owing to their implications for the evolution of cooperation, social cognition and predator–prey dynamics. An important goal of these studies is discerning which evolutionary pressures favour the formation of swarms. One hypothesis is that swarms arise because the presence of multiple moving prey in swarms causes confusion for attacking predators, but it remains unclear how important this selective force is. Using an evolutionary model of a predator–prey system, we show that predator confusion provides a sufficient selection pressure to evolve swarming behaviour in prey. Furthermore, we demonstrate that the evolutionary effect of predator confusion on prey could in turn exert pressure on the structure of the predator's visual field, favouring the frontally oriented, high-resolution visual systems commonly observed in predators that feed on swarming animals. Finally, we provide evidence that when prey evolve swarming in response to predator confusion, there is a change in the shape of the functional response curve describing the predator's consumption rate as prey density increases. Thus, we show that a relatively simple perceptual constraint—predator confusion—could have pervasive evolutionary effects on prey behaviour, predator sensory mechanisms and the ecological interactions between predators and prey. PMID:23740485
Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon (e.g.) its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβposterior Kenyon cells.
ERIC Educational Resources Information Center
Halas, John
Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storybook and preproduction script can be prepared in visual terms; and it includes a…
NASA Astrophysics Data System (ADS)
McDougall, C.; McLaughlin, J.
2008-12-01
NOAA has developed several programs aimed at facilitating the use of earth system science data and data visualizations by formal and informal educators. One of them, Science On a Sphere, a visualization display tool and system that uses networked LCD projectors to display animated global datasets onto the outside of a suspended, 1.7-meter diameter opaque sphere, enables science centers, museums, and universities to display real-time and current earth system science data. NOAA's Office of Education has provided grants to such education institutions to develop exhibits featuring Science On a Sphere (SOS), create content for it, and evaluate audience impact. Currently, 20 public education institutions have permanent Science On a Sphere exhibits and 6 more will be installed soon. These institutions and others that are working to create and evaluate content for this system work collaboratively as a network to improve our collective knowledge about how to create educationally effective visualizations. Network members include other federal agencies, such as NASA and the Dept. of Energy, major museums such as the Smithsonian and the American Museum of Natural History, and a variety of mid-sized and small museums and universities. Although the audiences in these institutions vary widely in their scientific awareness and understanding, we find there are misconceptions and a lack of familiarity with viewing visualizations that are common among the audiences. Through evaluations performed in these institutions we continue to evolve our understanding of how to create content that is understandable by those with minimal scientific literacy. The findings from our network will be presented, including the importance of providing context, real-world connections and imagery to accompany the visualizations and the need for audience orientation before the visualizations are viewed.
Additionally, we will review the publicly accessible virtual library housing over 200 datasets for SOS and any other real or virtual globe. These datasets represent contributions from NOAA, NASA, Dept. of Energy, and the public institutions that are displaying the spheres.
Evolution of colour vision in mammals.
Jacobs, Gerald H
2009-10-12
Colour vision allows animals to reliably distinguish differences in the distributions of spectral energies reaching the eye. Although not universal, a capacity for colour vision is sufficiently widespread across the animal kingdom to provide prima facie evidence of its importance as a tool for analysing and interpreting the visual environment. The basic biological mechanisms on which vertebrate colour vision ultimately rests, the cone opsin genes and the photopigments they specify, are highly conserved. Within that constraint, however, the utilization of these basic elements varies in striking ways in that they appear, disappear and emerge in altered form during the course of evolution. These changes, along with other alterations in the visual system, have led to profound variations in the nature and salience of colour vision among the vertebrates. This article concerns the evolution of colour vision among the mammals, viewing that process in the context of relevant biological mechanisms, of variations in mammalian colour vision, and of the utility of colour vision.
The case for visual analytics of arsenic concentrations in foods.
Johnson, Matilda O; Cohly, Hari H P; Isokpehi, Raphael D; Awofolu, Omotayo R
2010-05-01
Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary in foods making it impractical and impossible to provide regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species.
Active training for amblyopia in adult rodents
Sale, Alessandro; Berardi, Nicoletta
2015-01-01
Amblyopia is the most widespread form of visual function impairment affecting one eye, with a prevalence of 1–5% in the total world population. Amblyopia is usually caused by an early functional imbalance between the two eyes, deriving from anisometropia, strabismus, or congenital cataract, leading to severe deficits in visual acuity, contrast sensitivity and stereopsis. While amblyopia can be efficiently treated in children, it becomes irreversible in adults, as a result of a dramatic decline in visual cortex plasticity which occurs at the end of the critical period (CP) in the primary visual cortex. Notwithstanding this widely accepted dogma, recent evidence from animal models and human patients has started to challenge this view, revealing a previously unsuspected possibility to enhance plasticity in the adult visual system and to achieve substantial visual function recovery. Among the newly proposed intervention strategies, non-invasive procedures based on environmental enrichment, physical exercise or visual perceptual learning (vPL) appear particularly promising in terms of future applicability in the clinical setting. In this survey, we will review recent literature concerning the application of these behavioral intervention strategies to the treatment of amblyopia, with a focus on possible underlying molecular and cellular mechanisms. PMID:26578911
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be more responsive to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle, and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
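The correlation-type EMD described in this abstract is commonly modeled as a Hassenstein–Reichardt correlator: each input channel is low-pass filtered (the abstract's 0.1 s time constant) and multiplied with the undelayed signal from a neighboring receptor, and the mirror-symmetric subunit is subtracted to yield a direction-selective output. The sketch below is a minimal, hypothetical illustration of that scheme, not the authors' actual model code; the function name and the two-detector edge stimulus are assumptions for demonstration.

```python
import numpy as np

def reichardt_emd(signal_left, signal_right, dt=0.01, tau=0.1):
    """Correlation-type elementary motion detector (Hassenstein-Reichardt).

    Each input is low-pass filtered (first-order, time constant tau, here
    0.1 s as in the abstract) and correlated with the undelayed signal from
    the neighboring receptor; subtracting the mirror-symmetric subunit
    yields a direction-selective response (positive for left-to-right).
    """
    alpha = dt / (tau + dt)  # first-order low-pass filter coefficient
    lp_left = np.zeros_like(signal_left)
    lp_right = np.zeros_like(signal_right)
    for t in range(1, len(signal_left)):
        lp_left[t] = lp_left[t - 1] + alpha * (signal_left[t] - lp_left[t - 1])
        lp_right[t] = lp_right[t - 1] + alpha * (signal_right[t] - lp_right[t - 1])
    # Opponent subtraction of the two multiplicative subunits
    return lp_left * signal_right - lp_right * signal_left

# A bright edge sweeping left-to-right past two receptors (in the paper the
# receptors would be spaced 0.3 degrees of visual angle apart):
dt = 0.01
t = np.arange(0.0, 2.0, dt)
left = (t > 0.5).astype(float)   # edge reaches the left receptor first...
right = (t > 0.6).astype(float)  # ...and the right receptor 0.1 s later
out = reichardt_emd(left, right, dt=dt)
print(out.sum() > 0)  # → True: net positive response in the preferred direction
```

Reversing the stimulus order (right receptor stimulated first) flips the sign of the summed response, which is the basic direction selectivity the model grid in the paper exploits; a full model would tile such detectors across the visual field and compare responses to prey, predator, and windblown-vegetation motion.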
Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans
2017-03-20
From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dangerous animals capture and maintain attention in humans.
Yorzinski, Jessica L; Penkunas, Michael J; Platt, Michael L; Coss, Richard G
2014-05-28
Predation is a major source of natural selection on primates and may have shaped attentional processes that allow primates to rapidly detect dangerous animals. Because ancestral humans were subjected to predation, a process that continues at very low frequencies, we examined the visual processes by which men and women detect dangerous animals (snakes and lions). We recorded the eye movements of participants as they detected images of a dangerous animal (target) among arrays of nondangerous animals (distractors) as well as detected images of a nondangerous animal (target) among arrays of dangerous animals (distractors). We found that participants were quicker to locate targets when the targets were dangerous animals compared with nondangerous animals, even when spatial frequency and luminance were controlled. The participants were slower to locate nondangerous targets because they spent more time looking at dangerous distractors, a process known as delayed disengagement, and looked at a larger number of dangerous distractors. These results indicate that dangerous animals capture and maintain attention in humans, suggesting that historical predation has shaped some facets of visual orienting and its underlying neural architecture in modern humans.
Ashtari, Manzar; Zhang, Hui; Cook, Philip A; Cyckowski, Laura L; Shindler, Kenneth S; Marshall, Kathleen A; Aravand, Puya; Vossough, Arastoo; Gee, James C; Maguire, Albert M; Baker, Chris I; Bennett, Jean
2015-07-15
Much of our knowledge of the mechanisms underlying plasticity in the visual cortex in response to visual impairment, vision restoration, and environmental interactions comes from animal studies. We evaluated human brain plasticity in a group of patients with Leber's congenital amaurosis (LCA), who regained vision through gene therapy. Using non-invasive multimodal neuroimaging methods, we demonstrated that reversing blindness with gene therapy promoted long-term structural plasticity in the visual pathways emanating from the treated retina of LCA patients. The data revealed improvements and normalization along the visual fibers corresponding to the site of retinal injection of the gene therapy vector carrying the therapeutic gene in the treated eye compared to the visual pathway for the untreated eye of LCA patients. After gene therapy, the primary visual pathways (for example, geniculostriate fibers) in the treated retina were similar to those of sighted control subjects, whereas the primary visual pathways of the untreated retina continued to deteriorate. Our results suggest that visual experience, enhanced by gene therapy, may be responsible for the reorganization and maturation of synaptic connectivity in the visual pathways of the treated eye in LCA patients. The interactions between the eye and the brain enabled improved and sustained long-term visual function in patients with LCA after gene therapy. Copyright © 2015, American Association for the Advancement of Science.
Mechanisms of Photoreceptor Patterning in Vertebrates and Invertebrates.
Viets, Kayla; Eldred, Kiara; Johnston, Robert J
2016-10-01
Across the animal kingdom, visual systems have evolved to be uniquely suited to the environments and behavioral patterns of different species. Visual acuity and color perception depend on the distribution of photoreceptor (PR) subtypes within the retina. Retinal mosaics can be organized into three broad categories: stochastic/regionalized, regionalized, and ordered. We describe here the retinal mosaics of flies, zebrafish, chickens, mice, and humans, and the gene regulatory networks controlling proper PR specification in each. By drawing parallels in eye development between these divergent species, we identify a set of conserved organizing principles and transcriptional networks that govern PR subtype differentiation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Disruption of visual circuit formation and refinement in a mouse model of autism
Khanbabaei, Maryam; Murari, Kartikeya; Rho, Jong M.
2016-01-01
Aberrant connectivity is believed to contribute to the pathophysiology of autism spectrum disorder (ASD). Recent neuroimaging studies have increasingly identified such impairments in patients with ASD, including alterations in sensory systems. However, the cellular substrates and molecular underpinnings of disrupted connectivity remain poorly understood. Utilizing eye-specific segregation in the dorsal lateral geniculate nucleus (dLGN) as a model system, we investigated the formation and refinement of precise patterning of synaptic connections in the BTBR T + tf/J (BTBR) mouse model of ASD. We found that at the neonatal stage, the shape of the dLGN occupied by retinal afferents was altered in the BTBR group compared to C57BL/6J (B6) animals. Notably, the degree of overlap between the ipsi- and contralateral afferents was significantly greater in the BTBR mice. Moreover, these abnormalities continued into the mature stage in the BTBR animals, suggesting persistent deficits rather than delayed maturation of axonal refinement. Together, these results indicate disrupted connectivity at the synaptic patterning level in the BTBR mice, suggesting that in general, altered neural circuitry may contribute to autistic behaviours seen in this animal model. In addition, these data are consistent with the notion that lower-level, primary processing mechanisms contribute to altered visual perception in ASD. Autism Res 2017, 10: 212–223. © 2016 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research. PMID:27529416
The Astronomy Workshop: Scientific Notation and Solar System Visualizer
NASA Astrophysics Data System (ADS)
Deming, Grace; Hamilton, D.; Hayes-Gehrke, M.
2008-09-01
The Astronomy Workshop (http://janus.astro.umd.edu) is a collection of interactive World Wide Web tools that were developed under the direction of Doug Hamilton for use in undergraduate classes and by the general public. The philosophy of the site is to foster student interest in astronomy by exploiting their fascination with computers and the internet. We have expanded the "Scientific Notation" tool from simply converting decimal numbers into and out of scientific notation to adding, subtracting, multiplying, and dividing numbers expressed in scientific notation. Students practice these skills, and when confident they may complete a quiz. In addition, there are suggestions on how instructors may use the site to encourage students to practice these basic skills. The Solar System Visualizer animates orbits of planets, moons, and rings to scale. Extrasolar planetary systems are also featured. This research was sponsored by NASA EPO grant NNG06GGF99G.
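The arithmetic practiced with the expanded Scientific Notation tool can be sketched in a few lines. The representation and function names below are illustrative only, not the Astronomy Workshop's actual implementation: numbers are kept as (mantissa, exponent) pairs with the mantissa normalized into [1, 10).

```python
def normalize(mantissa, exponent):
    """Shift the mantissa into [1, 10) and adjust the exponent to match."""
    if mantissa == 0:
        return 0.0, 0
    while abs(mantissa) >= 10:
        mantissa /= 10
        exponent += 1
    while abs(mantissa) < 1:
        mantissa *= 10
        exponent -= 1
    return mantissa, exponent

def multiply(a, b):
    """(m1 x 10^e1) * (m2 x 10^e2) = (m1*m2) x 10^(e1+e2), then normalize."""
    return normalize(a[0] * b[0], a[1] + b[1])

def add(a, b):
    """Rewrite both terms over the larger exponent, then sum the mantissas."""
    e = max(a[1], b[1])
    m = a[0] * 10 ** (a[1] - e) + b[0] * 10 ** (b[1] - e)
    return normalize(m, e)
```

For example, multiplying 3 x 10^8 by 2 x 10^-3 gives 6 x 10^5, and adding 1 x 10^3 to 5 x 10^2 gives 1.5 x 10^3, the same steps students carry out by hand before taking the quiz.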
Camouflage predicts survival in ground-nesting birds
Troscianko, Jolyon; Wilson-Aggarwal, Jared; Stevens, Martin; Spottiswoode, Claire N.
2016-01-01
Evading detection by predators is crucial for survival. Camouflage is therefore a widespread adaptation, but despite substantial research effort our understanding of different camouflage strategies has relied predominantly on artificial systems and on experiments disregarding how camouflage is perceived by predators. Here we show, for the first time in a natural system, that the survival probability of wild animals is directly related to their level of camouflage as perceived by the visual systems of their main predators. Ground-nesting plovers and coursers flee as threats approach, and their clutches were more likely to survive when their egg contrast matched their surrounds. In nightjars – which remain motionless as threats approach – clutch survival depended on plumage pattern matching between the incubating bird and its surrounds. Our findings highlight the importance of pattern- and luminance-based camouflage properties, and the effectiveness of modern techniques in capturing the adaptive properties of visual phenotypes. PMID:26822039
Camouflage predicts survival in ground-nesting birds.
Troscianko, Jolyon; Wilson-Aggarwal, Jared; Stevens, Martin; Spottiswoode, Claire N
2016-01-29
Evading detection by predators is crucial for survival. Camouflage is therefore a widespread adaptation, but despite substantial research effort our understanding of different camouflage strategies has relied predominantly on artificial systems and on experiments disregarding how camouflage is perceived by predators. Here we show, for the first time in a natural system, that the survival probability of wild animals is directly related to their level of camouflage as perceived by the visual systems of their main predators. Ground-nesting plovers and coursers flee as threats approach, and their clutches were more likely to survive when their egg contrast matched their surrounds. In nightjars - which remain motionless as threats approach - clutch survival depended on plumage pattern matching between the incubating bird and its surrounds. Our findings highlight the importance of pattern- and luminance-based camouflage properties, and the effectiveness of modern techniques in capturing the adaptive properties of visual phenotypes.
An Immersive VR System for Sports Education
NASA Astrophysics Data System (ADS)
Song, Peng; Xu, Shuhong; Fong, Wee Teck; Chin, Ching Ling; Chua, Gim Guan; Huang, Zhiyong
The development of new technologies has undoubtedly advanced modern education; among them, Virtual Reality (VR) technologies have made education more visually accessible for students. However, VR applications have focused on classroom education, and little research has been done on promoting sports education with VR technologies. In this paper, an immersive VR system is designed and implemented to create a more intuitive and visual way of teaching tennis. A scalable system architecture is proposed in addition to the hardware setup layout, which can be used for various immersive interactive applications such as architecture walkthroughs, military training simulations, other sports game simulations, interactive theaters, and telepresence exhibitions. A realistic interaction experience is achieved through accurate and robust hybrid tracking technology, while the virtual human opponent is animated in real time using shader-based skin deformation. Potential future extensions to improve the teaching/learning experience are also discussed.
Novel Visualization Approaches in Environmental Mineralogy
NASA Astrophysics Data System (ADS)
Anderson, C. D.; Lopano, C. L.; Hummer, D. R.; Heaney, P. J.; Post, J. E.; Kubicki, J. D.; Sofo, J. O.
2006-05-01
Communicating the complexities of atomic scale reactions between minerals and fluids is fraught with intrinsic challenges. For example, an increasing number of techniques are now available for the interrogation of dynamical processes at the mineral-fluid interface. However, the time-dependent behavior of atomic interactions between a solid and a liquid is often not adequately captured by two-dimensional line drawings or images. At the same time, the necessity for describing these reactions to general audiences is growing more urgent, as funding agencies are amplifying their encouragement to scientists to reach across disciplines and to justify their studies to public audiences. To overcome the shortcomings of traditional graphical representations, the Center for Environmental Kinetics Analysis is creating three-dimensional visualizations of experimental and simulated mineral reactions. These visualizations are then displayed on a stereo 3D projection system called the GeoWall. Made possible (and affordable) by recent improvements in computer and data projector technology, the GeoWall system uses a combination of computer software and hardware, polarizing filters and polarizing glasses, to present visualizations in true 3D. The three-dimensional views greatly improve comprehension of complex multidimensional data, and animations of time series foster better understanding of the underlying processes. The visualizations also offer an effective means to communicate the complexities of environmental mineralogy to colleagues, students and the public. Here we present three different kinds of datasets that demonstrate the effectiveness of the GeoWall in clarifying complex environmental reactions at the atomic scale. First, a time-resolved series of diffraction patterns obtained during the hydrothermal synthesis of metal oxide phases from precursor solutions can be viewed as a surface with interactive controls for peak scaling and color mapping. 
Second, the results of Rietveld analysis of cation exchange reactions in Mn oxides have provided three-dimensional difference Fourier maps. When stitched together in a temporal series, these offer an animated view of changes in atomic configurations during the process of exchange. Finally, molecular dynamics simulations are visualized as three-dimensional reactions between vibrating atoms in both the solid and the aqueous phases.
Visual images in Luigi Galvani's path to animal electricity.
Piccolino, Marco
2008-01-01
The scientific endeavor that led Luigi Galvani to his hypothesis of "animal electricity," i.e., of an electricity present in a condition of disequilibrium between the interior and the exterior of excitable animal fibers, is reviewed here with particular emphasis on the role played by visual images in Galvani's path of discovery. In 1791 Galvani formulated his model of neuromuscular physiology on the basis of the image of a muscle and a nerve fiber together as in a "minute animal Leyden jar." This was the last instance of a series of physical models that accompanied Galvani's experimental efforts in search of a theory capable of accounting for the electric nature of nerve conduction in spite of the many objections formulated in the eighteenth century against a possible role of electricity in animal physiology.
Comprehending emergent systems phenomena through direct-manipulation animation
NASA Astrophysics Data System (ADS)
Aguirre, Priscilla Abel
This study seeks to understand the type of interaction mode that best supports learning and comprehension of emergent systems phenomena. Given that the literature has established that students hold robust misconceptions of such phenomena, this study investigates the influence of using three types of interaction: speed-manipulation animation (SMA), post-manipulation animation (PMA), and direct-manipulation animation (DMA) for increasing comprehension and testing transfer of the phenomena, by looking at the effect of simultaneous interaction of haptic and visual channels on long-term and working memory when seeking to comprehend emergent phenomena. The questions asked were: (1) Does the teaching of emergent phenomena, with the aid of a dynamic interactive modeling tool (i.e., SMA, PMA or DMA), improve students' mental model construction of systems, thus increasing comprehension of this scientific concept? And (2) does the teaching of emergent phenomena, with the aid of a dynamic interactive modeling tool, give the students the necessary complex cognitive skill which can then be applied to similar (near transfer) and/or novel, but different, (far transfer) scenarios? In an empirical study undergraduate and graduate students were asked to participate in one of three experimental conditions: SMA, PMA, or DMA. The study found that participants in the SMA treatment condition showed the greatest improvement in post-test scores. Students' understanding of the phenomena increased most when they used a dynamic model with few interactive elements (i.e., start, stop, and speed) that allowed for real-time visualization of one's interaction with the phenomena. Furthermore, no indication was found that the learning of emergent phenomena, with the aid of a dynamic interactive modeling tool, gave the students the necessary complex cognitive skill which could then be applied to similar (near transfer) and/or novel, but different, (far transfer) scenarios. 
Finally, besides treatment condition, gender and age were also shown to be predictors of score differences; overall, males did better than females, and younger students did better than older students.
ERIC Educational Resources Information Center
Munyofu, Mine
2008-01-01
The purpose of this study was to examine the instructional effectiveness of different levels of chunking (simple visual/text and complex visual/text), different forms of feedback (item-by-item feedback, end-of-test feedback and no feedback), and use of instructional gaming (game and no game) in complementing animated programmed instruction on a…
Real-time phase-contrast x-ray imaging: a new technique for the study of animal form and function
Socha, John J; Westneat, Mark W; Harrison, Jon F; Waters, James S; Lee, Wah-Keat
2007-01-01
Background Despite advances in imaging techniques, real-time visualization of the structure and dynamics of tissues and organs inside small living animals has remained elusive. Recently, we have been using synchrotron x-rays to visualize the internal anatomy of millimeter-sized opaque, living animals. This technique takes advantage of partially-coherent x-rays and diffraction to enable clear visualization of internal soft tissue not viewable via conventional absorption radiography. However, because higher quality images require greater x-ray fluxes, there exists an inherent tradeoff between image quality and tissue damage. Results We evaluated the tradeoff between image quality and harm to the animal by determining the impact of targeted synchrotron x-rays on insect physiology, behavior and survival. Using 25 keV x-rays at a flux density of 80 μW/mm², high quality video-rate images can be obtained without major detrimental effects on the insects for multiple minutes, a duration sufficient for many physiological studies. At this setting, insects do not heat up. Additionally, we demonstrate the range of uses of synchrotron phase-contrast imaging by showing high-resolution images of internal anatomy and observations of labeled food movement during ingestion and digestion. Conclusion Synchrotron x-ray phase contrast imaging has the potential to revolutionize the study of physiology and internal biomechanics in small animals. This is the only generally applicable technique that has the necessary spatial and temporal resolutions, penetrating power, and sensitivity to soft tissue that is required to visualize the internal physiology of living animals on the scale from millimeters to microns. PMID:17331247
Hoffmann, Susanne; Vega-Zuniga, Tomas; Greiter, Wolfgang; Krabichler, Quirin; Bley, Alexandra; Matthes, Mariana; Zimmer, Christiane; Firzlaff, Uwe; Luksch, Harald
2016-11-01
The midbrain superior colliculus (SC) commonly features a retinotopic representation of visual space in its superficial layers, which is congruent with maps formed by multisensory neurons and motor neurons in its deep layers. Information flow between layers is suggested to enable the SC to mediate goal-directed orienting movements. While most mammals strongly rely on vision for orienting, some species such as echolocating bats have developed alternative strategies, which raises the question how sensory maps are organized in these animals. We probed the visual system of the echolocating bat Phyllostomus discolor and found that binocular high acuity vision is frontally oriented and thus aligned with the biosonar system, whereas monocular visual fields cover a large area of peripheral space. For the first time in echolocating bats, we could show that in contrast with other mammals, visual processing is restricted to the superficial layers of the SC. The topographic representation of visual space, however, followed the general mammalian pattern. In addition, we found a clear topographic representation of sound azimuth in the deeper collicular layers, which was congruent with the superficial visual space map and with a previously documented map of orienting movements. Especially for bats navigating at high speed in densely structured environments, it is vitally important to transfer and coordinate spatial information between sensors and motor systems. Here, we demonstrate first evidence for the existence of congruent maps of sensory space in the bat SC that might serve to generate a unified representation of the environment to guide motor actions. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The Double Star Orbit Initial Value Problem
NASA Astrophysics Data System (ADS)
Hensley, Hagan
2018-04-01
Many precise algorithms exist to find a best-fit orbital solution for a double star system given a good enough initial value. Desmos is an online graphing calculator tool with extensive capabilities to support animations and defining functions. It can provide a useful visual means of analyzing double star data to arrive at a best-guess approximation of the orbital solution. Such an approximation is required before a gradient-descent algorithm can be used to find the best-fit orbital solution for a binary system.
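The two-stage fitting idea, a visual initial guess refined by gradient descent, can be illustrated with a deliberately simplified model. The circular-orbit parameterization, function names, and step sizes below are assumptions made for this sketch, not the procedure used with Desmos:

```python
import math

def model_xy(t, period, radius, phase):
    """Toy circular 'orbit': position of the companion at time t."""
    angle = 2 * math.pi * t / period + phase
    return radius * math.cos(angle), radius * math.sin(angle)

def misfit(params, observations):
    """Sum of squared residuals between model and observed (t, x, y) points."""
    period, radius, phase = params
    total = 0.0
    for t, x, y in observations:
        mx, my = model_xy(t, period, radius, phase)
        total += (mx - x) ** 2 + (my - y) ** 2
    return total

def gradient_descent(initial, observations, step=1e-3, iters=2000, h=1e-6):
    """Numerical-gradient descent; converges only if `initial` is close enough."""
    params = list(initial)
    for _ in range(iters):
        base = misfit(params, observations)
        grad = []
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += h
            grad.append((misfit(bumped, observations) - base) / h)
        params = [p - step * g for p, g in zip(params, grad)]
    return params
```

A poor starting point leaves the descent stuck in a local minimum of the misfit surface, which is exactly why the visual best-guess stage matters.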
Klop, D; Engelbrecht, L
2013-12-01
This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.
Software Validation via Model Animation
NASA Technical Reports Server (NTRS)
Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.
2015-01-01
This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
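The comparison step described above can be sketched as follows. The toy kinematics functions stand in for the PVSio model evaluation and the production software, neither of which is shown in the abstract; names and tolerances are illustrative.

```python
def spec_position(x0, v, t):
    """Stand-in 'formal model': exact kinematic position x0 + v*t."""
    return x0 + v * t

def impl_position(x0, v, t, dt=1e-4):
    """Stand-in 'implementation': Euler stepping, which accumulates
    discretization and floating-point error."""
    x, elapsed = x0, 0.0
    while elapsed < t:
        x += v * dt
        elapsed += dt
    return x

def validate(cases, tol=1e-2):
    """Return the inputs on which model and implementation disagree
    beyond the tolerance; an empty list means the suite passed."""
    failures = []
    for x0, v, t in cases:
        if abs(spec_position(x0, v, t) - impl_position(x0, v, t)) > tol:
            failures.append((x0, v, t))
    return failures
```

Running `validate` over a suite of randomly generated cases mirrors the paper's workflow: agreement within tolerance raises confidence that the translation from formal model to code is faithful, while any returned failure pinpoints an input worth inspecting.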
Effectiveness of Program Visualization: A Case Study with the ViLLE Tool
ERIC Educational Resources Information Center
Rajala, Teemu; Laakso, Mikko-Jussi; Kaila, Erkki; Salakoski, Tapio
2008-01-01
Program visualization is one of the various methods developed over the years to aid novices with their difficulties in learning to program. It consists of different graphical--often animated--and textual objects, visualizing the execution of programs. The aim of program visualization is to enhance students' understanding of different areas of…
Data-Driven Geospatial Visual Analytics for Real-Time Urban Flooding Decision Support
NASA Astrophysics Data System (ADS)
Liu, Y.; Hill, D.; Rodriguez, A.; Marini, L.; Kooper, R.; Myers, J.; Wu, X.; Minsker, B. S.
2009-12-01
Urban flooding is responsible for the loss of life and property as well as the release of pathogens and other pollutants into the environment. Previous studies have shown that the spatial distribution of intense rainfall significantly impacts the triggering and behavior of urban flooding. However, no general-purpose tools yet exist for deriving rainfall data and rendering them in real-time at the resolution of hydrologic units used for analyzing urban flooding. This paper presents a new visual analytics system that derives and renders rainfall data from the NEXRAD weather radar system at the sewershed (i.e. urban hydrologic unit) scale in real-time for a Chicago stormwater management project. We introduce a lightweight Web 2.0 approach which takes advantage of scientific workflow management and publishing capabilities developed at NCSA (National Center for Supercomputing Applications), a streaming data-aware semantic content management repository, web-based Google Earth/Maps, and time-aware KML (Keyhole Markup Language). A collection of polygon-based virtual sensors is created from the NEXRAD Level II data using spatial, temporal and thematic transformations at the sewershed level in order to produce persistent virtual rainfall data sources for the animation. The animated, color-coded rainfall map for each sewershed can be played in real-time as a movie using time-aware KML inside the web browser-based Google Earth for visually analyzing the spatiotemporal patterns of rainfall intensity in the sewershed. Such a system provides valuable information for situational awareness and improved decision support during extreme storm events in an urban area. Our further work includes incorporating additional data (such as basement flooding event data) or physics-based predictive models that can be used for more integrated data-driven decision support.
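A minimal sketch of the time-aware KML idea described above: one Placemark per virtual sensor per radar scan, with a TimeSpan so that Google Earth animates the sequence. The element names follow the KML 2.2 specification, but the sewershed name, coordinates, timestamps, and style identifier are invented for the example.

```python
from xml.sax.saxutils import escape

def rainfall_placemark(name, coords, start, end, style_id):
    """One sewershed polygon, colored by a rainfall-intensity style,
    visible only during the [start, end] interval of the animation."""
    ring = " ".join(f"{lon},{lat},0" for lon, lat in coords)
    return (
        "<Placemark>"
        f"<name>{escape(name)}</name>"
        f"<TimeSpan><begin>{start}</begin><end>{end}</end></TimeSpan>"
        f"<styleUrl>#{style_id}</styleUrl>"
        "<Polygon><outerBoundaryIs><LinearRing>"
        f"<coordinates>{ring}</coordinates>"
        "</LinearRing></outerBoundaryIs></Polygon>"
        "</Placemark>"
    )
```

Concatenating one such placemark per sewershed per five-minute radar scan inside a KML Document yields a file that Google Earth's time slider plays as a rainfall movie.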
Hilgetag, C C; O'Neill, M A; Young, M P
2000-01-29
Neuroanatomists have described a large number of connections between the various structures of monkey and cat cortical sensory systems. Because of the complexity of the connection data, analysis is required to unravel what principles of organization they imply. To date, analysis of laminar origin and termination connection data to reveal hierarchical relationships between the cortical areas has been the most widely acknowledged approach. We programmed a network processor that searches for optimal hierarchical orderings of cortical areas given known hierarchical constraints and rules for their interpretation. For all cortical systems and all cost functions, the processor found a multitude of equally low-cost hierarchies. Laminar hierarchical constraints that are presently available in the anatomical literature were therefore insufficient to constrain a unique ordering for any of the sensory systems we analysed. Hierarchical orderings of the monkey visual system that have been widely reported, but which were derived by hand, were not among the optimal orderings. All the cortical systems we studied displayed a significant degree of hierarchical organization, and the anatomical constraints from the monkey visual and somato-motor systems were satisfied with very few constraint violations in the optimal hierarchies. The visual and somato-motor systems in that animal were therefore surprisingly strictly hierarchical. Most inconsistencies between the constraints and the hierarchical relationships in the optimal structures for the visual system were related to connections of area FST (fundus of superior temporal sulcus). We found that the hierarchical solutions could be further improved by assuming that FST consists of two areas, which differ in the nature of their projections. 
Indeed, we found that perfect hierarchical arrangements of the primate visual system, without any violation of anatomical constraints, could be obtained under two reasonable conditions, namely the subdivision of FST into two distinct areas, whose connectivity we predict, and the abolition of at least one of the less reliable rule constraints. Our analyses showed that the future collection of the same type of laminar constraints, or the inclusion of new hierarchical constraints from thalamocortical connections, will not resolve the problem of multiple optimal hierarchical representations for the primate visual system. Further data, however, may help to specify the relative ordering of some more areas. This indeterminacy of the visual hierarchy is in part due to the reported absence of some connections between cortical areas. These absences are consistent with limited cross-talk between differentiated processing streams in the system. Hence, hierarchical representation of the visual system is affected by, and must take into account, other organizational features, such as processing streams.
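A toy version of the hierarchy search makes the multiple-optima finding easy to reproduce: each constraint is a (lower, higher) pair of areas, the cost of a candidate ordering is its number of violated constraints, and exhaustive search returns every minimum-cost ordering. The area names and constraints below are illustrative, not the actual anatomical data set, and the real analysis used a heuristic network processor rather than brute force.

```python
from itertools import permutations

def cost(level, constraints):
    """Number of constraints (lo, hi) violated by the level assignment."""
    return sum(1 for lo, hi in constraints if level[lo] >= level[hi])

def optimal_hierarchies(areas, constraints):
    """Exhaustively score all orderings; return the minimum cost and
    every ordering that achieves it."""
    best_cost, best = None, []
    for order in permutations(areas):
        level = {a: i for i, a in enumerate(order)}
        c = cost(level, constraints)
        if best_cost is None or c < best_cost:
            best_cost, best = c, [order]
        elif c == best_cost:
            best.append(order)
    return best_cost, best
```

Even this tiny example exhibits the paper's central observation: when an area is underconstrained, many distinct orderings tie at the minimum cost, so the available constraints cannot single out a unique hierarchy.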
Cardiovascular function during sustained +Gz stress
NASA Technical Reports Server (NTRS)
Erickson, H. H.; Sandler, H.; Stone, H. L.
1976-01-01
The development of aerospace systems capable of very high levels of positive vertical acceleration (+Gz) stress has created a need for a better understanding of the cardiovascular responses to acceleration. Using a canine model, the heart and cardiovascular system were instrumented to continuously measure coronary blood flow, cardiac output, left ventricular and aortic root pressure, and oxygen saturation in the aorta, coronary sinus, and right ventricle. The animals were exposed to acceleration profiles up to +6 Gz, with 120 s at peak G; a seatback angle of 45 deg was simulated in some experiments. Radiopaque contrast medium was injected to visualize the left ventricular chamber, coronary vasculature, aorta, and branches of the aorta. The results suggest mechanisms responsible for the arrhythmias that may occur and for the subendocardial hemorrhage that has been reported in other animals.
Visual orientation and navigation in nocturnal arthropods.
Warrant, Eric; Dacke, Marie
2010-01-01
With their highly sensitive visual systems, the arthropods have evolved a remarkable capacity to orient and navigate at night. Whereas some navigate under the open sky, and take full advantage of the celestial cues available there, others navigate in more difficult conditions, such as through the dense understory of a tropical rainforest. Four major classes of orientation are performed by arthropods at night, some of which involve true navigation (i.e. travel to a distant goal that lies beyond the range of direct sensory contact): (1) simple straight-line orientation, typically for escape purposes; (2) nightly short-distance movements relative to a shoreline, typically in the context of feeding; (3) long-distance nocturnal migration at high altitude in the quest to locate favorable feeding or breeding sites, and (4) nocturnal excursions to and from a fixed nest or food site (i.e. homing), a task that in most species involves path integration and/or the learning and recollection of visual landmarks. These four classes of orientation--and their visual basis--are reviewed here, with special emphasis given to the best-understood animal systems that are representative of each. 2010 S. Karger AG, Basel.
Schwarz, Sebastian; Albert, Laurence; Wystrach, Antoine; Cheng, Ken
2011-03-15
Many animal species, including some social hymenoptera, use the visual system for navigation. Although the insect compound eyes have been well studied, less is known about the second visual system in some insects, the ocelli. Here we demonstrate navigational functions of the ocelli in the visually guided Australian desert ant Melophorus bagoti. These ants are known to rely on both visual landmark learning and path integration. We conducted experiments to reveal the role of ocelli in the perception and use of celestial compass information and landmark guidance. Ants with directional information from their path integration system were tested with covered compound eyes and open ocelli on an unfamiliar test field where only celestial compass cues were available for homing. These full-vector ants, using only their ocelli for visual information, oriented significantly towards the fictive nest on the test field, indicating the use of celestial compass information that is presumably based on polarised skylight, the sun's position or the colour gradient of the sky. Ants without any directional information from their path-integration system (zero-vector) were tested, also with covered compound eyes and open ocelli, on a familiar training field where they have to use the surrounding panorama to home. These ants failed to orient significantly in the homeward direction. Together, our results demonstrated that M. bagoti could perceive and process celestial compass information for directional orientation with their ocelli. In contrast, the ocelli do not seem to contribute to terrestrial landmark-based navigation in M. bagoti.
Jeong, Jeho; Chen, Qing; Febo, Robert; Yang, Jie; Pham, Hai; Xiong, Jian-Ping; Zanzonico, Pat B.; Deasy, Joseph O.; Humm, John L.; Mageras, Gig S.
2016-01-01
Although spatially precise systems are now available for small-animal irradiations, there are currently limited software tools available for treatment planning for such irradiations. We report on the adaptation, commissioning, and evaluation of a 3-dimensional treatment planning system for use with a small-animal irradiation system. The 225-kV X-ray beam of the X-RAD 225Cx microirradiator (Precision X-Ray) was commissioned using both an ion chamber and radiochromic film for 10 different collimators ranging in field size from 1 mm in diameter to 40 × 40 mm². A clinical 3-dimensional treatment planning system (Metropolis) developed at our institution was adapted to small-animal irradiation by making it compatible with the dimensions of mice and rats, modeling the microirradiator beam orientations and collimators, and incorporating the measured beam data for dose calculation. Dose calculations in Metropolis were verified by comparison with measurements in phantoms. Treatment plans for irradiation of a tumor-bearing mouse were generated with both the Metropolis and the vendor-supplied software. The calculated beam-on times and the plan evaluation tools were compared. The dose rate at the central axis ranges from 74 to 365 cGy/min depending on the collimator size. Doses calculated with Metropolis agreed with phantom measurements within 3% for all collimators. The beam-on times calculated by Metropolis and the vendor-supplied software agreed within 1% at the isocenter. The modified 3-dimensional treatment planning system provides better visualization of the relationship between the X-ray beams and the small-animal anatomy as well as more complete dosimetric information on target tissues and organs at risk. It thereby enhances the potential of image-guided microirradiator systems for evaluation of dose–response relationships and for preclinical experimentation generally. PMID:25948321
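The beam-on-time check reported above reduces to prescribed dose divided by the collimator-dependent dose rate at the central axis. The table entries in this sketch are invented for illustration, chosen within the 74 to 365 cGy/min range quoted in the abstract; they are not the commissioned X-RAD 225Cx values.

```python
# Hypothetical central-axis dose rates (cGy/min) per collimator, for illustration.
DOSE_RATE_CGY_PER_MIN = {
    "1 mm circle": 74.0,
    "40x40 mm": 365.0,
}

def beam_on_time_min(dose_cgy, collimator):
    """Minutes of beam-on time needed to deliver `dose_cgy` at the
    central axis with the chosen collimator: time = dose / dose rate."""
    return dose_cgy / DOSE_RATE_CGY_PER_MIN[collimator]
```

Comparing such a hand computation against the planning system's reported beam-on time is the kind of 1% consistency check described in the abstract.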
Bublitz, Alexander; Weinhold, Severine R.; Strobel, Sophia; Dehnhardt, Guido; Hanke, Frederike D.
2017-01-01
Octopuses (Octopus vulgaris) are generally considered to possess extraordinary cognitive abilities including the ability to successfully perform in a serial reversal learning task. During reversal learning, an animal is presented with a discrimination problem and after reaching a learning criterion, the signs of the stimuli are reversed: the former positive becomes the negative stimulus and vice versa. If an animal improves its performance over reversals, it is ascribed advanced cognitive abilities. Reversal learning has been tested in octopus in a number of studies. However, the experimental procedures adopted in these studies involved pre-training on the new positive stimulus after a reversal, strong negative reinforcement or might have enabled secondary cueing by the experimenter. These procedures could have all affected the outcome of reversal learning. Thus, in this study, serial visual reversal learning was revisited in octopus. We trained four common octopuses (O. vulgaris) to discriminate between 2-dimensional stimuli presented on a monitor in a simultaneous visual discrimination task and reversed the signs of the stimuli each time the animals reached the learning criterion of ≥80% in two consecutive sessions. The animals were trained using operant conditioning techniques including a secondary reinforcer, a rod that was pushed up and down the feeding tube, which signaled the correctness of a response and preceded the subsequent primary reinforcement of food. The experimental protocol did not involve negative reinforcement. One animal completed four reversals and showed progressive improvement, i.e., it decreased its errors to criterion the more reversals it experienced. This animal developed a generalized response strategy. In contrast, another animal completed only one reversal, whereas two animals did not learn to reverse during the first reversal. 
In conclusion, some octopus individuals can learn to reverse in a visual task demonstrating behavioral flexibility even with a refined methodology. PMID:28223940
Developing and Evaluating Animations for Teaching Quantum Mechanics Concepts
ERIC Educational Resources Information Center
Kohnle, Antje; Douglass, Margaret; Edwards, Tom J.; Gillies, Alastair D.; Hooley, Christopher A.; Sinclair, Bruce D.
2010-01-01
In this paper, we describe animations and animated visualizations for introductory and intermediate-level quantum mechanics instruction developed at the University of St Andrews. The animations aim to help students build mental representations of quantum mechanics concepts. They focus on known areas of student difficulty and misconceptions by…
ERIC Educational Resources Information Center
Romero-Hall, Enilda; Watson, Ginger; Papelis, Yiannnis
2014-01-01
To examine the visual attention, emotional responses, learning, perceptions and attitudes of learners interacting with an animated pedagogical agent, this study compared a multimedia learning environment with an emotionally-expressive animated pedagogical agent, with a non-expressive animated pedagogical agent, and without an agent. Visual…
Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J
2003-01-01
eLoom is an open-source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open-source virtual-environment development tool, to provide real-time visualizations of network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Animated 3D pictorial representations of the state and flow of information in the network foster a better understanding of network functionality. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom and Flatland's capabilities.
A multimodal detection model of dolphins to estimate abundance validated by field experiments.
Akamatsu, Tomonari; Ura, Tamaki; Sugimatsu, Harumi; Bahl, Rajendar; Behera, Sandeep; Panda, Sudarsan; Khan, Muntaz; Kar, S K; Kar, C S; Kimura, Satoko; Sasaki-Yamamoto, Yukiko
2013-09-01
Abundance estimation of marine mammals requires matching detections of an animal, or a group of animals, by two independent means. We propose a multimodal detection model using visual and acoustic cues (surfacing and phonation) that enables abundance estimation of dolphins. The method does not require a specific time window to match the cues from the two means when applying the mark-recapture method. The proposed model was evaluated using data obtained in field observations of Ganges River dolphins and Irrawaddy dolphins, as examples of dispersed and condensed distributions of animals, respectively. The acoustic detection probability was approximately 80%, 20% higher than that of visual detection for both species, regardless of the distribution of the animals at the present study sites. The abundance estimates of Ganges River dolphins and Irrawaddy dolphins agreed fairly well with the numbers reported in previous monitoring studies. The detection probability for a single animal was smaller than that for larger clusters, as predicted by the model and confirmed by field data. However, dense groups of Irrawaddy dolphins showed differences between the cluster sizes observed by visual and acoustic methods. The lower detection probability of single clusters of this species seemed to be caused by its clumped distribution.
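The dual-observer matching described above is the classic two-sample mark-recapture setting: one "sample" is the set of visual detections, the other the set of acoustic detections, and matched detections play the role of recaptures. A minimal sketch, using Chapman's bias-corrected form of the Lincoln-Petersen estimator with made-up counts (the paper's own model differs in how it handles matching windows):

```python
# Hedged sketch of two-sample mark-recapture abundance estimation.
# n_visual and n_acoustic are detections by each independent means;
# n_both is the number of detections matched across the two. Counts
# below are hypothetical, not the study's field data.

def chapman_estimate(n_visual, n_acoustic, n_both):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator."""
    return (n_visual + 1) * (n_acoustic + 1) / (n_both + 1) - 1

N_hat = chapman_estimate(40, 60, 30)
print(round(N_hat))  # estimated number of animals in the surveyed area
```

The matched fraction also yields the per-modality detection probabilities the abstract compares (e.g., the acoustic detection probability of visually detected animals is n_both / n_visual).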
ERIC Educational Resources Information Center
Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.
2009-01-01
This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation) combined with narration and text, contributes to the learning process of 13- and 14- years-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…
Pasqualotti, Léa; Baccino, Thierry
2014-01-01
Most studies of online advertisements have indicated that they have a negative impact on users' cognitive processes, especially when they include colorful or animated banners and when they are close to the text to be read. In the present study we assessed the effects of two advertisement features (distance from the text and animation) on visual strategies during a word-search task and a reading-for-comprehension task using Web-like pages. We hypothesized that the closer the advertisement was to the target text, the more cognitive processing difficulties it would cause. We also hypothesized that (1) animated banners would be more disruptive than static advertisements and (2) banners would have more effect on word-search performance than on reading-for-comprehension performance. We used an automatic classifier to assess variations in the use of Scanning and Reading visual strategies during task performance. The results showed that the effect of dynamic and static advertisements on visual strategies varies according to the task. Fixation durations indicated that the closest advertisements slowed down information processing, but there was no difference between the intermediate (40-pixel) and far (80-pixel) distance conditions. Our findings suggest that advertisements have a negative impact on users' performance mostly when many cognitive resources are required, as in reading for comprehension.
High-quality and interactive animations of 3D time-varying vector fields.
Helgeland, Anders; Elboth, Thomas
2006-01-01
In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization uses a sparse, global representation of the flow, so it does not suffer from the perceptual issues associated with dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, the particles are used as seed points to generate field lines of any vector field, such as the velocity or vorticity field. In this way, the animation shows the advection of particles while each frame shows the instantaneous vector field. To maintain a coherent particle density and to avoid clustering as time passes, we developed a novel particle advection strategy that produces approximately evenly spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
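The particle-tracking stage described above amounts to numerically integrating seed positions through an unsteady velocity field. A minimal sketch using a fourth-order Runge-Kutta step on a made-up analytic 2D field (the paper's evenly-spaced re-seeding strategy and 3D rendering pipeline are not reproduced here):

```python
# Sketch: advect one seed particle along its path line through an
# unsteady 2D vector field with an RK4 step. The analytic swirl
# field below is a hypothetical stand-in for real simulation data.
import math

def velocity(x, y, t):
    # hypothetical time-varying swirl field
    return (-y + 0.1 * math.sin(t), x)

def rk4_step(x, y, t, dt):
    k1 = velocity(x, y, t)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
    x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

path = [(1.0, 0.0)]
t = 0.0
for _ in range(100):
    path.append(rk4_step(*path[-1], t, 0.01))
    t += 0.01
```

Each frame of the animation would then draw instantaneous field lines seeded at the current particle positions, so the particle motion conveys advection while each frame remains a snapshot of the field.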
Visualizing Protein Interactions and Dynamics: Evolving a Visual Language for Molecular Animation
Jenkinson, Jodie; McGill, Gaël
2012-01-01
Undergraduate biology education provides students with a number of learning challenges. Subject areas that are particularly difficult to understand include protein conformational change and stability, diffusion and random molecular motion, and molecular crowding. In this study, we examined the relative effectiveness of three-dimensional visualization techniques for learning about protein conformation and molecular motion in association with a ligand–receptor binding event. Increasingly complex versions of the same binding event were depicted in each of four animated treatments. Students (n = 131) were recruited from the undergraduate biology program at University of Toronto, Mississauga. Visualization media were developed in the Center for Molecular and Cellular Dynamics at Harvard Medical School. Stem cell factor ligand and cKit receptor tyrosine kinase were used as a classical example of a ligand-induced receptor dimerization and activation event. Each group completed a pretest, viewed one of four variants of the animation, and completed a posttest and, at 2 wk following the assessment, a delayed posttest. Overall, the most complex animation was the most effective at fostering students' understanding of the events depicted. These results suggest that, in select learning contexts, increasingly complex representations may be more desirable for conveying the dynamic nature of cell binding events. PMID:22383622
A Novel, Real-Time, In Vivo Mouse Retinal Imaging System
Butler, Mark C.; Sullivan, Jack M.
2015-01-01
Purpose: To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and to assess system capabilities by evaluating various animal models. Methods: Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination, due to off-axis illumination and poor optical-train optimization. We therefore designed a paraxial illumination system for a Greenough-type stereo dissecting microscope, incorporating an optimized optical launch and an efficiently coupled fiber-optic delivery system. Excitation and emission filters control spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although the field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated for by eye positioning in order to observe the entire retina. Results: The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time intravenous fluorescein angiography in the live mouse has been achieved. Conclusions: A novel device is established for real-time viewing and image capture of the small-animal retina during subretinal injections for preclinical gene therapy studies. PMID:26551329
Visual orientation by the crown-of-thorns starfish (Acanthaster planci)
NASA Astrophysics Data System (ADS)
Petie, Ronald; Hall, Michael R.; Hyldahl, Mia; Garm, Anders
2016-12-01
Photoreception in echinoderms has been known for over 200 years, but their visual capabilities remain poorly understood. As has been reported for some asteroids, the crown-of-thorns starfish (Acanthaster planci) possesses a seemingly advanced eye at the tip of each of its 7-23 arms. With such an array of eyes, the starfish can integrate a wide field of view of its surroundings. We hypothesise that, at close range, orientation and directional movements of the crown-of-thorns starfish are visually guided. In this study, the eyes and vision of A. planci were examined by means of light microscopy, electron microscopy, underwater goniometry, electroretinograms and behavioural experiments in the animals' natural habitat. We found that only animals with intact vision could orient to a nearby coral reef, whereas blinded animals, with olfaction intact, walked in random directions. The eye had peak sensitivity in the blue part (470 nm) of the visual spectrum and a narrow, horizontal visual field approximately 100° wide and 30° high. With approximately 250 ommatidia in each adult compound eye and average interommatidial angles of 8°, crown-of-thorns starfish have the highest spatial resolution of any starfish studied to date. In addition, they have the slowest vision of all animals examined thus far, with a flicker fusion frequency of only 0.6-0.7 Hz. This may be adaptive, as fast vision is not required for the detection of stationary objects such as reefs. In short, the eyes seem optimised for detecting large, dark, stationary objects contrasted against an ocean-blue background. Our results show that the visual sense of the crown-of-thorns starfish is much more elaborate than has so far been appreciated and is essential for orientation and localisation of suitable habitats.
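The reported 8° interommatidial angle implies a coarse detection limit that can be sketched with simple geometry. Assuming (as a rough Nyquist argument, not a claim from the paper) that a dark object must subtend about twice the interommatidial angle to be reliably detected, the maximum detection distance for a reef face of a hypothetical width follows directly:

```python
# Back-of-envelope sketch: with an interommatidial angle of ~8 deg,
# take the finest detectable angular extent as roughly twice that
# (a Nyquist-style assumption, not a result from the study), and
# compute how far away a reef face of given width is still visible.
import math

def max_detection_distance(reef_width_m, interommatidial_deg=8.0):
    min_subtense = math.radians(2 * interommatidial_deg)
    return reef_width_m / (2 * math.tan(min_subtense / 2))

print(round(max_detection_distance(10.0), 1), "m")  # hypothetical 10 m reef face
```

This kind of estimate is consistent with vision being a close-range orientation sense, as the behavioural results above suggest.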
Fiber optic sensing technology for detecting gas hydrate formation and decomposition.
Rawn, C J; Leeman, J R; Ulrich, S M; Alford, J E; Phelps, T J; Madden, M E
2011-02-01
A fiber optic-based distributed sensing system (DSS) has been integrated with a large volume (72 l) pressure vessel providing high spatial resolution, time-resolved, 3D measurement of hybrid temperature-strain (TS) values within experimental sediment-gas hydrate systems. Areas of gas hydrate formation (exothermic) and decomposition (endothermic) can be characterized through this proxy by time series analysis of discrete data points collected along the length of optical fibers placed within a sediment system. Data are visualized as an animation of TS values along the length of each fiber over time. Experiments conducted in the Seafloor Process Simulator at Oak Ridge National Laboratory clearly indicate hydrate formation and dissociation events at expected pressure-temperature conditions given the thermodynamics of the CH4-H2O system. The high spatial resolution achieved with fiber optic technology makes the DSS a useful tool for visualizing time-resolved formation and dissociation of gas hydrates in large-scale sediment experiments.
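The time-series screening described above can be sketched as a per-point comparison of TS values between consecutive scans along one fiber: a sharp rise flags an exothermic formation event, a sharp drop flags endothermic dissociation. The data and threshold below are hypothetical, not the Seafloor Process Simulator values.

```python
# Hedged sketch of event screening along one fiber between two scans.
# ts_prev and ts_curr are hybrid temperature-strain (TS) readings at
# the same points along the fiber; threshold is an arbitrary cutoff.

def classify_events(ts_prev, ts_curr, threshold=0.5):
    """Return a per-point label for one fiber between two scans."""
    labels = []
    for a, b in zip(ts_prev, ts_curr):
        delta = b - a
        if delta > threshold:
            labels.append("formation")      # exothermic warming
        elif delta < -threshold:
            labels.append("dissociation")   # endothermic cooling
        else:
            labels.append("quiet")
    return labels

print(classify_events([10.0, 10.0, 10.0], [11.2, 10.1, 8.9]))
# ['formation', 'quiet', 'dissociation']
```

Animating these labels (or the raw TS values) over successive scans gives the along-fiber visualization the abstract describes.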
Visualizations and Mental Models - The Educational Implications of GEOWALL
NASA Astrophysics Data System (ADS)
Rapp, D.; Kendeou, P.
2003-12-01
Work in the earth sciences has outlined many of the faulty beliefs that students possess concerning particular geological systems and processes. Evidence from educational and cognitive psychology has demonstrated that students often have difficulty overcoming their naïve beliefs about science. Prior knowledge is often remarkably resistant to change, particularly when students' existing mental models for geological principles may be faulty or inaccurate. Figuring out how to help students revise their mental models to include appropriate information is a major challenge. Up until this point, research has tended to focus on whether 2-dimensional computer visualizations are useful tools for helping students develop scientifically correct models. Research suggests that when students are given the opportunity to use dynamic computer-based visualizations, they are more likely to recall the learned information, and are more likely to transfer that knowledge to novel settings. Unfortunately, 2-dimensional visualization systems are often inadequate representations of the material that educators would like students to learn. For example, a 2-dimensional image of the Earth's surface does not adequately convey particular features that are critical for visualizing the geological environment. This may limit the models that students can construct following these visualizations. GEOWALL is a stereo projection system that has attempted to address this issue. It can display multidimensional static geologic images and dynamic geologic animations in a 3-dimensional format. Our current research examines whether multidimensional visualization systems such as GEOWALL may facilitate learning by helping students to develop more complex mental models. This talk will address some of the cognitive issues that influence the construction of mental models, and the difficulty of updating existing mental models. 
We will also discuss our current work that seeks to examine whether GEOWALL is an effective tool for helping students to learn geological information (and potentially restructure their naïve conceptions of geologic principles).
Food and conspecific chemical cues modify visual behavior of zebrafish, Danio rerio.
Stephenson, Jessica F; Partridge, Julian C; Whitlock, Kathleen E
2012-06-01
Animals use the different qualities of olfactory and visual sensory information to make decisions. Ethological and electrophysiological evidence suggests that there is cross-modal priming between these sensory systems in fish. We present the first experimental study showing that ecologically relevant chemical mixtures alter visual behavior, using adult male and female zebrafish, Danio rerio. Neutral-density filters were used to attenuate the light reaching the tank to an initial light intensity of 2.3×10¹⁶ photons/s/m². Fish were exposed to food cue and to alarm cue. The light intensity was then increased by the removal of one layer of filter (nominal absorbance 0.3) every minute until, after 10 minutes, the light level was 15.5×10¹⁶ photons/s/m². Adult male and female zebrafish responded to a moving visual stimulus at lower light levels if they had first been exposed to food cue, or to conspecific alarm cue. These results suggest the need for more integrative studies of sensory biology.
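The neutral-density arithmetic behind the protocol is simple: a filter of absorbance A transmits a fraction 10^(-A) of the light, and stacked filters multiply, so removing one 0.3-absorbance layer roughly doubles the intensity (10^0.3 ≈ 2). The sketch below is illustrative; the filter stack and starting intensity are made-up values, not the study's exact configuration.

```python
# Sketch of neutral-density filter arithmetic: transmitted intensity
# equals the unfiltered intensity times 10 ** (-sum of absorbances).
# Values are illustrative, not the study's filter set.

def intensity(i_unfiltered, absorbances):
    """Light intensity after passing through a stack of ND filters."""
    return i_unfiltered * 10 ** (-sum(absorbances))

stack = [0.3] * 4                      # four hypothetical 0.3-A filters
i0 = 1.0e18                            # hypothetical unfiltered photons/s/m^2
for n in range(len(stack), -1, -1):    # remove one filter at a time
    print(n, "filters:", intensity(i0, stack[:n]))
```

Each removal multiplies the tank-level intensity by 10^0.3 ≈ 1.995, which is the stepwise brightening the behavioral assay relies on.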
Marzullo, Timothy Charles; Lehmkuhle, Mark J; Gage, Gregory J; Kipke, Daryl R
2010-04-01
Closed-loop neural interface technology that combines neural ensemble decoding with simultaneous electrical microstimulation feedback is hypothesized to improve deep brain stimulation techniques, neuromotor prosthetic applications, and epilepsy treatment. Here we describe our iterative results in a rat model of a sensory and motor neurophysiological feedback control system. Three rats were chronically implanted with microelectrode arrays in both the motor and visual cortices. The rats were subsequently trained over a period of weeks to modulate their motor cortex ensemble unit activity upon delivery of intra-cortical microstimulation (ICMS) of the visual cortex in order to receive a food reward. Rats were given continuous feedback via visual cortex ICMS during the response periods that was representative of the motor cortex ensemble dynamics. Analysis revealed that the feedback provided the animals with indicators of the behavioral trials. At the hardware level, this preparation provides a tractable test model for improving the technology of closed-loop neural devices.
Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras
Kane, Suzanne Amador; Zamani, Marjon
2014-01-01
This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots. PMID:24431144
Identification of non-visual photomotor response cells in the vertebrate hindbrain
Kokel, David; Dunn, Timothy W.; Ahrens, Misha B.; Alshut, Rüdiger; Cheung, Chung Yan J.; Saint-Amant, Louis; Bruni, Giancarlo; Mateus, Rita; van Ham, Tjakko J.; Shiraki, Tomoya; Fukada, Yoshitaka; Kojima, Daisuke; Yeh, Jing-Ruey J.; Mikut, Ralf; von Lintig, Johannes; Engert, Florian; Peterson, Randall T.
2013-01-01
Non-visual photosensation enables animals to sense light without sight. However, the cellular and molecular mechanisms of non-visual photobehaviors are poorly understood, especially in vertebrate animals. Here, we describe the photomotor response (PMR), a robust and reproducible series of motor behaviors in zebrafish that is elicited by visual wavelengths of light, but does not require the eyes, pineal gland or other canonical deep-brain photoreceptive organs. Unlike the relatively slow effects of canonical non-visual pathways, motor circuits are strongly and quickly (seconds) recruited during the PMR behavior. We find that the hindbrain is both necessary and sufficient to drive these behaviors. Using in vivo calcium imaging, we identify a discrete set of neurons within the hindbrain whose responses to light mirror the PMR behavior. Pharmacological inhibition of the visual cycle blocks PMR behaviors, suggesting that opsin-based photoreceptors control this behavior. These data represent the first known light-sensing circuit in the vertebrate hindbrain. PMID:23447595
Neuro-ophthalmic manifestations of cerebrovascular accidents.
Ghannam, Alaa S Bou; Subramanian, Prem S
2017-11-01
Ocular functions can be affected in almost any type of cerebrovascular accident (CVA), creating a burden on the patient and family and limiting functionality. The present review summarizes the different ocular outcomes after stroke, divided into three categories: vision, ocular motility, and visual perception. We also discuss interventions that have been proposed to help restore vision and perception after CVA. Interventions that might help expand or compensate for visual field loss and visuospatial neglect include explorative saccade training, prisms, visual restoration therapy (VRT), and transcranial direct current stimulation (tDCS). VRT makes use of neuroplasticity, which has shown efficacy in animal models but remains controversial in human studies. CVAs can lead to decreased visual acuity, visual field loss, ocular motility abnormalities, and visuospatial perception deficits. Although ocular motility problems can be corrected with surgery, vision and perception deficits are more difficult to overcome. Interventions to restore or compensate for visual field deficits remain controversial despite theoretical underpinnings, animal-model evidence, and case reports of their efficacy.
Chikayama, Eisuke; Suto, Michitaka; Nishihara, Takashi; Shinozaki, Kazuo; Hirayama, Takashi; Kikuchi, Jun
2008-01-01
Background: Metabolic phenotyping has become an important ‘bird's-eye-view’ technology which can be applied to higher organisms, such as model plant and animal systems, in the post-genomics and proteomics era. Although genotyping technology has expanded greatly over the past decade, metabolic phenotyping has languished because of the difficulty of ‘top-down’ chemical analyses. Here, we describe a systematic NMR methodology for stable-isotope labeling and analysis of metabolite mixtures in plant and animal systems. Methodology/Principal Findings: The analysis method includes a stable-isotope labeling technique for use in living organisms; a systematic method for simultaneously identifying a large number of metabolites using a newly developed HSQC-based metabolite chemical-shift database combined with heteronuclear multidimensional NMR spectroscopy; Principal Components Analysis; and a visualization method using a coarse-grained overview of the metabolic system. The database contains more than 1000 1H and 13C chemical shifts corresponding to 142 metabolites measured under identical physicochemical conditions. Using the stable-isotope labeling technique, we systematically detected >450 HSQC peaks in each 13C-HSQC spectrum derived from the model plant Arabidopsis T87 cultured cells and the invertebrate animal model Bombyx mori. Furthermore, for the first time, efficient 13C labeling has allowed reliable signal assignment using analytical separation techniques such as 3D HCCH-COSY spectra in extracts of higher organisms. Conclusions/Significance: Overall physiological changes could be detected and categorized in relation to a critical developmental phase change in B. mori by coarse-grained representations in which the organization of metabolic pathways related to a specific developmental phase was visualized on the basis of constituent changes of 56 identified metabolites. 
Based on the observed intensities of 13C atoms of given metabolites on development-dependent changes in the 56 identified 13C-HSQC signals, we have determined the changes in metabolic networks that are associated with energy and nitrogen metabolism. PMID:19030231
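The Principal Components Analysis step mentioned above projects each sample's metabolite intensity profile onto the directions of greatest variance, so samples from different developmental phases separate along the leading components. A toy sketch with a made-up 3-metabolite, 4-sample matrix, computing the leading component by plain power iteration (no external libraries; this is not the authors' pipeline):

```python
# Toy sketch of PCA on a samples-by-metabolites intensity matrix:
# mean-center, form the covariance matrix, find the leading
# eigenvector by power iteration, and project samples onto it.
# The 4x3 intensity matrix below is hypothetical.

def leading_pc(rows, iters=200):
    n = len(rows[0])
    means = [sum(r[j] for r in rows) / len(rows) for j in range(n)]
    x = [[r[j] - means[j] for j in range(n)] for r in rows]
    cov = [[sum(r[i] * r[j] for r in x) / (len(x) - 1) for j in range(n)]
           for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    scores = [sum(xi[j] * v[j] for j in range(n)) for xi in x]
    return v, scores

samples = [[1.0, 2.0, 0.5],   # two samples from one "phase"
           [1.2, 2.1, 0.4],
           [3.0, 5.9, 1.6],   # two from another
           [3.1, 6.2, 1.5]]
pc, scores = leading_pc(samples)
print([round(s, 2) for s in scores])
```

With the toy data, the two groups land at opposite ends of the first component, which is the kind of phase separation the coarse-grained metabolic overview builds on.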
Pichaud, F; Desplan, C
2001-03-01
The Drosophila eye is widely used as a model system to study neuronal differentiation, survival and axon projection. Photoreceptor differentiation starts with the specification of a founder cell R8, which sequentially recruits other photoreceptor neurons to the ommatidium. The eight photoreceptors that compose each ommatidium exist in two chiral forms organized along two axes of symmetry, and this pattern represents a paradigm to study tissue polarity. We have developed a method of fluoroscopy to visualize the different types of photoreceptors and the organization of the ommatidia in living animals. This allowed us to perform an F1 genetic screen to isolate mutants affecting photoreceptor differentiation, survival or planar polarity. We illustrate the power of this detection system using known genetic backgrounds and new mutations that affect ommatidial differentiation, morphology or chirality.
Butler, Blake E; Chabot, Nicole; Kral, Andrej; Lomber, Stephen G
2017-01-01
Crossmodal plasticity takes place following sensory loss, such that areas that normally process the missing modality are reorganized to provide compensatory function in the remaining sensory systems. For example, congenitally deaf cats outperform normal-hearing animals on localization of visual stimuli presented in the periphery, and this advantage has been shown to be mediated by the posterior auditory field (PAF). In order to determine the nature of the anatomical differences that underlie this phenomenon, we injected a retrograde tracer into PAF of congenitally deaf animals and quantified the thalamic and cortical projections to this field. The pattern of projections from areas throughout the brain was qualitatively similar to that previously demonstrated in normal-hearing animals, but with twice as many projections arising from non-auditory cortical areas. In addition, small ectopic projections were observed from a number of fields in visual cortex, including areas 19, 20a, 20b, and 21b, and area 7 of parietal cortex. These areas did not show projections to PAF in cats deafened ototoxically near the onset of hearing, and they provide a possible mechanism for the crossmodal reorganization of PAF. These, along with the possible contributions of other mechanisms, are considered.
Comparing the Organs and Vasculature of the Head and Neck in Five Murine Species
Jae Kim, Min; Yeon Kim, Yoo; Ren Chao, Janet; Sang Park, Hae; Chang, Jiwon; Oh, Dawoon; Jun Lee, Jae; Chun Kang, Tae; Suh, Jun-Gyo; Ho Lee, Jun
2017-01-01
Background/Aim: The purpose of the present study was to delineate the cervical and facial vasculature and associated anatomy in five murine species, and to compare them for optimal use in research studies focused on understanding the pathology and treatment of diseases in humans. Materials and Methods: The adult male animals examined were mice (C57BL/6J), rats (F344), Mongolian gerbils (Meriones unguiculatus), hamsters (Syrian), and guinea pigs (Hartley). To stain the vasculature and organs of the face and neck, each animal was systemically perfused with the vital stain Trypan Blue. Following this step, the detailed anatomy of the head and neck could be easily visualized in all species. Results: Unique morphological characteristics were demonstrated by comparing the five species, including bilateral symmetry of the common carotid origin in the Mongolian gerbil, a large submandibular gland in the hamster, and an enlarged buccal branch in the guinea pig. In reviewing the anatomical details, this staining technique proved superior for direct surgical visualization and identification. Conclusion: The anatomical details provided through this five-species atlas will help experimental researchers select the most appropriate animal model for specific laboratory studies aimed at improving our understanding and treatment of diseases in patients. PMID:28882952
Hippocampal place cell instability after lesions of the head direction cell network
NASA Technical Reports Server (NTRS)
Calton, Jeffrey L.; Stackman, Robert W.; Goodridge, Jeremy P.; Archey, William B.; Dudchenko, Paul A.; Taube, Jeffrey S.; Oman, C. M. (Principal Investigator)
2003-01-01
The occurrence of cells that encode spatial location (place cells) or head direction (HD cells) in the rat limbic system suggests that these cell types are important for spatial navigation. We sought to determine whether place fields of hippocampal CA1 place cells would be altered in animals receiving lesions of brain areas containing HD cells. Rats received bilateral lesions of anterodorsal thalamic nuclei (ADN), postsubiculum (PoS), or sham lesions, before place cell recording. Although place cells from lesioned animals did not differ from controls on many place-field characteristics, such as place-field size and infield firing rate, the signal was significantly degraded with respect to measures of outfield firing rate, spatial coherence, and information content. Surprisingly, place cells from lesioned animals were more likely modulated by the directional heading of the animal. Rotation of the landmark cue showed that place fields from PoS-lesioned animals were not controlled by the cue and shifted unpredictably between sessions. Although fields from ADN-lesioned animals tended to have less landmark control than fields from control animals, this impairment was mild compared with cells recorded from PoS-lesioned animals. Removal of the prominent visual cue also led to instability of place-field representations in PoS-lesioned, but not ADN-lesioned, animals. Together, these findings suggest that an intact HD system is not necessary for the maintenance of place fields, but lesions of brain areas that convey the HD signal can degrade this signal, and lesions of the PoS might lead to perceptual or mnemonic deficits, leading to place-field instability between sessions.
Chemistry and biology of the initial steps in vision: the Friedenwald lecture.
Palczewski, Krzysztof
2014-10-22
Visual transduction is the process in the eye whereby absorption of light in the retina is translated into electrical signals that ultimately reach the brain. The first challenge presented by visual transduction is to understand its molecular basis. We know that maintenance of vision is a continuous process requiring the activation and subsequent restoration of a vitamin A-derived chromophore through a series of chemical reactions catalyzed by enzymes in the retina and retinal pigment epithelium (RPE). Diverse biochemical approaches that identified key proteins and reactions were essential to achieve a mechanistic understanding of these visual processes. The three-dimensional arrangements of these enzymes' polypeptide chains provide invaluable insights into their mechanisms of action. A wealth of information has already been obtained by solving high-resolution crystal structures of both rhodopsin and the retinoid isomerase of the RPE (RPE65). Rhodopsin, which is activated by photoisomerization of its 11-cis-retinylidene chromophore, is a prototypical member of a large family of membrane-bound proteins called G protein-coupled receptors (GPCRs). RPE65 is a retinoid isomerase critical for regeneration of the chromophore. Electron microscopy (EM) and atomic force microscopy have provided insights into how certain proteins are assembled to form much larger structures such as rod photoreceptor cell outer segment membranes. A second challenge of visual transduction is to use this knowledge to devise therapeutic approaches that can prevent or reverse conditions leading to blindness. Imaging modalities like optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO) applied to appropriate animal models as well as human retinal imaging have been employed to characterize blinding diseases, monitor their progression, and evaluate the success of therapeutic agents. 
Lately, two-photon (2-PO) imaging, together with biochemical assays, is revealing functional aspects of vision at a new molecular level. These multidisciplinary approaches combined with suitable animal models and inbred mutant species can be especially helpful in translating provocative cell and tissue culture findings into therapeutic options for further development in animals and eventually in humans. A host of different approaches and techniques is required for substantial progress in understanding fundamental properties of the visual system. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Time-series animation techniques for visualizing urban growth
Acevedo, W.; Masuoka, P.
1997-01-01
Time-series animation is a visually intuitive way to display urban growth. Animations of land-use change for the Baltimore-Washington region were generated by showing a series of images one after the other in sequential order. Before creating an animation, various issues which will affect the appearance of the animation should be considered, including the number of original data frames to use, the optimal animation display speed, the number of intermediate frames to create between the known frames, and the output media on which the animations will be displayed. To create new frames between the known years of data, the change in each theme (i.e. urban development, water bodies, transportation routes) must be characterized and an algorithm developed to create the in-between frames. Example time-series animations were created using a temporal GIS database of the Baltimore-Washington area. Creating the animations involved generating raster images of the urban development, water bodies, and principal transportation routes; overlaying the raster images on a background image; and importing the frames to a movie file. Three-dimensional perspective animations were created by draping each image over digital elevation data prior to importing the frames to a movie file. © 1997 Elsevier Science Ltd.
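The in-between frame generation described above can be sketched, in its simplest form, as a linear cross-fade between two known raster frames. This is only an illustrative stand-in (the function name and the per-pixel blend are assumptions, not the per-theme change algorithm the authors developed):

```python
def inbetween_frames(frame_a, frame_b, n_inbetween):
    """Create n_inbetween frames that blend linearly from frame_a
    to frame_b (frames are 2D lists of pixel values)."""
    frames = []
    for i in range(1, n_inbetween + 1):
        t = i / (n_inbetween + 1)  # blend weight in (0, 1)
        frames.append([[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
                       for row_a, row_b in zip(frame_a, frame_b)])
    return frames

# Two tiny 2x2 "raster images" from known years, three frames in between:
mid = inbetween_frames([[0, 0], [0, 0]], [[4, 4], [4, 4]], 3)
# the middle frame blends halfway: [[2.0, 2.0], [2.0, 2.0]]
```

A real implementation would instead model each theme's change (e.g. growing urban areas outward) rather than blending raw pixel values.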
Animations Need Narrations: An Experimental Test of a Dual-Coding Hypothesis.
ERIC Educational Resources Information Center
Mayer, Richard E.; Anderson, Richard B.
1991-01-01
In two experiments, 102 mechanically naive college students viewed an animation on bicycle tire pump operation with a verbal description before or during the animation or without description. Improved performance of those receiving description during the animation supports a dual-coding hypothesis of connections between visual and verbal stimuli.…
Planetary Education and Outreach Using the NOAA Science on a Sphere
NASA Technical Reports Server (NTRS)
Simon-Miller, A. A.; Williams, D. R.; Smith, S. M.; Friedlander, J. S.; Mayo, L. A.; Clark, P. E.; Henderson, M. A.
2011-01-01
Science On a Sphere (SOS) is a large visualization system, developed by the National Oceanic and Atmospheric Administration (NOAA), that uses computers running Red Hat Linux and four video projectors to display animated data onto the outside of a sphere. Said another way, SOS is a stationary globe that can show dynamic, animated images in spherical form. Visualization of cylindrical data maps shows planets, their atmospheres, oceans, and land in very realistic form. The SOS system uses four video projectors to display images onto the sphere. Each projector is driven by a separate computer, and a fifth computer is used to control the operation of the display computers. Each computer is a relatively powerful PC with a high-end graphics card. The video projectors have native XGA resolution. The projectors are placed at the corners of a 30' x 30' square with a 68" carbon fiber sphere suspended in the center of the square. The equator of the sphere is typically located 86" off the floor. SOS uses common image formats such as JPEG or TIFF in a very specific but simple form; the images are plotted on an equatorial cylindrical equidistant projection or, as it is commonly known, a latitude/longitude grid, where the image is twice as wide as it is high (rectangular). 2048x1024 is the minimum usable spatial resolution without noticeable pixelation. Labels and text can be applied within the image, or using a timestamp-like feature within the SOS system software. There are two basic modes of operation for SOS: displaying a single image or an animated sequence of frames. The frame or frames can be set up to rotate or tilt, as in a planetary rotation. Sequences of images that animate through time produce a movie visualization, with or without an overlain soundtrack. After the images are processed, SOS will display the images in sequence and play them like a movie across the entire sphere surface. 
Movies can be of any arbitrary length, limited mainly by disk space and can be animated at frame rates up to 30 frames per second. Transitions, special effects, and other computer graphics techniques can be added to a sequence through the use of off-the-shelf software, like Final Cut Pro. However, one drawback is that the Sphere cannot be used in the same manner as a flat movie screen; images cannot be pushed to a "side", a highlighted area must be viewable to all sides of the room simultaneously, and some transitions do not work as well as others. We discuss these issues and workarounds in our poster.
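The equatorial cylindrical equidistant layout SOS expects maps longitude and latitude linearly onto a 2:1 image. A minimal sketch of that mapping (the function name and the lon/lat origin conventions are assumptions; check the SOS documentation for the system's actual conventions):

```python
def lonlat_to_pixel(lon_deg, lat_deg, width):
    """Map longitude/latitude in degrees to pixel coordinates on an
    equatorial cylindrical equidistant (plate carree) image that is
    twice as wide as it is high, e.g. 2048x1024. Longitude runs
    -180..180 left to right; latitude +90 (top) to -90 (bottom).
    Corners map to the outermost pixel centers."""
    height = width // 2
    x = (lon_deg + 180.0) / 360.0 * (width - 1)
    y = (90.0 - lat_deg) / 180.0 * (height - 1)
    return x, y

# The image center corresponds to lon 0, lat 0:
cx, cy = lonlat_to_pixel(0.0, 0.0, 2048)
```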
Smith, Earl L.
2013-01-01
In order to develop effective optical treatment strategies for myopia, it is important to understand how visual experience influences refractive development. Beginning with the discovery of the phenomenon of form deprivation myopia, research involving many animal species has demonstrated that refractive development is regulated by visual feedback. In particular, animal studies have shown that optically imposed myopic defocus slows axial elongation, that the effects of vision are dominated by local retinal mechanisms, and that peripheral vision can dominate central refractive development. In this review, the results obtained from clinical trials of traditional optical treatment strategies employed in efforts to slow myopia progression in children are interpreted in light of the results from animal studies and are compared to the emerging results from preliminary clinical studies of optical treatment strategies that manipulate the effective focus of the peripheral retina. Overall, the results suggest that imposed myopic defocus can slow myopia progression in children and that the effectiveness of an optical treatment strategy in reducing myopia progression is influenced by the extent of the visual field that is manipulated. PMID:23290590
Role of Self-Generated Odor Cues in Contextual Representation
Aikath, Devdeep; Weible, Aldis P; Rowland, David C; Kentros, Clifford G
2014-01-01
As first demonstrated in the patient H.M., the hippocampus is critically involved in forming episodic memories, the recall of “what” happened “where” and “when.” In rodents, the clearest functional correlate of hippocampal primary neurons is the place field: a cell fires predominantly when the animal is in a specific part of the environment, typically defined relative to the available visuospatial cues. However, rodents have relatively poor visual acuity. Furthermore, they are highly adept at navigating in total darkness. This raises the question of how other sensory modalities might contribute to a hippocampal representation of an environment. Rodents have a highly developed olfactory system, suggesting that cues such as odor trails may be important. To test this, we familiarized mice to a visually cued environment over a number of days while maintaining odor cues. During familiarization, self-generated odor cues unique to each animal were collected by re-using absorbent paperboard flooring from one session to the next. Visual and odor cues were then put in conflict by counter-rotating the recording arena and the flooring. Perhaps surprisingly, place fields seemed to follow the visual cue rotation exclusively, raising the question of whether olfactory cues have any influence at all on a hippocampal spatial representation. However, subsequent removal of the familiar, self-generated odor cues severely disrupted both long-term stability and rotation to visual cues in a novel environment. Our data suggest that odor cues, in the absence of additional rule learning, do not provide a discriminative spatial signal that anchors place fields. Such cues do, however, become integral to the context over time and exert a powerful influence on the stability of its hippocampal representation. © 2014 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:24753119
Innovative Visualizations Shed Light on Avian Nocturnal Migration.
Shamoun-Baranes, Judy; Farnsworth, Andrew; Aelterman, Bart; Alves, Jose A; Azijn, Kevin; Bernstein, Garrett; Branco, Sérgio; Desmet, Peter; Dokter, Adriaan M; Horton, Kyle; Kelling, Steve; Kelly, Jeffrey F; Leijnse, Hidde; Rong, Jingjing; Sheldon, Daniel; Van den Broeck, Wouter; Van Den Meersche, Jan Klaas; Van Doren, Benjamin Mark; van Gasteren, Hans
2016-01-01
Globally, billions of flying animals undergo seasonal migrations, many of which occur at night. The temporal and spatial scales at which migrations occur and our inability to directly observe these nocturnal movements make monitoring and characterizing this critical period in migratory animals' life cycles difficult. Remote sensing, therefore, has played an important role in our understanding of large-scale nocturnal bird migrations. Weather surveillance radar networks in Europe and North America have great potential for long-term low-cost monitoring of bird migration at scales that have previously been impossible to achieve. Such long-term monitoring, however, poses a number of challenges for the ornithological and ecological communities: how does one take advantage of this vast data resource, integrate information across multiple sensors and large spatial and temporal scales, and visually represent the data for interpretation and dissemination, considering the dynamic nature of migration? We assembled an interdisciplinary team of ecologists, meteorologists, computer scientists, and graphic designers to develop two different flow visualizations, which are interactive and open source, in order to create novel representations of broad-front nocturnal bird migration to address a primary impediment to long-term, large-scale nocturnal migration monitoring. We have applied these visualization techniques to mass bird migration events recorded by two different weather surveillance radar networks covering regions in Europe and North America. These applications show the flexibility and portability of such an approach. The visualizations provide an intuitive representation of the scale and dynamics of these complex systems, are easily accessible for a broad interest group, and are biologically insightful. 
Additionally, they facilitate fundamental ecological research, conservation, mitigation of human-wildlife conflicts, improvement of meteorological products, and public outreach, education, and engagement.
Pinto, Joshua G. A.; Jones, David G.; Williams, C. Kate; Murphy, Kathryn M.
2015-01-01
Although many potential neuroplasticity-based therapies have been developed in the lab, few have translated into established clinical treatments for human neurologic or neuropsychiatric diseases. Animal models, especially of the visual system, have shaped our understanding of neuroplasticity by characterizing the mechanisms that promote neural changes and defining the timing of the sensitive period. The lack of knowledge about development of synaptic plasticity mechanisms in human cortex, and about alignment of synaptic age between animals and humans, has limited translation of neuroplasticity therapies. In this study, we quantified expression of a set of highly conserved pre- and post-synaptic proteins (Synapsin, Synaptophysin, PSD-95, Gephyrin) and found that synaptic development in human primary visual cortex (V1) continues into late childhood. Indeed, this is many years longer than suggested by neuroanatomical studies and points to a prolonged sensitive period for plasticity in human sensory cortex. In addition, during childhood we found waves of inter-individual variability that are different for the four proteins and include a stage during early development (<1 year) when only Gephyrin has high inter-individual variability. We also found that pre- and post-synaptic protein balances develop quickly, suggesting that maturation of certain synaptic functions happens within the first year or two of life. A multidimensional analysis (principal component analysis) showed that most of the variance was captured by the sum of the four synaptic proteins. We used that sum to compare development of human and rat visual cortex and identified a simple linear equation that provides robust alignment of synaptic age between humans and rats. Alignment of synaptic ages is important for age-appropriate targeting and effective translation of neuroplasticity therapies from the lab to the clinic. PMID:25729353
AstroBlend: Visualization package for use with Blender
NASA Astrophysics Data System (ADS)
Naiman, J. P.
2015-12-01
AstroBlend is a visualization package for use in the three-dimensional animation and modeling software Blender. It reads data in via a text file or can use pre-fabricated isosurface files stored in the Wavefront OBJ format. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.
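Wavefront OBJ, the isosurface format mentioned above, is a plain-text format of `v x y z` vertex lines and 1-based `f i j k` face lines, so writing one from a computed isosurface is straightforward. A minimal sketch (the helper name is illustrative, not part of AstroBlend's API):

```python
def write_obj(path, vertices, faces):
    """Write a minimal Wavefront OBJ mesh. 'faces' hold 0-based vertex
    indices here; the OBJ format itself uses 1-based indices."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write("v %g %g %g\n" % (x, y, z))
        for tri in faces:
            f.write("f " + " ".join(str(i + 1) for i in tri) + "\n")

# A single triangle standing in for an isosurface patch:
write_obj("tri.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

The resulting file can be imported by any OBJ-aware tool, Blender included.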
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brizzee, K.R.; Ordy, J.M.; Kaack, M.B.
1980-09-01
Five squirrel monkeys were exposed to 200 rads whole-body ionizing irradiation (⁶⁰Co) at 0.4 rads per second on approximately the seventy-fifth day of gestation, and six squirrel monkeys were sham-irradiated. The mean cortical depth and the mean number of neurons per mm³ in the visual cortex were less in irradiated animals than in controls, but the differences were not statistically significant. The mean number of glial cells in this cortical region was significantly lower in the irradiated animals. In the hippocampus, the depth of the stratum oriens and the combined depth of the strata radiatum, lacunosum, and moleculare were significantly less in irradiated than in control animals. Canonical correlations provided statistical evidence for greater radiation vulnerability of the hippocampus compared to motor and visual areas of the cerebral cortex.
Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation
Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.
2014-01-01
Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
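A conventional fixed-bandwidth 3D kernel density estimate, the baseline that movement-based estimators improve on, can be sketched with an isotropic Gaussian kernel. This is a toy illustration only (the function name is assumed; the paper's estimators additionally condition on the animal's movement path):

```python
import math

def kde3d(points, query, h):
    """Fixed-bandwidth kernel density estimate at a 3D query location,
    using an isotropic Gaussian kernel with standard deviation h."""
    norm = (2.0 * math.pi) ** 1.5 * h ** 3 * len(points)
    total = 0.0
    for x, y, z in points:
        d2 = (query[0] - x) ** 2 + (query[1] - y) ** 2 + (query[2] - z) ** 2
        total += math.exp(-d2 / (2.0 * h * h))
    return total / norm

# Density near telemetry fixes is higher than far from them:
fixes = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.5, 0.5, 2.0)]
near = kde3d(fixes, (0.5, 0.5, 1.0), 1.0)
far = kde3d(fixes, (50.0, 50.0, 50.0), 1.0)
```

Thresholding such a density on a 3D grid yields the isosurfaces that tools like those described here render as volumetric home ranges.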
Visual Displays and Contextual Presentations in Computer-Based Instruction.
ERIC Educational Resources Information Center
Park, Ok-choon
1998-01-01
Investigates the effects of two instructional strategies, visual display (animation, and static graphics with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…
Fitting the Jigsaw of Citation: Information Visualization in Domain Analysis.
ERIC Educational Resources Information Center
Chen, Chaomei; Paul, Ray J.; O'Keefe, Bob
2001-01-01
Discusses the role of information visualization in modeling and representing intellectual structures associated with scientific disciplines and visualizes the domain of computer graphics based on bibliographic data from author cocitation patterns. Highlights include author cocitation maps, citation time lines, animation of a high-dimensional…
The Effects of Verbal Elaboration and Visual Elaboration on Student Learning.
ERIC Educational Resources Information Center
Chanlin, Lih-Juan
1997-01-01
This study examined: (1) the effectiveness of integrating verbal elaboration (metaphors) and different visual presentation strategies (still and animated graphics) in learning biotechnology concepts; (2) whether the use of verbal elaboration with different visual presentation strategies facilitates cognitive processes; and (3) how students employ…
The optic pathway: the development of an eLearning animation.
Cooper, Claire; Erolin, Caroline
2018-04-01
The optic pathway is responsible for sending visual information from the eyes to the brain via electrical impulses. A sound understanding of this pathway is essential for determining an accurate diagnosis of visual field defects. Although easy for trained neurologists to understand, it is an area which medical students repeatedly struggle to visualise. It is proposed that audio-visual teaching resources can improve students' understanding of complex areas of importance. This article describes the development and evaluation of a short animation created for use in the undergraduate neurology curriculum at the University of Dundee School of Medicine.
Data Cube Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.; Gárate, Matías
2017-06-01
With the increasing data acquisition rates from observational and computational astrophysics, new tools are needed to study and visualize data. We present a methodology for rendering 3D data cubes using the open-source 3D software Blender. By importing processed observations and numerical simulations through the Voxel Data format, we are able to use the Blender interface and Python API to create high-resolution animated visualizations. We review the methods for data import, animation, and camera movement, and present examples of this methodology. The 3D rendering of data cubes gives scientists the ability to create appealing displays that can be used for both scientific presentations as well as public outreach.
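For context, the voxel-data route into Blender is a simple binary layout: a header of grid dimensions followed by raw density values. The sketch below writes such a file, assuming the commonly described .bvox layout of four little-endian int32 counts (x, y, z, frames) followed by float32 values; treat the layout as an assumption and verify against your Blender version:

```python
import struct

def write_bvox(path, nx, ny, nz, nframes, values):
    """Write a raw voxel grid: a header of four little-endian int32
    (x, y, z, frame counts), then float32 density values.
    Layout assumed from common .bvox descriptions, not verified here."""
    assert len(values) == nx * ny * nz * nframes
    with open(path, "wb") as f:
        f.write(struct.pack("<4i", nx, ny, nz, nframes))
        f.write(struct.pack("<%df" % len(values), *values))

# A 2x2x2 single-frame cube with densities 0..7:
write_bvox("cube.bvox", 2, 2, 2, 1, [float(i) for i in range(8)])
```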
Applications of Java and Vector Graphics to Astrophysical Visualization
NASA Astrophysics Data System (ADS)
Edirisinghe, D.; Budiardja, R.; Chae, K.; Edirisinghe, G.; Lingerfelt, E.; Guidry, M.
2002-12-01
We describe a series of projects utilizing the portability of Java programming coupled with the compact nature of vector graphics (SVG and SWF formats) for setup and control of calculations, local and collaborative visualization, and interactive 2D and 3D animation presentations in astrophysics. Through a set of examples, we demonstrate how such an approach can allow efficient and user-friendly control of calculations in compiled languages such as Fortran 90 or C++ through portable graphical interfaces written in Java, and how the output of such calculations can be packaged in vector-based animation having interactive controls and extremely high visual quality, but very low bandwidth requirements.
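SVG, one of the vector formats used above, is plain XML, which is why such output stays compact and bandwidth-friendly compared with raster animation frames. A minimal sketch of emitting a polyline plot as an SVG string (the function name is illustrative, not from the paper):

```python
def polyline_svg(points, width=200, height=100):
    """Return a minimal standalone SVG document containing one
    polyline through the given (x, y) pixel coordinates."""
    pts = " ".join("%g,%g" % (x, y) for x, y in points)
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" '
        'width="%d" height="%d">' % (width, height)
        + '<polyline points="%s" fill="none" stroke="black"/>' % pts
        + "</svg>"
    )

doc = polyline_svg([(0, 100), (50, 10), (100, 60)])
```

The string can be written to a .svg file and opened in any browser; interactivity of the kind the paper describes is then layered on with scripting.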
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probes and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Schroeder, David; Keefe, Daniel F
2016-01-01
We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.
Familiar route loyalty implies visual pilotage in the homing pigeon
Biro, Dora; Meade, Jessica; Guilford, Tim
2004-01-01
Wide-ranging animals, such as birds, regularly traverse large areas of the landscape efficiently in the course of their local movement patterns, which raises fundamental questions about the cognitive mechanisms involved. By using precision global-positioning-system loggers, we show that homing pigeons (Columba livia) not only come to rely on highly stereotyped yet surprisingly inefficient routes within the local area but are attracted directly back to their individually preferred routes even when released from novel sites off-route. This precise route loyalty demonstrates a reliance on familiar landmarks throughout the flight, which was unexpected under current models of avian navigation. We discuss how visual landmarks may be encoded as waypoints within familiar route maps. PMID:15572457
Sounds of silence: How to animate virtual worlds with sound
NASA Technical Reports Server (NTRS)
Astheimer, Peter
1993-01-01
Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.
Night vision: requirements and possible roadmap for FIR and NIR systems
NASA Astrophysics Data System (ADS)
Källhammer, Jan-Erik
2006-04-01
A night vision system must increase visibility in situations where only low-beam headlights can be used today. As pedestrians and animals face the highest risk increase in night-time traffic due to darkness, the ability to detect these objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared (FIR) systems have been shown to be superior to near infrared (NIR) systems in terms of pedestrian detection distance. Near infrared images were rated as having significantly higher visual clutter compared with far infrared images, and visual clutter has been shown to correlate with reduced pedestrian detection distance. Far infrared images are perceived as more unusual and therefore more difficult to interpret, although this image appearance is likely related to the lower visual clutter. However, the main issue in comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low-beam conditions, especially regarding pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main issue will be whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection functionality. The first night vision introductions did not generate the sales volumes initially expected. A renewed interest in night vision systems is, however, to be expected after the release of night vision systems by BMW, Mercedes and Honda, the latter with automatic pedestrian detection.
ERIC Educational Resources Information Center
Chen, Zhongzhou; Gladding, Gary
2014-01-01
Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's intuition,…
Endomicroscopy imaging of epithelial structures using tissue autofluorescence
NASA Astrophysics Data System (ADS)
Lin, Bevin; Urayama, Shiro; Saroufeem, Ramez M. G.; Matthews, Dennis L.; Demos, Stavros G.
2011-04-01
We explore autofluorescence endomicroscopy as a potential tool for real-time visualization of epithelial tissue microstructure and organization in a clinical setting. The design parameters are explored using two experimental systems--an Olympus Medical Systems Corp. stand-alone clinical prototype probe, and a custom built bench-top rigid fiber conduit prototype. Both systems entail ultraviolet excitation at 266 nm and/or 325 nm using compact laser sources. Preliminary results using ex vivo animal and human tissue specimens suggest that this technology can be translated toward in vivo application to address the need for real-time histology.
Lymph Node Metastases Optical Molecular Diagnostic and Radiation Therapy
2017-03-01
structures and not molecular functions. The one tool commonly used for metastases imaging is nuclear medicine. Positron emission tomography, PET, is...be visualized at a relevant stage, largely because most imaging is based upon structures and not molecular functions. But there are no tools to...system suitable for imaging signals in small animals on the standard radiation therapy tools. (3) To evaluate the limits on structural, metabolic
Quantifying camouflage: how to predict detectability from appearance.
Troscianko, Jolyon; Skelhorn, John; Stevens, Martin
2017-01-06
Quantifying the conspicuousness of objects against particular backgrounds is key to understanding the evolution and adaptive value of animal coloration, and to designing effective camouflage. Quantifying detectability can reveal how colour patterns affect survival, how animals' appearances influence habitat preferences, and how receiver visual systems work. Advances in calibrated digital imaging are enabling the capture of objective visual information, but it remains unclear which methods are best for measuring detectability. Numerous descriptions and models of appearance have been used to infer the detectability of animals, but these models are rarely empirically validated or directly compared to one another. We compared the performance of human 'predators' to a bank of contemporary methods for quantifying the appearance of camouflaged prey. Background matching was assessed using several established methods, including sophisticated feature-based pattern analysis, granularity approaches and a range of luminance and contrast difference measures. Disruptive coloration is a further camouflage strategy where high contrast patterns disrupt the prey's tell-tale outline, making it more difficult to detect. Disruptive camouflage has been studied intensely over the past decade, yet defining and measuring it have proven far more problematic. We assessed how well existing disruptive coloration measures predicted capture times. Additionally, we developed a new method for measuring edge disruption based on an understanding of sensory processing and the way in which false edges are thought to interfere with animal outlines. Our novel measure of disruptive coloration was the best predictor of capture times overall, highlighting the importance of false edges in concealment over and above pattern or luminance matching.
The efficacy of our new method for measuring disruptive camouflage together with its biological plausibility and computational efficiency represents a substantial advance in our understanding of the measurement, mechanism and definition of disruptive camouflage. Our study also provides the first test of the efficacy of many established methods for quantifying how conspicuous animals are against particular backgrounds. The validation of these methods opens up new lines of investigation surrounding the form and function of different types of camouflage, and may apply more broadly to the evolution of any visual signal.
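The background-matching side of such comparisons can be illustrated with a toy luminance and contrast difference measure. This is a generic sketch, not any of the study's actual metrics; the patch sizes and the combined score are arbitrary assumptions:

```python
import numpy as np

def luminance_contrast_mismatch(target, background):
    """Toy background-matching score: sum of the differences in mean
    luminance and RMS contrast between a target patch and its background
    (illustrative; the measures compared in the study are more elaborate)."""
    d_lum = abs(target.mean() - background.mean())
    d_con = abs(target.std() - background.std())
    return d_lum + d_con

rng = np.random.default_rng(0)
bg = rng.uniform(0.4, 0.6, (64, 64))           # mid-grey, low-contrast background
matched = rng.uniform(0.4, 0.6, (16, 16))      # target with similar statistics
conspicuous = rng.uniform(0.0, 1.0, (16, 16))  # full-range, high-contrast target
```

On this construction the statistically matched target scores a far smaller mismatch than the high-contrast one, which is the qualitative behavior a background-matching measure should show.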
Butler, Blake E; Chabot, Nicole; Lomber, Stephen G
2016-09-01
The superior colliculus (SC) is a midbrain structure central to orienting behaviors. The organization of descending projections from sensory cortices to the SC has garnered much attention; however, rarely have projections from multiple modalities been quantified and contrasted, allowing for meaningful conclusions within a single species. Here, we examine corticotectal projections from visual, auditory, somatosensory, motor, and limbic cortices via retrograde pathway tracers injected throughout the superficial and deep layers of the cat SC. As anticipated, the majority of cortical inputs to the SC originate in the visual cortex. In fact, each field implicated in visual orienting behavior makes a substantial projection. Conversely, only one area of the auditory orienting system, the auditory field of the anterior ectosylvian sulcus (fAES), and no area involved in somatosensory orienting, shows significant corticotectal inputs. Although small relative to visual inputs, the projection from the fAES is of particular interest, as it represents the only bilateral cortical input to the SC. This detailed, quantitative study allows for comparison across modalities in an animal that serves as a useful model for both auditory and visual perception. Moreover, the differences in patterns of corticotectal projections between modalities inform the ways in which orienting systems are modulated by cortical feedback. J. Comp. Neurol. 524:2623-2642, 2016. © 2016 Wiley Periodicals, Inc.
Linear and Non-Linear Visual Feature Learning in Rat and Humans
Bossens, Christophe; Op de Beeck, Hans P.
2016-01-01
The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
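The study's linear/nonlinear distinction can be made concrete with the classic XOR construction: a response that is a nonlinear combination of two simple features cannot be fit by any linear readout of those features, but becomes trivially learnable once the conjunction is supplied as an explicit feature. This is an illustrative sketch, not the study's stimuli or analysis:

```python
import numpy as np

# four stimuli described by two binary "simple features"; the target response
# is their XOR-like combination, a stand-in for a feature combination that is
# nonlinear in the simple features (hypothetical example)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([1, -1, -1, 1], float)  # +1 / -1 responses

def sse_of_linear_fit(features, target):
    """Sum of squared errors of the best least-squares linear readout."""
    A = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return float(np.sum((A @ w - target) ** 2))

linear_err = sse_of_linear_fit(X, y)                      # irreducible error
X_aug = np.hstack([X, X[:, :1] * X[:, 1:2]])              # add product feature
nonlinear_err = sse_of_linear_fit(X_aug, y)               # now fits exactly
```

The best linear readout of the raw features leaves a large residual (the optimal weights are all zero here), while a single multiplicative feature makes the mapping exactly solvable.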
Behavioural system identification of visual flight speed control in Drosophila melanogaster
Rohrseitz, Nicola; Fry, Steven N.
2011-01-01
Behavioural control in many animals involves complex mechanisms with intricate sensory-motor feedback loops. Modelling allows functional aspects to be captured without relying on a description of the underlying complex, and often unknown, mechanisms. A wide range of engineering techniques are available for modelling, but their ability to describe time-continuous processes is rarely exploited to describe sensory-motor control mechanisms in biological systems. We performed a system identification of visual flight speed control in the fruitfly Drosophila, based on an extensive dataset of open-loop responses previously measured under free flight conditions. We identified a second-order under-damped control model with just six free parameters that well describes both the transient and steady-state characteristics of the open-loop data. We then used the identified control model to predict flight speed responses after a visual perturbation under closed-loop conditions and validated the model with behavioural measurements performed in free-flying flies under the same closed-loop conditions. Our system identification of the fruitfly's flight speed response uncovers the high-level control strategy of a fundamental flight control reflex without depending on assumptions about the underlying physiological mechanisms. The results are relevant for future investigations of the underlying neuromotor processing mechanisms, as well as for the design of biomimetic robots, such as micro-air vehicles. PMID:20525744
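The identified model class, a second-order under-damped system, can be sketched with a simple forward-Euler simulation of a step in the visual speed setpoint. The natural frequency and damping ratio below are placeholders, not the paper's fitted parameters:

```python
import numpy as np

def step_response(wn=2.0, zeta=0.4, t_end=10.0, dt=0.001, setpoint=1.0):
    """Euler-integrate x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u for a step input u.
    Parameters are illustrative, not the identified fly parameters."""
    n = int(t_end / dt)
    x, v = 0.0, 0.0
    trace = np.empty(n)
    for i in range(n):
        a = wn ** 2 * (setpoint - x) - 2 * zeta * wn * v  # acceleration
        v += a * dt
        x += v * dt
        trace[i] = x
    return trace

resp = step_response()
```

Because zeta < 1 the response overshoots the setpoint before settling, the transient signature that distinguishes an under-damped second-order model from a first-order one.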
Evaluation of a GPS used in conjunction with aerial telemetry
Olexa, E.M.; Gogan, P.J.P.; Podruzny, K.M.; Eiler, John; Alcorn, Doris J.; Neuman, Michael R.
2001-01-01
We investigated the use of a non-correctable Global Positioning System (NGPS) in association with aerial telemetry to determine animal locations. Average error was determined for 3 components of the location process: use of an NGPS receiver on the ground, use of an NGPS receiver in an aircraft while flying over a visual marker, and use of the same receiver while flying over a location determined by standard aerial telemetry. Average errors were 45.3, 88.1 and 137.4 m, respectively. A directional bias of <35 m was present for the telemetry component only. Tests indicated that use of NGPS to determine aircraft, and thereby animal, location is an efficient alternative to interpolation from topographic maps. This method was more accurate than previously reported for the Long-Range Navigation system, version C (LORAN-C), and Argos satellite telemetry. It has utility in areas where animal-borne GPS receivers are not practical due to a combination of topography, canopy coverage, weight or cost of animal-borne GPS units. Use of NGPS technology in conjunction with aerial telemetry will provide the location accuracy required for identification of gross movement patterns and coarse-grained habitat use.
Home use of binocular dichoptic video content device for treatment of amblyopia: a pilot study.
Mezad-Koursh, Daphna; Rosenblatt, Amir; Newman, Hadas; Stolovitch, Chaim
2018-04-01
To evaluate the efficacy of the BinoVision home system as measured by improvement of visual acuity in the patient's amblyopic eye. An open-label prospective pilot-trial of the system was conducted with amblyopic children aged 4-8 years at the pediatric ophthalmology unit, Tel-Aviv Medical Center, January 2014 to October 2015. Participants were assigned to the study or sham group for treatment with BinoVision for 8 or 12 weeks. Patients were instructed to watch animated television shows and videos at home using the BinoVision device for 60 minutes, 6 days a week. The BinoVision program incorporates elements at different contrast and brightness levels for both eyes, weak eye tracking training by superimposed screen images, and weak eye flicker stimuli with alerting sound manipulations. Patients were examined at 4, 8, 12, 24, and 36 weeks. A total of 27 children were recruited (14 boys), with 19 in the treatment group. Median age was 5 years (range, 4-8 years). Mean visual acuity improved by 0.26 logMAR lines in the treatment group from baseline to 12 weeks. Visual acuity was improved compared to baseline during all study and follow-up appointments (P < 0.01), with stabilization of visual acuity after cessation of treatment. The sham group completed 4 weeks of sham protocol with no change in visual acuity (P = 0.285). The average compliance rate was 88% ± 16% (50% to 100%) in treatment group. This pilot trial of 12 weeks of amblyopia treatment with the BinoVision home system demonstrated significant improvement in patients' visual acuity. Copyright © 2018 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.
Nguyen, Quoc-Thang; Matute, Carlos; Miledi, Ricardo
1998-01-01
It has been postulated that, in the adult visual cortex, visual inputs modulate levels of mRNAs coding for neurotransmitter receptors in an activity-dependent manner. To investigate this possibility, we performed a monocular enucleation in adult rabbits and, 15 days later, collected their left and right visual cortices. Levels of mRNAs coding for voltage-activated sodium channels, and for receptors for kainate/α-amino-3-hydroxy-5-methylisoxazole-4-propionic acid (AMPA), N-methyl-d-aspartate (NMDA), γ-aminobutyric acid (GABA), and glycine were semiquantitatively estimated in the visual cortices ipsilateral and contralateral to the lesion by the Xenopus oocyte/voltage-clamp expression system. This technique also allowed us to study some of the pharmacological and physiological properties of the channels and receptors expressed in the oocytes. In cells injected with mRNA from left or right cortices of monocularly enucleated and control animals, the amplitudes of currents elicited by kainate or AMPA, which reflect the abundance of mRNAs coding for kainate and AMPA receptors, were similar. There was no difference in the sensitivity to kainate and in the voltage dependence of the kainate response. Responses mediated by NMDA, GABA, and glycine were unaffected by monocular enucleation. Sodium channel peak currents, activation, steady-state inactivation, and sensitivity to tetrodotoxin also remained unchanged after the enucleation. Our data show that mRNAs for major neurotransmitter receptors and ion channels in the adult rabbit visual cortex are not obviously modified by monocular deafferentiation. Thus, our results do not support the idea of a widespread dynamic modulation of mRNAs coding for receptors and ion channels by visual activity in the rabbit visual system. PMID:9501250
Kraft, Andrew W.; Mitra, Anish; Bauer, Adam Q.; Raichle, Marcus E.; Culver, Joseph P.; Lee, Jin-Moo
2017-01-01
Decades of work in experimental animals has established the importance of visual experience during critical periods for the development of normal sensory-evoked responses in the visual cortex. However, much less is known concerning the impact of early visual experience on the systems-level organization of spontaneous activity. Human resting-state fMRI has revealed that infraslow fluctuations in spontaneous activity are organized into stereotyped spatiotemporal patterns across the entire brain. Furthermore, the organization of spontaneous infraslow activity (ISA) is plastic in that it can be modulated by learning and experience, suggesting heightened sensitivity to change during critical periods. Here we used wide-field optical intrinsic signal imaging in mice to examine whole-cortex spontaneous ISA patterns. Using monocular or binocular visual deprivation, we examined the effects of critical period visual experience on the development of ISA correlation and latency patterns within and across cortical resting-state networks. Visual modification with monocular lid suturing reduced correlation between left and right cortices (homotopic correlation) within the visual network, but had little effect on internetwork correlation. In contrast, visual deprivation with binocular lid suturing resulted in increased visual homotopic correlation and increased anti-correlation between the visual network and several extravisual networks, suggesting cross-modal plasticity. These network-level changes were markedly attenuated in mice with genetic deletion of Arc, a gene known to be critical for activity-dependent synaptic plasticity. Taken together, our results suggest that critical period visual experience induces global changes in spontaneous ISA relationships, both within the visual network and across networks, through an Arc-dependent mechanism. PMID:29087327
Modeling liver physiology: combining fractals, imaging and animation.
Lin, Debbie W; Johnson, Scott; Hunt, C Anthony
2004-01-01
Physiological modeling of vascular and microvascular networks in several key human organ systems is critical for a deeper understanding of pharmacology and the effect of pharmacotherapies on disease. As in the lung and the kidney, the morphology of the liver's vascular and microvascular system plays a major role in its functional capability. To understand liver function in the absorption and metabolism of food and drugs, one must examine morphology and physiology at both higher and lower levels of liver organization. We have developed validated, virtualized, dynamic three-dimensional (3D) models of liver secondary units and primary units by combining a number of different methods: three-dimensional rendering, fractals, and animation. We have simulated particle dynamics in the liver secondary unit. The resulting models are suitable for helping researchers easily visualize and gain intuition about the results of in silico liver experiments.
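A minimal example of the fractal ingredient of such a model is a recursively bifurcating vessel tree with a fixed radius-scaling ratio at each branch point. The Murray's-law exponent used here is an assumption for illustration; the paper's actual construction is more elaborate:

```python
def branch(radius, depth, ratio=2 ** (-1 / 3)):
    """Return the radii of all segments in a bifurcating vascular tree.
    Each child's radius is scaled by 2^(-1/3), the Murray's-law ratio that
    conserves r^3 across a symmetric bifurcation (illustrative assumption)."""
    if depth == 0:
        return [radius]
    segments = [radius]
    for _ in range(2):  # two daughter branches per bifurcation
        segments += branch(radius * ratio, depth - 1)
    return segments

tree = branch(1.0, 5)  # 5 generations: 2^6 - 1 = 63 segments
```

The segment count grows geometrically with generation number while the radii shrink by a constant factor, which is what gives such trees their self-similar, space-filling character.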
[Comparative analysis of light sensitivity, depth and motion perception in animals and humans].
Schaeffel, F
2017-11-01
This study examined how humans perform regarding light sensitivity, depth perception and motion vision in comparison to various animals, and which parameters limit the performance of the visual system for these different functions. The study was based on literature searches (in PubMed) and our own results. Light sensitivity is limited by the brightness of the retinal image, which in turn is determined by the f-number of the eye. It is further limited by photon noise, thermal decay of rhodopsin, noise in the phototransduction cascade and neuronal processing. In invertebrates, impressive optical tricks have been developed to increase the number of photons reaching the photoreceptors; furthermore, the spontaneous decay of the photopigment is lower in invertebrates, at the cost of higher energy consumption. For depth perception at close range, stereopsis is the most precise mechanism but is available only to a few vertebrates. In contrast, motion parallax is used by many species, vertebrates as well as invertebrates. In a few cases accommodation, or chromatic aberration, is used for depth measurements. In motion vision the temporal resolution of the eye is most important. The flicker fusion frequency in vertebrates correlates with metabolic turnover and body temperature, but also reaches very high values in insects; apart from that, the flicker fusion frequency generally declines with increasing body weight. Compared to animals, human visual performance is among the best regarding light sensitivity, the best regarding depth resolution, and in the middle range regarding motion resolution.
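The f-number's effect on retinal image brightness follows directly from aperture geometry: image-plane irradiance scales with the square of the aperture-to-focal-length ratio, i.e. with 1/N². A tiny sketch (the example f-numbers are illustrative, not measured values for any species):

```python
def relative_image_brightness(f_number):
    # image-plane irradiance scales as (aperture / focal length)^2 = 1 / N^2
    return 1.0 / f_number ** 2

# hypothetical comparison: an f/1.0 nocturnal eye vs. an f/2.0 eye
gain = relative_image_brightness(1.0) / relative_image_brightness(2.0)
```

Halving the f-number quadruples the retinal image brightness, which is why low f-numbers matter so much for the light-sensitivity limit discussed above.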
Perceived Synchrony of Frog Multimodal Signal Components Is Influenced by Content and Order.
Taylor, Ryan C; Page, Rachel A; Klein, Barrett A; Ryan, Michael J; Hunter, Kimberly L
2017-10-01
Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed) or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male túngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component). Using a robotic frog, we tested female responses to variation in the temporal arrangement between acoustic and visual components. When the visual component lagged a complex call (whine + chuck), females largely rejected this asynchronous multisensory signal in favor of the complex call absent the visual cue. When the chuck component was removed from one call, but the robofrog inflation lagged the complex call, females responded strongly to the asynchronous multimodal signal. When the chuck component was removed from both calls, females reversed preference and responded positively to the asynchronous multisensory signal. When the visual component preceded the call, females responded as often to the multimodal signal as to the call alone. These data show that asynchrony of a normally fixed signal does reduce receiver responsiveness. The magnitude and overall response, however, depend on specific temporal interactions between the acoustic and visual components. The sensitivity of túngara frogs to lagging visual cues, but not leading ones, and the influence of acoustic signal content on the perception of visual asynchrony is similar to those reported in human psychophysics literature. Virtually all acoustically communicating animals must conduct auditory scene analyses and identify the source of signals. 
Our data suggest that some basic audiovisual neural integration processes may be at work in the vertebrate brain. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology 2017. This work is written by US Government employees and is in the public domain in the US.
NASA Technical Reports Server (NTRS)
Bridgman, William T.; Shirah, Greg W.; Mitchell, Horace G.
2008-01-01
Today, scientific data and models can combine with modern animation tools to produce compelling visualizations to inform and educate. The Scientific Visualization Studio at Goddard Space Flight Center merges these techniques from the very different worlds of entertainment and science to enable scientists and the general public to 'see the unseeable' in new ways.
NASA Technical Reports Server (NTRS)
Shores, David; Goza, Sharon P.; McKeegan, Cheyenne; Easley, Rick; Way, Janet; Everett, Shonn; Guerra, Mark; Kraesig, Ray; Leu, William
2013-01-01
Enigma Version 12 software combines model building, animation, and engineering visualization into one concise software package. Enigma employs a versatile user interface to allow average users access to even the most complex pieces of the application. Using Enigma eliminates the need to buy and learn several software packages to create an engineering visualization. Models can be created and/or modified within Enigma down to the polygon level. Textures and materials can be applied for additional realism. Within Enigma, these models can be combined to create systems of models that have a hierarchical relationship to one another, such as a robotic arm. Then these systems can be animated within the program or controlled by an external application programming interface (API). In addition, Enigma provides the ability to use plug-ins. Plug-ins allow the user to create custom code for a specific application and access the Enigma model and system data, but still use the Enigma drawing functionality. CAD files can be imported into Enigma and combined to create systems of computer graphics models that can be manipulated with constraints. An API is available so that an engineer can write a simulation and drive the computer graphics models with no knowledge of computer graphics. An animation editor allows an engineer to set up sequences of animations generated by simulations or by conceptual trajectories in order to record these to high-quality media for presentation. Commercially, because it is so generic, Enigma can be used for almost any project that requires engineering visualization, model building, or animation; models in Enigma can be exported to many other formats for use in other applications as well. Educationally, Enigma is being used to allow university students to visualize robotic algorithms in a simulation mode before using them with actual hardware. (Enigma Version 12, Lyndon B. Johnson Space Center, Houston, Texas; NASA Tech Briefs, September 2013)
Planetary Protection Bioburden Analysis Program (NASA's Jet Propulsion Laboratory, Pasadena, California). This Microsoft Access program performs statistical analysis of the colony counts from assays performed on the Mars Science Laboratory (MSL) spacecraft to determine the bioburden density, 3-sigma biodensity, and the total bioburdens required for the MSL prelaunch reports. It also contains numerous tools that report the data in various ways to simplify the required reports. The program performs all the calculations directly in MS Access; prior to this development, the data was exported to large Excel files that had to be cut and pasted to provide the desired results. The program contains a main menu and a number of submenus. Analyses can be performed using either all the assays, or only the accountable assays that will be used in the final analysis. There are three options on the first menu: calculate using (1) the old MER (Mars Exploration Rover) statistics, (2) the MSL statistics for all the assays, or ...
Micrometeoroid and Orbital Debris (MMOD) Shield Ballistic Limit Analysis Program (Lyndon B. Johnson Space Center, Houston, Texas). This software implements penetration limit equations for common micrometeoroid and orbital debris (MMOD) shield configurations, windows, and thermal protection systems. Allowable MMOD risk is formulated in terms of the probability of no penetration (PNP) of the spacecraft pressure hull. For calculating the risk, spacecraft geometry models, mission profiles, debris environment models, and penetration limit equations for installed shielding configurations are required. Risk assessment software such as NASA's BUMPERII is used to calculate mission PNP; however, such tools are unsuitable for use in shield design and preliminary analysis studies. This software defines a single equation for the design and performance evaluation of common MMOD shielding configurations, windows, and thermal protection systems, along with a description of its validity range and guidelines for its application. Recommendations are based on preliminary reviews of fundamental assumptions and on accuracy in predicting experimental impact test results. The software is programmed in Visual Basic for Applications for installation as a simple add-in for Microsoft Excel. The user is directed to a graphical user interface (GUI) that takes user inputs and provides solutions directly in Microsoft Excel workbooks. This work was done by Shannon Ryan of the USRA Lunar and Planetary Institute for Johnson Space Center. Further information is contained in a TSP. MSC-24582-1
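A hedged sketch of the kind of penetration limit equation such software encodes: a Christiansen-style Whipple-shield ballistic limit for the hypervelocity regime, giving the critical projectile diameter as a function of wall thickness, standoff, wall yield strength, densities, velocity, and impact angle. The functional form and coefficient below are taken from the open literature as an illustration; they are assumptions here, not the equations actually coded in the described add-in:

```python
import math

def whipple_critical_diameter(t_wall_cm, standoff_cm, sigma_ksi,
                              rho_proj=2.7, rho_bumper=2.7,
                              v_kms=10.0, theta_deg=0.0):
    """Illustrative Whipple-shield ballistic limit (hypervelocity regime).
    Returns the critical (just-defeated) projectile diameter in cm.
    Coefficient and exponents follow a commonly cited Christiansen-style
    form and are assumptions for this sketch, not the program's values."""
    v_normal = v_kms * math.cos(math.radians(theta_deg))  # normal velocity component
    return (3.918
            * t_wall_cm ** (2 / 3)               # rear wall thickness
            * standoff_cm ** (1 / 3)             # bumper-to-wall standoff
            * (sigma_ksi / 70.0) ** (1 / 3)      # wall yield strength, ksi
            * rho_proj ** (-1 / 3)
            * rho_bumper ** (-1 / 9)
            * v_normal ** (-2 / 3))
```

The qualitative behavior is the useful part: thicker walls and larger standoffs raise the critical diameter, while higher impact velocities lower it in this regime.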
Ho, Leon C.; Wang, Bo; Conner, Ian P.; van der Merwe, Yolandi; Bilonick, Richard A.; Kim, Seong-Gi; Wu, Ed X.; Sigal, Ian A.; Wollstein, Gadi; Schuman, Joel S.; Chan, Kevin C.
2015-01-01
Purpose. Excitotoxicity has been linked to the pathogenesis of ocular diseases and injuries and may involve early degeneration of both anterior and posterior visual pathways. However, their spatiotemporal relationships remain unclear. We hypothesized that the effects of excitotoxic retinal injury (ERI) on the visual system can be revealed in vivo by diffusion tensor magnetic resonance imaging (DTI), manganese-enhanced magnetic resonance imaging (MRI), and optical coherence tomography (OCT). Methods. Diffusion tensor MRI was performed at 9.4 Tesla to monitor white matter integrity changes after unilateral N-methyl-D-aspartate (NMDA)-induced ERI in six Sprague-Dawley rats and six C57BL/6J mice. Additionally, four rats and four mice were intravitreally injected with saline to compare with NMDA-injected animals. Optical coherence tomography of the retina and manganese-enhanced MRI of anterograde transport were evaluated and correlated with DTI parameters. Results. In the rat optic nerve, the largest axial diffusivity decrease and radial diffusivity increase occurred within the first 3 and 7 days post ERI, respectively, suggestive of early axonal degeneration and delayed demyelination. The optic tract showed smaller directional diffusivity changes and weaker DTI correlations with retinal thickness compared with optic nerve, indicative of anterograde degeneration. The splenium of corpus callosum was also reorganized at 4 weeks post ERI. The DTI profiles appeared comparable between rat and mouse models. Furthermore, the NMDA-injured visual pathway showed reduced anterograde manganese transport, which correlated with diffusivity changes along but not perpendicular to optic nerve. Conclusions. Diffusion tensor MRI, manganese-enhanced MRI, and OCT provided an in vivo model system for characterizing the spatiotemporal changes in white matter integrity, the eye–brain relationships and structural–physiological relationships in the visual system after ERI. PMID:26066747
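The axial and radial diffusivities tracked in such studies are simple functions of the diffusion tensor's eigenvalues, and fractional anisotropy (FA) is the standard companion metric. A minimal sketch of the definitions (standard DTI formulas, not code from the study):

```python
import numpy as np

def dti_metrics(eigvals):
    """Axial diffusivity, radial diffusivity, and fractional anisotropy
    from the three eigenvalues of a diffusion tensor."""
    l1, l2, l3 = sorted(eigvals, reverse=True)
    ad = l1                    # axial diffusivity: largest eigenvalue
    rd = (l2 + l3) / 2.0       # radial diffusivity: mean of the other two
    md = (l1 + l2 + l3) / 3.0  # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return ad, rd, fa
```

For a white-matter-like tensor such as (1.7, 0.3, 0.3) µm²/ms the FA is high, while for an isotropic tensor it is zero; an axial decrease with a radial increase (as reported post-ERI) pulls FA down from both directions.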
Flies and humans share a motion estimation strategy that exploits natural scene statistics
Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.
2014-01-01
Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
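The key property of triple correlations, sensitivity to contrast polarity, can be shown with a toy moving edge: even-order (pairwise) correlators are blind to a light/dark inversion, whereas an odd-order (triple) correlator flips sign. The particular space-time offsets below are illustrative, not the specific correlators analyzed in the paper:

```python
import numpy as np

def edge_movie(polarity=1.0, nx=60, nt=40, v=1):
    # mean-subtracted movie (time x space) of an edge moving right at v px/frame
    x = np.arange(nx)[None, :]
    t = np.arange(nt)[:, None]
    return polarity * np.where(x < v * t, 1.0, -1.0)

def pair_corr(s):
    # second-order (Reichardt-like) space-time correlation: s(x,t) * s(x+1,t+1)
    return np.mean(s[:-1, :-1] * s[1:, 1:])

def triple_corr(s):
    # one third-order correlator: s(x,t) * s(x,t+1) * s(x+1,t+1)
    return np.mean(s[:-1, :-1] * s[1:, :-1] * s[1:, 1:])

light, dark = edge_movie(+1.0), edge_movie(-1.0)
```

Inverting contrast leaves the pairwise correlation unchanged (an even number of sign flips cancel) but negates the triple correlation, so only the odd-order statistic carries edge-polarity information.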
The sensory basis of rheotaxis in turbulent flow
NASA Astrophysics Data System (ADS)
Elder, John P.
Rheotaxis is a robust, multisensory behavior with many potential benefits for fish and other aquatic animals, yet the influence of different fluvial conditions on rheotactic performance and its sensory basis is still poorly understood. Here, we examine the role that vision and the lateral line play in the rheotactic behavior of a stream-dwelling species (Mexican tetra, Astyanax mexicanus) under both rectilinear and turbulent flow conditions. Turbulence enhanced overall rheotactic strength and lowered the flow speed at which rheotaxis was initiated; this effect did not depend on the availability of either visual or lateral line information. Compared to fish without access to visual information, fish with access to visual information exhibited increased levels of positional stability and as a result, increased levels of rheotactic accuracy. No disruption in rheotactic performance was found when the lateral line was disabled, suggesting that this sensory system is not necessary for either rheotaxis or turbulence detection under the conditions of this study.
Welge, Weston A.; Barton, Jennifer K.
2015-01-01
Optical coherence tomography (OCT) is a useful imaging modality for detecting and monitoring diseases of the gastrointestinal tract and other tubular structures. The non-destructiveness of OCT enables time-serial studies in animal models. While turnkey commercial research OCT systems are plentiful, researchers often require custom imaging probes. We describe the integration of a custom endoscope with a commercial swept-source OCT system and generalize this description to any imaging probe and OCT system. A numerical dispersion compensation method is also described. Example images demonstrate that OCT can visualize the mouse colon crypt structure and detect adenoma in vivo. PMID:26418811
Goodale, M A; Murison, R C
1975-05-02
The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.
Environmental and genetic effects on weight and visual score traits at postweaning in Suffolk sheep.
Nascimento, Bárbara Mazetti; Somavilla, Adriana Luiza; Dias, Laila Talarico; Teixeira, Rodrigo de Almeida
2014-10-01
This study aimed to investigate the effects of contemporary group, type of birth, age of animal, and age of dam at lambing on conformation (C), precocity (P), musculature (M), and body weight at postweaning (W) in Suffolk sheep, as well as the heritability coefficients and genetic correlations among these traits. Contemporary group, type of birth, age of animal, and age of dam at lambing were significant for W. For C, all the effects studied were significant, except the linear and quadratic effects of age of the animal. For P, all effects studied were significant, except the quadratic effect of age of the animal. For M, the effects of contemporary group, type of birth, and the linear effect of age of the animal were significant. Heritability estimates were 0.07 ± 0.03, 0.14 ± 0.03, 0.09 ± 0.03, and 0.11 ± 0.03 for C, P, M, and W, respectively, indicating a low but positive response to direct selection. Estimates of genetic correlations among the visual scores (C, P, and M) and W were moderate to high, favorable, and positive, ranging from 0.48 to 0.90. These results indicate that selection for visual scores will increase body weight.
A novel mechanism for mechanosensory-based rheotaxis in larval zebrafish.
Oteiza, Pablo; Odstrcil, Iris; Lauder, George; Portugues, Ruben; Engert, Florian
2017-07-27
When flying or swimming, animals must adjust their own movement to compensate for displacements induced by the flow of the surrounding air or water. These flow-induced displacements can most easily be detected as visual whole-field motion with respect to the animal's frame of reference. Despite this, many aquatic animals consistently orient and swim against oncoming flows (a behaviour known as rheotaxis) even in the absence of visual cues. How animals achieve this task, and its underlying sensory basis, is still unknown. Here we show that, in the absence of visual information, larval zebrafish (Danio rerio) perform rheotaxis by using flow velocity gradients as navigational cues. We present behavioural data that support a novel algorithm based on such local velocity gradients that fish use to avoid getting dragged by flowing water. Specifically, we show that fish use their mechanosensory lateral line to first sense the curl (or vorticity) of the local velocity vector field to detect the presence of flow and, second, to measure its temporal change after swim bouts to deduce flow direction. These results reveal an elegant navigational strategy based on the sensing of flow velocity gradients and provide a comprehensive behavioural algorithm, also applicable for robotic design, that generalizes to a wide range of animal behaviours in moving fluids.
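The gradient-sensing strategy described above can be illustrated with a minimal sketch (an illustration of the computation, not the authors' model code): estimate the vorticity (curl) of a sampled 2-D velocity field. A nonzero value flags the local velocity gradients the lateral line is proposed to sense, while uniform flow, which merely drags the whole body, produces no local cue.

```python
import numpy as np

def vorticity(u, v, dx=1.0):
    # 2-D curl (vorticity) of a velocity field on a regular grid:
    # omega = dv/dx - du/dy, estimated with central differences.
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dx, axis=0)
    return dv_dx - du_dy

# A channel-like shear flow (downstream speed grows with distance y
# from the wall) carries uniform nonzero vorticity; uniform flow has none.
y, x = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing="ij")
shear = vorticity(u=y, v=np.zeros_like(y))                  # nonzero curl
uniform = vorticity(u=np.ones((8, 8)), v=np.zeros((8, 8)))  # zero curl
```

In the shear case every grid point reports vorticity of -1, a detectable signature of flow even without vision; in the uniform case the curl vanishes everywhere, consistent with flow-blind behavior in purely uniform drift.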
Visual discrimination transfer and modulation by biogenic amines in honeybees.
Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo
2018-05-10
For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Cheok, Adrian David
Poultry are among the most poorly treated animals in the modern world. It has been shown that they have high levels of both cognition and feelings, and as a result there has been a recent trend of promoting poultry welfare. There is also a tradition of keeping poultry as pets in some parts of the world. However, in modern cities and societies, it is often difficult to maintain contact with pets, particularly for office workers. We propose and describe a novel cybernetics system that uses mobile and Internet technology to improve human-pet interaction. It can also be used by people who are allergic to touching animals and thus cannot stroke them directly. This interaction encompasses both visualization and tactile sensation of real objects.
Yellepeddi, Venkata Kashyap; Roberson, Charles
2016-10-25
Objective. To evaluate the impact of animated videos of oral solid dosage form manufacturing as visual instructional aids on pharmacy students' perception and learning. Design. Data were obtained using a validated, paper-based survey instrument designed to evaluate the effectiveness, appeal, and efficiency of the animated videos in a pharmaceutics course offered in spring 2014 and 2015. Basic demographic data were also collected and analyzed. Assessment data at the end of pharmaceutics course was collected for 2013 and compared with assessment data from 2014, and 2015. Assessment. Seventy-six percent of the respondents supported the idea of incorporating animated videos as instructional aids for teaching pharmaceutics. Students' performance on the formative assessment in 2014 and 2015 improved significantly compared to the performance of students in 2013 whose lectures did not include animated videos as instructional aids. Conclusions. Implementing animated videos of oral solid dosage form manufacturing as instructional aids resulted in improved student learning and favorable student perceptions about the instructional approach. Therefore, use of animated videos can be incorporated in pharmaceutics teaching to enhance visual learning.
Quantitative simulation of extraterrestrial engineering devices
NASA Technical Reports Server (NTRS)
Arabyan, A.; Nikravesh, P. E.; Vincent, T. L.
1991-01-01
This is a multicomponent, multidisciplinary project whose overall objective is to build an integrated database, simulation, visualization, and optimization system for the proposed oxygen manufacturing plant on Mars. Specifically, the system allows users to enter physical description, engineering, and connectivity data through a uniform, user-friendly interface and stores the data in formats compatible with other software also developed as part of this project. These latter components include: (1) programs to simulate the behavior of various parts of the plant in Martian conditions; (2) an animation program which, in different modes, provides visual feedback to designers and researchers about the location of and temperature distribution among components as well as heat, mass, and data flow through the plant as it operates in different scenarios; (3) a control program to investigate the stability and response of the system under different disturbance conditions; and (4) an optimization program to maximize or minimize various criteria as the system evolves into its final design. All components of the system are interconnected so that changes entered through one component are reflected in the others.
Gene Therapy for Color Blindness.
Hassall, Mark M; Barnard, Alun R; MacLaren, Robert E
2017-12-01
Achromatopsia is a rare congenital cause of vision loss due to isolated cone photoreceptor dysfunction. The most common underlying genetic mutations are autosomal recessive changes in CNGA3, CNGB3, GNAT2, PDE6H, PDE6C, or ATF6. Animal models of Cnga3, Cngb3, and Gnat2 have been rescued using AAV gene therapy, showing partial restoration of cone electrophysiology and integration of this new photopic vision in reflexive and behavioral visual tests. Three phase I/II gene therapy trials are currently being conducted in human patients in the USA, the UK, and Germany. This review details the AAV gene therapy treatments of achromatopsia to date. We also present novel data showing rescue of a Cnga3-/- mouse model using an rAAV.CBA.CNGA3 vector. We conclude by synthesizing the implications of this animal work for ongoing human trials, particularly the challenge of restoring integrated cone retinofugal pathways in an adult visual system. The evidence to date suggests that gene therapy for achromatopsia will need to be applied early in childhood to be effective.
Y0: An innovative tool for spatial data analysis
NASA Astrophysics Data System (ADS)
Wilson, Jeremy C.
1993-08-01
This paper describes an advanced analysis and visualization tool, called Y0 (pronounced "Why not?!"), that has been developed to directly support the scientific process for earth and space science research. Y0 aids the scientific research process by enabling the user to formulate algorithms and models within an integrated environment, and then interactively explore the solution space with the aid of appropriate visualizations. Y0 has been designed to provide strong support for both quantitative analysis and rich visualization. The user's algorithm or model is defined in terms of algebraic formulas in cells on worksheets, in a similar fashion to spreadsheet programs. Y0 is specifically designed to provide the data types and rich function set necessary for effective analysis and manipulation of remote sensing data. This includes various types of arrays, geometric objects, and objects for representing geographic coordinate system mappings. Visualization of results is tailored to the needs of remote sensing, with straightforward methods of composing, comparing, and animating imagery and graphical information, with reference to geographical coordinate systems. Y0 is based on advanced object-oriented technology. It is implemented in C++ for use in Unix environments, with a user interface based on the X window system. Y0 has been delivered under contract to Unidata, a group which provides data and software support to atmospheric researchers at universities affiliated with UCAR. This paper will explore the key concepts in Y0, describe its utility for remote sensing analysis and visualization, and will give a specific example of its application to the problem of measuring glacier flow rates from Landsat imagery.
Dyscalculia and the Calculating Brain.
Rapin, Isabelle
2016-08-01
Dyscalculia, like dyslexia, affects some 5% of school-age children but has received much less investigative attention. In two thirds of affected children, dyscalculia is associated with another developmental disorder like dyslexia, attention-deficit disorder, anxiety disorder, visual-spatial disorder, or cultural deprivation. Infants, primates, some birds, and other animals are born with the innate ability, called subitizing, to tell at a glance whether small sets of scattered dots or other items differ by one or more item. This nonverbal approximate number system extends mostly to single digit sets as visual discrimination drops logarithmically to "many" with increasing numerosity (size effect) and crowding (distance effect). Preschoolers need several years and specific teaching to learn verbal names and visual symbols for numbers, and school-age children need more to understand their cardinality and ordinality and the invariance of their sequence (arithmetic number line) that enables calculation. This arithmetic linear line differs drastically from the nonlinear approximate number system mental number line that parallels the individual number-tuned neurons in the intraparietal sulcus in monkeys and overlying scalp distribution of discrete functional magnetic resonance imaging activations by number tasks in man. Calculation is a complex skill that activates both visual-spatial and visual-verbal networks. It is less strongly left lateralized than language, with approximate number system activation somewhat more right sided and exact number and arithmetic activation more left sided. Maturation and increasing number skill decrease associated widespread non-numerical brain activations that persist in some individuals with dyscalculia, which has no single, universal neurological cause or underlying mechanism in all affected individuals. Copyright © 2016 Elsevier Inc. All rights reserved.
A New Java Animation in Peer-Reviewed "JCE" Webware
ERIC Educational Resources Information Center
Coleman, William F.; Fedosky, Edward W.
2006-01-01
"Computer Simulations of Salt Solubility" by Victor M. S. Gil provides an animated, visual interpretation of the different solubilities of related salts based on simple entropy changes associated with dissolution such as configurational disorder and thermal disorder. This animation can help improve students' conceptual understanding of…
3D Scientific Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2015-03-01
This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.
Thermal consequences of colour and near-infrared reflectance.
Stuart-Fox, Devi; Newton, Elizabeth; Clusella-Trullas, Susana
2017-07-05
The importance of colour for temperature regulation in animals remains controversial. Colour can affect an animal's temperature because, all else being equal, dark surfaces absorb more solar energy than do light surfaces, and that energy is converted into heat. However, in reality, the relationship between colour and thermoregulation is complex and varied because it depends on environmental conditions and the physical properties, behaviour and physiology of the animal. Furthermore, the thermal effects of colour depend as much on absorptance of near-infrared (NIR, 700-2500 nm) as visible (300-700 nm) wavelengths of direct sunlight; yet the NIR is very rarely considered or measured. The few available data on NIR reflectance in animals indicate that visible reflectance is often a poor predictor of NIR reflectance. Adaptive variation in animal coloration (visible reflectance) reflects a compromise between multiple competing functions such as camouflage, signalling and thermoregulation. By contrast, adaptive variation in NIR reflectance should primarily reflect thermoregulatory requirements because animal visual systems are generally insensitive to NIR wavelengths. Here, we assess evidence and identify key research questions regarding the thermoregulatory function of animal coloration, and specifically consider evidence for adaptive variation in NIR reflectance. This article is part of the themed issue 'Animal coloration: production, perception, function and application'. © 2017 The Author(s).
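The energy-balance point, that visible reflectance alone underdetermines heat load, can be sketched with a toy two-band model. The numbers are illustrative assumptions (not from the paper), chosen only to reflect that roughly half of direct solar irradiance arrives in the NIR.

```python
def absorbed_power(reflectance, irradiance):
    # Absorbed flux (W/m^2): per-band (1 - reflectance) times band
    # irradiance, summed over bands. Two bands here: [visible, NIR].
    return sum((1.0 - r) * e for r, e in zip(reflectance, irradiance))

sun = [500.0, 500.0]           # illustrative direct-sun split (W/m^2)
animal_low_nir = [0.3, 0.2]    # identical visible reflectance...
animal_high_nir = [0.3, 0.7]   # ...but very different NIR reflectance

load_low = absorbed_power(animal_low_nir, sun)    # 750.0 W/m^2
load_high = absorbed_power(animal_high_nir, sun)  # 500.0 W/m^2
```

The two hypothetical animals are indistinguishable to most visual systems yet differ by 250 W/m^2 in absorbed load, which is why NIR reflectance can evolve to track thermoregulatory needs without compromising camouflage or signalling.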
Basic math in monkeys and college students.
Cantlon, Jessica F; Brannon, Elizabeth M
2007-12-01
Adult humans possess a sophisticated repertoire of mathematical faculties. Many of these capacities are rooted in symbolic language and are therefore unlikely to be shared with nonhuman animals. However, a subset of these skills is shared with other animals, and this set is considered a cognitive vestige of our common evolutionary history. Current evidence indicates that humans and nonhuman animals share a core set of abilities for representing and comparing approximate numerosities nonverbally; however, it remains unclear whether nonhuman animals can perform approximate mental arithmetic. Here we show that monkeys can mentally add the numerical values of two sets of objects and choose a visual array that roughly corresponds to the arithmetic sum of these two sets. Furthermore, monkeys' performance during these calculations adheres to the same pattern as humans tested on the same nonverbal addition task. Our data demonstrate that nonverbal arithmetic is not unique to humans but is instead part of an evolutionarily primitive system for mathematical thinking shared by monkeys.
Noninvasive imaging of protein-protein interactions in living animals
NASA Astrophysics Data System (ADS)
Luker, Gary D.; Sharma, Vijay; Pica, Christina M.; Dahlheimer, Julie L.; Li, Wei; Ochesky, Joseph; Ryan, Christine E.; Piwnica-Worms, Helen; Piwnica-Worms, David
2002-05-01
Protein-protein interactions control transcription, cell division, and cell proliferation as well as mediate signal transduction, oncogenic transformation, and regulation of cell death. Although a variety of methods have been used to investigate protein interactions in vitro and in cultured cells, none can analyze these interactions in intact, living animals. To enable noninvasive molecular imaging of protein-protein interactions in vivo by positron-emission tomography and fluorescence imaging, we engineered a fusion reporter gene comprising a mutant herpes simplex virus 1 thymidine kinase and green fluorescent protein for readout of a tetracycline-inducible, two-hybrid system in vivo. By using micro-positron-emission tomography, interactions between p53 tumor suppressor and the large T antigen of simian virus 40 were visualized in tumor xenografts of HeLa cells stably transfected with the imaging constructs. Imaging protein-binding partners in vivo will enable functional proteomics in whole animals and provide a tool for screening compounds targeted to specific protein-protein interactions in living animals.
Role of olfaction in Octopus vulgaris reproduction.
Polese, Gianluca; Bertapelle, Carla; Di Cosmo, Anna
2015-01-01
The olfactory system in any animal is the primary sensory system that responds to chemical stimuli emanating from a distant source. In aquatic animals "Odours" are molecules in solution that guide them to locate food, partners, nesting sites, and dangers to avoid. Fish, crustaceans and aquatic molluscs possess sensory systems that have anatomical similarities to the olfactory systems of land-based animals. Molluscs are a large group of aquatic and terrestrial animals that rely heavily on chemical communication with a generally dispersed sense of touch and chemical sensitivity. Cephalopods, the smallest class among extant marine molluscs, are predators with high visual capability and well developed vestibular, auditory, and tactile systems. Nevertheless they possess a well developed olfactory organ, but to date almost nothing is known about the mechanisms, functions and modulation of this chemosensory structure in octopods. Cephalopod brains are the largest of all invertebrate brains and across molluscs show the highest degree of centralization. The reproductive behaviour of Octopus vulgaris is under the control of a complex set of signal molecules such as neuropeptides, neurotransmitters and sex steroids that guide the behaviour from the level of individuals in evaluating mates, to stimulating or deterring copulation, to sperm-egg chemical signalling that promotes fertilization. These signals are intercepted by the olfactory organs and integrated in the olfactory lobes in the central nervous system. In this context we propose a model in which the olfactory organ and the olfactory lobe of O. vulgaris could represent the on-off switch between food intake and reproduction. Copyright © 2014 Elsevier Inc. All rights reserved.
Gravett, Matthew; Cepek, Jeremy; Fenster, Aaron
2017-11-01
The purpose of this study was to develop and validate an image-guided robotic needle delivery system for accurate and repeatable needle targeting procedures in mouse brains inside the 12 cm inner diameter gradient coil insert of a 9.4 T MR scanner. Many preclinical research techniques require the use of accurate needle deliveries to soft tissues, including brain tissue. Soft tissues are optimally visualized in MR images, which offer high soft tissue contrast as well as a range of unique imaging techniques, including functional, spectroscopy and thermal imaging; however, there are currently no solutions for delivering needles to small animal brains inside the bore of an ultra-high field MR scanner. This paper describes the mechatronic design, evaluation of MR compatibility, registration technique, mechanical calibration, and the quantitative validation of the in-bore image-guided needle targeting accuracy and repeatability, and demonstrates the system's ability to deliver needles in situ. Our six degree-of-freedom, MR compatible, mechatronic system was designed to fit inside the bore of a 9.4 T MR scanner and is actuated using a combination of piezoelectric and hydraulic mechanisms. The MR compatibility and targeting accuracy of the needle delivery system are evaluated to ensure that the system is precisely calibrated to perform the needle targeting procedures. A semi-automated image registration is performed to link the robot coordinates to the MR coordinate system. Soft tissue targets can be accurately localized in MR images, followed by automatic alignment of the needle trajectory to the target. Intra-procedure visualization of the needle target location and the needle was confirmed through MR images after needle insertion. The system was found to have a negligible effect on MR image signal noise and geometric distortion, both below the threshold that would impact targeting accuracy.
The system was mechanically calibrated and the mean image-guided needle targeting and needle trajectory accuracies were quantified in an image-guided tissue mimicking phantom experiment to be 178 ± 54 μm and 0.27 ± 0.65°, respectively. An MR image-guided system for in-bore needle deliveries to soft tissue targets in small animal models has been developed. The results of the needle targeting accuracy experiments in phantoms indicate that this system has the potential to deliver needles to the smallest soft tissue structures relevant in preclinical studies, at a wide variety of needle trajectories. Future work in the form of a fully-automated needle driver with precise depth control would benefit this system in terms of its applicability to a wider range of animal models and organ targets. © 2017 American Association of Physicists in Medicine.
Development of in vitro and in vivo neutralization assays based on the pseudotyped H7N9 virus.
Tian, Yabin; Zhao, Hui; Liu, Qiang; Zhang, Chuntao; Nie, Jianhui; Huang, Weijing; Li, Changgui; Li, Xuguang; Wang, Youchun
2018-05-31
H7N9 viral infections pose a great threat to both animal and human health. This avian virus cannot be handled in level 2 biocontainment laboratories, substantially hindering evaluation of prophylactic vaccines and therapeutic agents. Here, we report a high-titer pseudoviral system with a bioluminescent reporter gene, enabling us to visually and quantitatively analyze virus replication in both tissue cultures and animals. For evaluation of the immunogenicity of H7N9 vaccines, we developed an in vitro assay for neutralizing antibody measurement based on the pseudoviral system; results generated by the in vitro assay were found to be strongly correlated with those by either hemagglutination inhibition (HI) or micro-neutralization (MN) assay. Furthermore, we injected the viruses into Balb/c mice and observed dynamic distributions of the viruses in the animals, which provides an ideal imaging model for quantitative analyses of prophylactic and therapeutic monoclonal antibodies. Taken together, the pseudoviral systems reported here could be of great value for both in vitro and in vivo evaluations of vaccines and antiviral agents without the need for wild-type H7N9 virus.
Multimodality animal rotation imaging system (Mars) for in vivo detection of intraperitoneal tumors.
Pizzonia, John; Holmberg, Jennie; Orton, Sean; Alvero, Ayesha; Viteri, Oscar; McLaughlin, William; Feke, Gil; Mor, Gil
2012-01-01
PROBLEM: Ovarian cancer stem cells (OCSCs) have been postulated as the potential source of recurrence and chemoresistance. Therefore, identification of OCSCs and their complete removal is a pivotal step in the treatment of ovarian cancer. The objective of the following study was to develop a new in vivo imaging model that allows for the detection and monitoring of OCSCs. METHOD OF STUDY: OCSCs were labeled with X-Sight 761 Nanospheres and injected intraperitoneally (i.p.) and subcutaneously (s.c.) into athymic nude mice. The Carestream In-Vivo Imaging System FX was used to obtain X-ray and, concurrently, near-infrared fluorescence images. Tumor images in the mouse were observed from different angles by automatic rotation of the mouse. RESULTS: X-Sight 761 Nanospheres labeled almost 100% of the cells. No difference in growth rate was observed between labeled and unlabeled cells. Tumors were observed, and monitoring revealed strong signaling for up to 21 days. CONCLUSION: We describe the use of near-infrared nanoparticle probes for in vivo imaging of metastatic ovarian cancer models. Visualization of multiple sites around the animals was enhanced with the use of the Carestream Multimodal Animal Rotation System. © 2011 John Wiley & Sons A/S.
In vivo imaging of small animals with optical tomography and near-infrared fluorescent probes
NASA Astrophysics Data System (ADS)
Palmer, Matthew R.; Shibata, Yasushi; Kruskal, Jonathan B.; Lenkinski, Robert E.
2002-06-01
A developmental optical tomography system has been designed for imaging small animals in vivo using near-IR fluorophores. The system employs epi-illumination via a 450 W Xe arc lamp, filtered and collimated to illuminate a 10 cm square movable stage. Emission light is filtered, then collected by a high-resolution, high quantum efficiency, cooled CCD camera. Stage movement and image acquisition are under the control of a personal computer running system integration and automation software. During an experiment, the anesthetized animal is secured to the stage and up to 200 projections can be acquired over 180 degrees of rotation. Angular sampling of the light distribution at a point on the surface is used to determine relative contributions from ballistic and diffuse photons. We have employed the system to investigate a number of applications of in vivo fluorescence imaging. In dynamic studies, hepatic function has been visualized in nude mice following intravenous injection of indocyanine green (ICG), and cerebrospinal fluid flow has been measured by injection of ICG-lipoprotein conjugate into the subarachnoid space of the lumbar spine followed by dynamic imaging of the brain. Further applications in physiological imaging, cancer detection, and molecular imaging are under investigation in our laboratory.
NASA Astrophysics Data System (ADS)
Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.
2018-02-01
Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities: photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results on rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual eye blood vessels using a laser exposure dose of 80 nJ, which is well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. The novel multimodal imaging platform holds great promise in ophthalmic imaging.
Molecular and Cellular Biology Animations: Development and Impact on Student Learning
2005-01-01
Educators often struggle when teaching cellular and molecular processes because typically they have only two-dimensional tools to teach something that plays out in four dimensions. Learning research has demonstrated that visualizing processes in three dimensions aids learning, and animations are effective visualization tools for novice learners and aid with long-term memory retention. The World Wide Web Instructional Committee at North Dakota State University has used these research results as an inspiration to develop a suite of high-quality animations of molecular and cellular processes. Currently, these animations represent transcription, translation, bacterial gene expression, messenger RNA (mRNA) processing, mRNA splicing, protein transport into an organelle, the electron transport chain, and the use of a biological gradient to drive adenosine triphosphate synthesis. These animations are integrated with an educational module that consists of First Look and Advanced Look components that feature captioned stills from the animation representing the key steps in the processes at varying levels of complexity. These animation-based educational modules are available via the World Wide Web at http://vcell.ndsu.edu/animations. An in-class research experiment demonstrated that student retention of content material was significantly better when students received a lecture coupled with the animations and then used the animation as an individual study activity. PMID:15917875
NASA Astrophysics Data System (ADS)
Reuter, Jewel Jurovich
The purpose of this exploratory research was to study how students learn photosynthesis and cellular respiration and to determine the value added to students' learning by each of the three technology-scaffolded learning strategy components (animated concept presentations and WebQuest-style activities, data collection, and student-constructed animations) of the BioDatamation(TM) (BDM) Program. BDM learning strategies utilized the Theory of Interacting Visual Fields(TM) (TIVF) (Reuter & Wandersee, 2002a, 2002b, 2003a, 2003b), which holds that meaningful knowledge is hierarchically constructed using the past, present, and future visual fields, with visual metacognitive components that are derived from the principles of Visual Behavior (Jones, 1995), Human Constructivist Theory (Mintzes & Wandersee, 1998a), and Visual Information Design Theory (Tufte, 1990, 1997, 2001). Students' alternative conceptions of photosynthesis and cellular respiration were determined by item analysis of 263,267 Biology Advanced Placement Examinations and were used to develop the BDM instructional strategy and interview questions. The subjects were 24 undergraduate students of high and low biology prior knowledge enrolled in an introductory-level General Biology course at a major research university in the Deep South. Fifteen participants received BDM instruction, which included original and innovative learning materials and laboratories in 6 phases; 8 of the 15 participants were the subject of in-depth, extended individual analysis. The other 9 participants received traditional, non-BDM instruction. Interviews, which included participants' creation of concept maps and visual field diagrams, were conducted after each phase. Various content analyses, including Chi's Verbal Analysis and quantitizing/qualitizing, were used for data analysis. 
The total value added to integrative knowledge during BDM instruction with the three visual fields was an average increase of 56% for cellular respiration knowledge and 62% for photosynthesis knowledge, improved long-term memory of concepts, and enhanced biological literacy to the multidimensional level, as determined by the BSCS literacy model. WebQuest-style activities and data collection provided animated prior knowledge in the past visual field and detailed content knowledge construction in the present visual field. During student construction of animated presentations, layering required participants to think by rearranging words and images for improved hierarchical organization of knowledge with real-life applications.
3D Scientific Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2015-03-01
This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.
Supplementation with macular carotenoids improves visual performance of transgenic mice.
Li, Binxing; Rognon, Gregory T; Mattinson, Ty; Vachali, Preejith P; Gorusupudi, Aruna; Chang, Fu-Yen; Ranganathan, Arunkumar; Nelson, Kelly; George, Evan W; Frederick, Jeanne M; Bernstein, Paul S
2018-07-01
Carotenoid supplementation can improve human visual performance, but there is still no validated rodent model to test their effects on visual function in laboratory animals. We recently showed that mice deficient in β-carotene oxygenase 2 (BCO2) and/or β-carotene oxygenase 1 (BCO1) enzymes can accumulate carotenoids in their retinas, allowing us to investigate the effects of carotenoids on the visual performance of mice. Using OptoMotry, a device to measure visual function in rodents, we examined the effect of zeaxanthin, lutein, and β-carotene on visual performance of various BCO knockout mice. We then transgenically expressed the human zeaxanthin-binding protein GSTP1 (hGSTP1) in the rods of bco2 -/- mice to examine if delivering more zeaxanthin to retina will improve their visual function further. The visual performance of bco2 -/- mice fed with zeaxanthin or lutein was significantly improved relative to control mice fed with placebo beadlets. β-Carotene had no significant effect in bco2 -/- mice but modestly improved cone visual function of bco1 -/- mice. Expression of hGSTP1 in the rods of bco2 -/- mice resulted in a 40% increase of retinal zeaxanthin and further improvement of visual performance. This work demonstrates that these "macular pigment mice" may serve as animal models to study carotenoid function in the retina. Copyright © 2018 Elsevier Inc. All rights reserved.
MoZis: mobile zoo information system: a case study for the city of Osnabrueck
NASA Astrophysics Data System (ADS)
Michel, Ulrich
2007-10-01
This paper describes a new project of the Institute for Geoinformatics and Remote Sensing, funded by the German Federal Foundation for the Environment (DBU, Deutsche Bundesstiftung Umwelt, www.dbu.de). The goal of this project is to develop a mobile zoo information system for Pocket PCs and smartphones. Zoo visitors will be able to use their own mobile devices, or borrow Pocket PCs from the zoo, to navigate around the zoo's facilities. The system will also provide additional multimedia-based information such as audio material, animal video clips, and maps of the animals' natural habitats. Users can access the project at the zoo via wireless local area network or by downloading the necessary files over a home internet connection. Our software environment combines proprietary and non-proprietary software solutions in order to make it as flexible as possible. Our first prototype was developed with Visual Studio 2003 and Visual Basic .NET.
TrackPlot Enhancements: Support for Multiple Animal Tracks and Gyros
2015-09-30
TrackPlot provides visualization and kinematic analysis of marine animal movements derived from archival tag data; supported tags include sensors for pressure and acceleration. The enhancements add support for multiple animal tracks viewed in combination with accelerometer and magnetometer data, and for the extraction and frequency analysis of accelerations and rotation in animal movement. (DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited.)
40 CFR 79.64 - In vivo micronucleus assay.
Code of Federal Regulations, 2011 CFR
2011-07-01
... number and sex. At least five female and five male animals per experimental/sample and control group... control group. A single concentration of a compound known to produce micronuclei in vivo is adequate as a... in bone marrow from treated animals compared to that of control animals. The visualization of...
European Starlings Are Capable of Discriminating Subtle Size Asymmetries in Paired Stimuli
ERIC Educational Resources Information Center
Swaddle, John P.; Johnson, Charles W.
2007-01-01
Small deviations from bilateral symmetry (fluctuating asymmetries) are cues to fitness differences in some animals. Therefore, researchers have considered whether animals use these small asymmetries as visual cues to determine appropriate behavioral responses (e.g., mate preferences). However, there have been few systematic studies of animals'…
ERIC Educational Resources Information Center
Tech Directions, 2008
2008-01-01
Art and animation work is the most significant part of electronic game development, but is also found in television commercials, computer programs, the Internet, comic books, and in just about every visual media imaginable. It is the part of the project that makes an abstract design idea concrete and visible. Animators create the motion of life in…
NASA Astrophysics Data System (ADS)
Dalton, Rebecca Marie
The development of students' mental models of chemical substances and processes at the molecular level was studied in a three-phase project. Animations produced in the VisChem project were used as an integral part of the chemistry instruction to help students develop their mental models. Phase one of the project involved examining the effectiveness of using animations to help first-year university chemistry students develop useful mental models of chemical phenomena. Phase two explored factors affecting the development of students' mental models, analysing results in terms of a proposed model of the perceptual processes involved in interpreting an animation. Phase three involved four case studies that served to confirm and elaborate on the effects of prior knowledge and disembedding ability on students' mental model development, and to support the influence of study style on learning outcomes. Recommendations for use of the VisChem animations, based on the above findings, include: considering the prior knowledge of students; focusing attention on relevant features; encouraging a deep approach to learning; using animation to teach visual concepts; presenting ideas visually, verbally and conceptually; establishing 'animation literacy'; minimising cognitive load; using animation as feedback; using student drawings; repeating animations; and discussing 'scientific modelling'.
Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio)
Shang, Chunfeng; Yang, Wenbin; Bai, Lu; Du, Jiulin
2017-01-01
The internal brain dynamics that link sensation and action are arguably better studied during natural animal behaviors. Here, we report on a novel volume imaging and 3D tracking technique that monitors whole brain neural activity in freely swimming larval zebrafish (Danio rerio). We demonstrated the capability of our system through functional imaging of neural activity during visually evoked and prey capture behaviors in larval zebrafish. PMID:28930070
Nonlinear circuits for naturalistic visual motion estimation
Fitzgerald, James E; Clark, Damon A
2015-01-01
Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator. DOI: http://dx.doi.org/10.7554/eLife.09123.001 PMID:26499494
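The canonical pairwise model this abstract refers to, cross-correlating two spatiotemporally separated visual signals, can be sketched in a few lines. The following is a generic Hassenstein-Reichardt-style correlator for illustration only, not the circuit model analyzed in the paper; the function name and the sinusoidal test stimulus are invented for the demo.

```python
import numpy as np

def reichardt_correlator(left, right, delay=2):
    """Opponent pairwise motion estimator: delay each input, multiply
    it with the opposite undelayed input, and subtract the mirror
    product. Positive output signals left-to-right motion."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    lr = left[:-delay] * right[delay:]   # delayed left x current right
    rl = right[:-delay] * left[delay:]   # delayed right x current left
    return float(np.mean(lr - rl))

# Demo: a sinusoidal pattern drifting rightward, so the right-hand
# photoreceptor sees a delayed copy of the left-hand signal.
t = np.linspace(0, 4 * np.pi, 200)
s = np.sin(t)
out = reichardt_correlator(s[2:], s[:-2], delay=2)  # > 0 (rightward)
```

Swapping the two inputs flips the sign of the output; the higher-order correlation models discussed in the paper extend this multiply-and-subtract motif with additional nonlinear input channels.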
Camouflage, communication and thermoregulation: lessons from colour changing organisms.
Stuart-Fox, Devi; Moussalli, Adnan
2009-02-27
Organisms capable of rapid physiological colour change have become model taxa in the study of camouflage because they are able to respond dynamically to the changes in their visual environment. Here, we briefly review the ways in which studies of colour changing organisms have contributed to our understanding of camouflage and highlight some unique opportunities they present. First, from a proximate perspective, comparison of visual cues triggering camouflage responses and the visual perception mechanisms involved can provide insight into general visual processing rules. Second, colour changing animals can potentially tailor their camouflage response not only to different backgrounds but also to multiple predators with different visual capabilities. We present new data showing that such facultative crypsis may be widespread in at least one group, the dwarf chameleons. From an ultimate perspective, we argue that colour changing organisms are ideally suited to experimental and comparative studies of evolutionary interactions between the three primary functions of animal colour patterns: camouflage; communication; and thermoregulation.
Ikkatai, Yuko; Okanoya, Kazuo; Seki, Yoshimasa
2016-07-01
Humans communicate with one another not only face-to-face but also via modern telecommunication methods such as television and video conferencing. We readily detect the difference between people actively communicating with us and people merely acting via a broadcasting system. We developed an animal model of this novel communication method seen in humans to determine whether animals also make this distinction. We built a system for two animals to interact via audio-visual equipment in real-time, to compare behavioral differences between two conditions, an "interactive two-way condition" and a "non-interactive (one-way) condition." We measured birds' responses to stimuli which appeared in these two conditions. We used budgerigars, which are small, gregarious birds, and found that the frequency of vocal interaction with other individuals did not differ between the two conditions. However, body synchrony between the two birds was observed more often in the interactive condition, suggesting budgerigars recognized the difference between these interactive and non-interactive conditions on some level. Copyright © 2016 Elsevier B.V. All rights reserved.
Retrospective respiration-gated whole-body photoacoustic computed tomography of mice
NASA Astrophysics Data System (ADS)
Xia, Jun; Chen, Wanyi; Maslov, Konstantin; Anastasio, Mark A.; Wang, Lihong V.
2014-01-01
Photoacoustic tomography (PAT) is an emerging technique that has a great potential for preclinical whole-body imaging. To date, most whole-body PAT systems require multiple laser shots to generate one cross-sectional image, yielding a frame rate of <1 Hz. Because a mouse breathes at up to 3 Hz, without proper gating mechanisms, acquired images are susceptible to motion artifacts. Here, we introduce, for the first time to our knowledge, retrospective respiratory gating for whole-body photoacoustic computed tomography. This new method involves simultaneous capturing of the animal's respiratory waveform during photoacoustic data acquisition. The recorded photoacoustic signals are sorted and clustered according to the respiratory phase, and an image of the animal at each respiratory phase is reconstructed subsequently from the corresponding cluster. The new method was tested in a ring-shaped confocal photoacoustic computed tomography system with a hardware-limited frame rate of 0.625 Hz. After respiratory gating, we observed sharper vascular and anatomical images at different positions of the animal body. The entire breathing cycle can also be visualized at 20 frames/cycle.
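The gating step described above, sorting acquired data by respiratory phase and forming one image per phase cluster, can be illustrated with a small sketch. This is a hypothetical illustration of phase binning on synthetic data, applied for simplicity to already-reconstructed frames; the function name, the phase-estimation method, and the bin count are assumptions, not the authors' implementation.

```python
import numpy as np

def gate_frames(frames, resp_signal, n_phases=20):
    """Bin frames by respiratory phase and average each bin.

    frames:      (n_frames, H, W) array of reconstructed images
    resp_signal: (n_frames,) respiratory waveform sampled at each
                 frame's acquisition time
    """
    frames = np.asarray(frames, dtype=float)
    resp = np.asarray(resp_signal, dtype=float)
    # Crude phase estimate: angle of the waveform against its time
    # derivative, wrapped onto [0, 2*pi).
    phase = np.mod(np.arctan2(np.gradient(resp), resp), 2 * np.pi)
    bins = np.floor(phase / (2 * np.pi) * n_phases).astype(int)
    bins = np.clip(bins, 0, n_phases - 1)
    gated = np.zeros((n_phases,) + frames.shape[1:])
    for p in range(n_phases):
        members = frames[bins == p]
        if len(members):
            gated[p] = members.mean(axis=0)  # one image per phase
    return gated, bins

# Synthetic demo: 200 frames spanning several breathing cycles.
resp = np.sin(np.linspace(0, 12 * np.pi, 200))
frames = np.random.default_rng(0).random((200, 8, 8))
gated, bins = gate_frames(frames, resp, n_phases=10)
```

Because the gating is retrospective, the slow hardware frame rate relative to the breathing rate is not a problem: each acquisition only needs a recorded respiratory phase, and frames from many cycles accumulate into each bin.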
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information: digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets, or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
CloudSat Reflectivity Data Visualization Inside Hurricanes
NASA Technical Reports Server (NTRS)
Suzuki, Shigeru; Wright, John R.; Falcon, Pedro C.
2011-01-01
Animations and other outreach products have been created and released to the public quickly after the CloudSat spacecraft flew over hurricanes. The automated script scans through the CloudSat quicklook data to find significant atmospheric moisture content. Once such a region is found, data from multiple sources is combined to produce the data products and the animations. KMZ products are quickly generated from the quicklook data for viewing in Google Earth and other tools. Animations are also generated to show the atmospheric moisture data in context with the storm cloud imagery. Global images from GOES satellites are shown to give context. The visualization provides better understanding of the interior of the hurricane storm clouds, which is difficult to observe directly. The automated process creates the finished animation in the High Definition (HD) video format for quick release to the media and public.
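The automated scan described above, stepping through quicklook reflectivity data to find significant atmospheric moisture, might look like the following sketch. The threshold, window length, and array layout are invented for illustration and are not CloudSat's actual processing parameters.

```python
import numpy as np

def find_moist_segments(reflectivity_dbz, threshold=-10.0, window=50):
    """Flag contiguous along-track runs of profiles whose mean
    reflectivity exceeds a threshold; return (start, stop) indices.

    reflectivity_dbz: 2-D array, range bins x along-track profiles.
    """
    profile_mean = np.nanmean(reflectivity_dbz, axis=0)  # per profile
    hot = profile_mean > threshold
    segments, start = [], None
    for i, flag in enumerate(hot):
        if flag and start is None:
            start = i                      # run begins
        elif not flag and start is not None:
            if i - start >= window:        # keep only long runs
                segments.append((start, i))
            start = None
    if start is not None and len(hot) - start >= window:
        segments.append((start, len(hot)))
    return segments

# Demo: a clear-sky curtain with one 100-profile-wide storm region.
curtain = np.full((100, 300), -30.0)
curtain[:, 100:200] = 5.0
segs = find_moist_segments(curtain)  # -> [(100, 200)]
```

Once a segment is found, the corresponding profiles and times would be handed to the downstream steps the abstract mentions: combining data sources, writing KMZ products, and rendering the HD animation.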
Chordate evolution and the origin of craniates: an old brain in a new head.
Butler, A B
2000-06-15
The earliest craniates achieved a unique condition among bilaterally symmetrical animals: they possessed enlarged, elaborated brains with paired sense organs and unique derivatives of neural crest and placodal tissues, including peripheral sensory ganglia, visceral arches, and head skeleton. The craniate sister taxon, cephalochordates, has rostral portions of the neuraxis that are homologous to some of the major divisions of craniate brains. Moreover, recent data indicate that many genes involved in patterning the nervous system are common to all bilaterally symmetrical animals and have been inherited from a common ancestor. Craniates, thus, have an "old" brain in a new head, due to re-expression of these anciently acquired genes. The transition to the craniate brain from a cephalochordate-like ancestral form may have involved a mediolateral shift in expression of the genes that specify nervous system development from various parts of the ectoderm. It is suggested here that the transition was sequential. The first step involved the presence of paired, lateral eyes, elaboration of the alar plate, and enhancement of the descending visual pathway to brainstem motor centers. Subsequently, this central visual pathway served as a template for the additional sensory systems that were elaborated and/or augmented with the "bloom" of migratory neural crest and placodes. This model accounts for the marked uniformity of pattern across central sensory pathways and for the lack of any neural crest-placode cranial nerve for either the diencephalon or mesencephalon. Anat Rec (New Anat) 261:111-125, 2000. Copyright 2000 Wiley-Liss, Inc.
Neuroprotection in a Novel Mouse Model of Multiple Sclerosis
Lidster, Katie; Jackson, Samuel J.; Ahmed, Zubair; Munro, Peter; Coffey, Pete; Giovannoni, Gavin; Baker, Mark D.; Baker, David
2013-01-01
Multiple sclerosis is an immune-mediated, demyelinating and neurodegenerative disease that currently lacks any neuroprotective treatments. Innovative neuroprotective trial designs are required to hasten the translational process of drug development. An ideal target to monitor the efficacy of strategies aimed at treating multiple sclerosis is the visual system, which is the most accessible part of the human central nervous system. A novel C57BL/6 mouse line was generated that expressed transgenes for a myelin oligodendrocyte glycoprotein-specific T cell receptor and a retinal ganglion cell restricted-Thy1 promoter-controlled cyan fluorescent protein. This model develops spontaneous or induced optic neuritis, in the absence of the paralytic disease normally associated with most rodent autoimmune models of multiple sclerosis. Demyelination and neurodegeneration could be monitored longitudinally in the living animal using electrophysiology, visual sensitivity, confocal scanning laser ophthalmoscopy and optical coherence tomography, all of which are relevant to human trials. This model offers many advantages, from a 3Rs, economic and scientific perspective, over classical experimental autoimmune encephalomyelitis models that are associated with substantial suffering of animals. Optic neuritis in this model led to inflammatory damage of axons in the optic nerve and subsequent loss of retinal ganglion cells in the retina. This was inhibited by the systemic administration of a sodium channel blocker (oxcarbazepine) or intraocular treatment with siRNA targeting caspase-2. These novel approaches have relevance to the future treatment of neurodegeneration in MS, which has so far evaded treatment. PMID:24223903
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
The dorsal raphe modulates sensory responsiveness during arousal in zebrafish
Yokogawa, Tohei; Hannan, Markus C.; Burgess, Harold A.
2012-01-01
During waking behavior, animals adapt their state of arousal in response to environmental pressures. Sensory processing is regulated in aroused states, and several lines of evidence imply that this is mediated at least partly by the serotonergic system. However, there is little information directly showing that serotonergic function is required for state-dependent modulation of sensory processing. Here we find that zebrafish larvae can maintain a short-term state of arousal during which neurons in the dorsal raphe modulate sensory responsiveness to behaviorally relevant visual cues. Following a brief exposure to water flow, larvae show elevated activity and heightened sensitivity to perceived motion. Calcium imaging of neuronal activity after flow revealed increased activity in serotonergic neurons of the dorsal raphe. Genetic ablation of these neurons abolished the increase in visual sensitivity during arousal without affecting baseline visual function or locomotor activity. We traced projections from the dorsal raphe to a major visual area, the optic tectum. Laser ablation of the tectum demonstrated that this structure, like the dorsal raphe, is required for improved visual sensitivity during arousal. These findings reveal that serotonergic neurons of the dorsal raphe have a state-dependent role in matching sensory responsiveness to behavioral context. PMID:23100441
Light and the laboratory mouse.
Peirson, Stuart N; Brown, Laurence A; Pothecary, Carina A; Benson, Lindsay A; Fisk, Angus S
2018-04-15
Light exerts widespread effects on physiology and behaviour. As well as the widely-appreciated role of light in vision, light also plays a critical role in many non-visual responses, including regulating circadian rhythms, sleep, pupil constriction, heart rate, hormone release and learning and memory. In mammals, responses to light are all mediated via retinal photoreceptors, including the classical rods and cones involved in vision as well as the recently identified melanopsin-expressing photoreceptive retinal ganglion cells (pRGCs). Understanding the effects of light on the laboratory mouse therefore depends upon an appreciation of the physiology of these retinal photoreceptors, including their differing sensitivities to absolute light levels and wavelengths. The signals from these photoreceptors are often integrated, with different responses involving distinct retinal projections, making generalisations challenging. Furthermore, many commonly used laboratory mouse strains carry mutations that affect visual or non-visual physiology, ranging from inherited retinal degeneration to genetic differences in sleep and circadian rhythms. Here we provide an overview of the visual and non-visual systems before discussing practical considerations for the use of light for researchers and animal facility staff working with laboratory mice. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Real-time speech-driven animation of expressive talking faces
NASA Astrophysics Data System (ADS)
Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli
2011-05-01
In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level is modelled on the relationship between the acoustic features of frames and the audio labels within phonemes. Using certain constraints, the predicted emotion labels of the speech are adjusted to obtain facial expression labels, which are combined with the sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visually synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and that the synthesized facial sequences reach a comparably convincing quality.
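The morphing step described above can be sketched as linear interpolation between FAU intensity vectors. This is an illustrative sketch only, not the authors' implementation: the function name, the vector representation of FAUs, and the uniform frame spacing are all assumptions.

```python
import numpy as np

def morph_faus(fau_start, fau_end, n_frames):
    """Linearly interpolate between two facial-action-unit (FAU)
    intensity vectors to produce a sequence of animation frames.
    Frame 0 is the start pose; the last frame is the end pose."""
    fau_start = np.asarray(fau_start, dtype=float)
    fau_end = np.asarray(fau_end, dtype=float)
    # t runs from 0 (start pose) to 1 (end pose), one value per frame
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - t) * fau_start + t * fau_end for t in ts]

# Two hypothetical FAU intensity vectors (e.g. jaw drop, lip corner pull)
frames = morph_faus([0.0, 0.2], [1.0, 0.8], n_frames=5)
```

In a real-time system, each interpolated FAU vector would then drive the deformation of a face mesh for the corresponding video frame.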
Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji
2017-01-01
We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study to use 3DCG animations to examine the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g. lack of motion, lack of colour, alteration in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish on a computer display. Medaka presented with the normal virtual fish spent a long time in proximity to the display, whereas the time spent near the display decreased in the other groups compared with the normal virtual medaka group. These results suggest that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects of body motion and locomotion were also detected. 3DCG animations can be a useful tool for studying the mechanisms of visual processing and social behaviour in medaka. PMID:28399163
Cumulative latency advance underlies fast visual processing in desynchronized brain state
Wang, Xu-dong; Chen, Cheng; Zhang, Dinghong; Yao, Haishan
2014-01-01
Fast sensory processing is vital for the animal to efficiently respond to the changing environment. This is usually achieved when the animal is vigilant, as reflected by cortical desynchronization. However, the neural substrate for such fast processing remains unclear. Here, we report that neurons in rat primary visual cortex (V1) exhibited shorter response latency in the desynchronized state than in the synchronized state. In vivo whole-cell recording from the same V1 neurons undergoing the two states showed that both the resting and visually evoked conductances were higher in the desynchronized state. Such conductance increases of single V1 neurons shorten the response latency by elevating the membrane potential closer to the firing threshold and reducing the membrane time constant, but the effects only account for a small fraction of the observed latency advance. Simultaneous recordings in lateral geniculate nucleus (LGN) and V1 revealed that LGN neurons also exhibited latency advance, with a degree smaller than that of V1 neurons. Furthermore, latency advance in V1 increased across successive cortical layers. Thus, latency advance accumulates along various stages of the visual pathway, likely due to a global increase of membrane conductance in the desynchronized state. This cumulative effect may lead to a dramatic shortening of response latency for neurons in higher visual cortex and play a critical role in fast processing for vigilant animals. PMID:24347634
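The single-neuron argument above (higher conductance shortens the membrane time constant, and a more depolarized rest sits closer to threshold) can be illustrated with a passive RC membrane charging under a step current. This is a textbook-style sketch, not the authors' model; all parameter values are arbitrary illustrative choices.

```python
import math

def latency_to_threshold(g, C, v_rest, v_thresh, i_step):
    """Time for a passive RC membrane to charge from rest to the spike
    threshold under a step current i_step:
        V(t) = v_inf + (v_rest - v_inf) * exp(-t / tau),
    with tau = C / g and v_inf = v_rest + i_step / g (SI units)."""
    tau = C / g                       # membrane time constant (s)
    v_inf = v_rest + i_step / g       # steady-state voltage (V)
    if v_inf <= v_thresh:
        return math.inf               # drive too weak: never reaches threshold
    return tau * math.log((v_inf - v_rest) / (v_inf - v_thresh))

# "Synchronized" state: lower conductance, more hyperpolarized rest.
lat_sync = latency_to_threshold(g=10e-9, C=200e-12, v_rest=-0.070,
                                v_thresh=-0.050, i_step=0.3e-9)
# "Desynchronized" state: doubled conductance (halved tau) and a
# resting potential closer to threshold.
lat_desync = latency_to_threshold(g=20e-9, C=200e-12, v_rest=-0.060,
                                  v_thresh=-0.050, i_step=0.3e-9)
```

With these illustrative numbers the desynchronized-state latency is roughly half the synchronized-state latency, in the spirit of the latency advance reported above; the abstract notes such single-cell effects explain only a fraction of the advance, the rest accumulating across stages of the pathway.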
Serotonin Decreases the Gain of Visual Responses in Awake Macaque V1.
Seillier, Lenka; Lorenz, Corinna; Kawaguchi, Katsuhisa; Ott, Torben; Nieder, Andreas; Pourriahi, Paria; Nienborg, Hendrikje
2017-11-22
Serotonin, an important neuromodulator in the brain, is implicated in affective and cognitive functions. However, its role even for basic cortical processes is controversial. For example, in the mammalian primary visual cortex (V1), heterogeneous serotonergic modulation has been observed in anesthetized animals. Here, we combined extracellular single-unit recordings with iontophoresis in awake animals. We examined the role of serotonin on well-defined tuning properties (orientation, spatial frequency, contrast, and size) in V1 of two male macaque monkeys. We find that in the awake macaque the modulatory effect of serotonin is surprisingly uniform: it causes a mainly multiplicative decrease of the visual responses and a slight increase in the stimulus-selective response latency. Moreover, serotonin neither systematically changes the selectivity or variability of the response, nor the interneuronal correlation unexplained by the stimulus ("noise-correlation"). The modulation by serotonin has qualitative similarities with that for a decrease in stimulus contrast, but differs quantitatively from decreasing contrast. It can be captured by a simple additive change to a threshold-linear spiking nonlinearity. Together, our results show that serotonin is well suited to control the response gain of neurons in V1 depending on the animal's behavioral or motivational context, complementing other known state-dependent gain-control mechanisms. SIGNIFICANCE STATEMENT Serotonin is an important neuromodulator in the brain and a major target for drugs used to treat psychiatric disorders. Nonetheless, surprisingly little is known about how it shapes information processing in sensory areas. Here we examined the serotonergic modulation of visual processing in the primary visual cortex of awake behaving macaque monkeys. We found that serotonin mainly decreased the gain of the visual responses, without systematically changing their selectivity, variability, or covariability.
This identifies a simple computational function of serotonin for state-dependent sensory processing, depending on the animal's affective or motivational state. Copyright © 2017 Seillier, Lorenz et al.
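The abstract states that the serotonergic effect "can be captured by a simple additive change to a threshold-linear spiking nonlinearity." A minimal sketch of that idea is below; it is illustrative only, not the authors' fitted model, and the threshold and offset values are arbitrary assumptions.

```python
import numpy as np

def threshold_linear(drive, theta=1.0, offset=0.0):
    """Threshold-linear spiking nonlinearity: rate = max(0, drive - theta),
    with `offset` added before rectification to model a neuromodulator."""
    return np.maximum(0.0, np.asarray(drive, dtype=float) - theta + offset)

drive = np.linspace(0.0, 10.0, 6)     # hypothetical stimulus-driven input
control = threshold_linear(drive)                # baseline responses
with_5ht = threshold_linear(drive, offset=-2.0)  # additive suppression
```

A fixed negative offset applied before the rectification lowers responses across the whole input range while leaving the shape of the input-output relation otherwise unchanged, consistent with the reported gain-like decrease without systematic changes in selectivity.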