Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
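As a concrete illustration of the downward-camera idea described above (a sketch, not the author's implementation), the snippet below estimates per-frame planar motion from tracked features in a nadir view. The focal length, camera height, and OpenCV-based pipeline are all assumptions made for the example.

```python
# Minimal sketch: frame-to-frame planar odometry from a downward-pointing camera,
# assuming a calibrated camera at a known height above a flat ground plane.
import cv2
import numpy as np

FOCAL_PX = 600.0      # assumed focal length in pixels
HEIGHT_M = 0.30       # assumed camera height above the ground (metres)

def planar_odometry_step(prev_gray, curr_gray):
    """Estimate (dx, dy, dtheta) of the camera over the ground between two frames."""
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=7)
    if pts0 is None:
        return 0.0, 0.0, 0.0
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
    good0 = pts0[status.ravel() == 1].reshape(-1, 2)
    good1 = pts1[status.ravel() == 1].reshape(-1, 2)
    if len(good0) < 10:
        return 0.0, 0.0, 0.0
    # A 2D similarity fit (rotation + translation + scale) is enough for a nadir view of a plane.
    M, _ = cv2.estimateAffinePartial2D(good0, good1, method=cv2.RANSAC)
    if M is None:
        return 0.0, 0.0, 0.0
    dtheta = np.arctan2(M[1, 0], M[0, 0])
    # Pixel translation of the ground maps to metres through height / focal length;
    # the camera moves opposite to the apparent image motion.
    scale = HEIGHT_M / FOCAL_PX
    dx, dy = -M[0, 2] * scale, -M[1, 2] * scale
    return dx, dy, dtheta
```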
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images
Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi
2016-01-01
Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, since blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework that is robust to blurred images. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm to make the VO robust to blurred images. We also carried out varied comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under varied blurred images, and the experimental results show that our approach achieves superior performance compared to state-of-the-art methods on blurred images while adding little computational cost to the original VO algorithms. PMID:27399704
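The SIGD measure itself is not reproduced above, so the sketch below substitutes a plain gradient-magnitude sharpness score to illustrate the blur-aware keyframe filtering idea; the threshold value is an arbitrary assumption, not the paper's.

```python
# Illustrative sketch only: a simple gradient-based blur score used to skip blurred
# frames before VO, standing in for the paper's SIGD measure (details not given here).
import numpy as np

def blur_score(gray):
    """Higher = sharper. Mean magnitude of central-difference image gradients."""
    gray = gray.astype(np.float32)
    gx = np.abs(np.diff(gray, axis=1))
    gy = np.abs(np.diff(gray, axis=0))
    return float(gx.mean() + gy.mean())

def select_keyframes(frames, threshold=6.0):
    """Keep only frames whose sharpness exceeds an (assumed) threshold."""
    return [f for f in frames if blur_score(f) >= threshold]
```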
RGB-D SLAM Combining Visual Odometry and Extended Information Filter
Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue
2015-01-01
In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.
Song, Yu; Nuske, Stephen; Scherer, Sebastian
2016-12-22
State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts: a stochastic cloning EKF (Extended Kalman Filter) estimator, derived and discussed in detail, that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); and a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation, which uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive motion, intermittent GPS and high-altitude MAV flight.
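A heavily simplified, one-dimensional illustration of loosely-coupled fusion is sketched below: absolute measurements (e.g., GPS position) update position directly, while visual odometry is folded in as a velocity pseudo-measurement. The paper's stochastic-cloning EKF handles relative measurements more rigorously; all noise values here are assumptions for illustration.

```python
# Minimal loosely-coupled fusion sketch (1-D, state = [position, velocity]).
import numpy as np

class SimpleFusionKF:
    def __init__(self, dt=0.01):
        self.dt = dt
        self.x = np.zeros(2)              # [position, velocity]
        self.P = np.eye(2)
        self.Q = np.diag([1e-4, 1e-2])    # assumed process noise

    def predict(self, accel):
        """Propagate with an IMU acceleration sample."""
        F = np.array([[1.0, self.dt], [0.0, 1.0]])
        B = np.array([0.5 * self.dt**2, self.dt])
        self.x = F @ self.x + B * accel
        self.P = F @ self.P @ F.T + self.Q

    def _update(self, z, H, R):
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P

    def update_gps(self, pos, sigma=2.0):
        """Absolute position measurement (GPS or barometric altitude)."""
        self._update(np.array([pos]), np.array([[1.0, 0.0]]), np.array([[sigma**2]]))

    def update_vo(self, delta_pos, dt, sigma=0.05):
        """Relative VO displacement folded in as a velocity pseudo-measurement."""
        self._update(np.array([delta_pos / dt]), np.array([[0.0, 1.0]]), np.array([[sigma**2]]))
```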
The precision of locomotor odometry in humans.
Durgin, Frank H; Akagi, Mikio; Gallistel, Charles R; Haiken, Woody
2009-03-01
Two experiments measured the human ability to reproduce locomotor distances of 4.6-100 m without visual feedback and compared distance production with time production. Subjects were not permitted to count steps. It was found that the precision of human odometry follows Weber's law: variability is proportional to distance. The coefficients of variation for distance production were much lower than those measured for time production of similar durations. Gait parameters recorded during the task (average step length and step frequency) were found to be even less variable, suggesting that step integration could be the basis for non-visual human odometry.
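The Weber's-law claim can be illustrated numerically: if reproduction error grows in proportion to distance, the coefficient of variation (standard deviation divided by mean) stays roughly constant across target distances. The snippet below uses synthetic numbers, not the study's data.

```python
# Synthetic illustration of Weber's law for distance reproduction.
import numpy as np

rng = np.random.default_rng(0)
for target in (5.0, 20.0, 80.0):                                   # target distances in metres
    produced = target * (1.0 + 0.1 * rng.standard_normal(1000))    # ~10% proportional noise
    cv = produced.std() / produced.mean()
    print(f"target {target:5.1f} m  ->  coefficient of variation ~ {cv:.3f}")
```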
Visual Odometry for Autonomous Deep-Space Navigation
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of comm. To do this, a camera is combined with cutting-edge algorithms (called Visual Odometry) into a unit that provides accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances this technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, and loss-of-comm navigation.
Single-camera visual odometry to track a surgical X-ray C-arm base.
Esfandiari, Hooman; Lichti, Derek; Anglin, Carolyn
2017-12-01
This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
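The homography-based dead-reckoning loop described above can be sketched roughly as follows. This is an assumption-laden illustration, not the authors' code: ORB features, a brute-force matcher, and RANSAC homography estimation stand in for whatever detector and estimator the system actually uses.

```python
# Sketch: accumulate frame-to-frame homographies from a downward-looking camera
# to dead-reckon the base in the image plane (metric conversion omitted).
import cv2
import numpy as np

def frame_to_frame_homography(prev_gray, curr_gray):
    orb = cv2.ORB_create(1000)
    k0, d0 = orb.detectAndCompute(prev_gray, None)
    k1, d1 = orb.detectAndCompute(curr_gray, None)
    if d0 is None or d1 is None:
        return np.eye(3)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)
    if len(matches) < 8:
        return np.eye(3)
    p0 = np.float32([k0[m.queryIdx].pt for m in matches])
    p1 = np.float32([k1[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)
    return H if H is not None else np.eye(3)

def dead_reckon(frames):
    """Return the cumulative homography (pixel-frame pose) after each frame."""
    total = np.eye(3)
    poses = [total.copy()]
    for prev, curr in zip(frames, frames[1:]):
        total = frame_to_frame_homography(prev, curr) @ total
        poses.append(total.copy())
    return poses
```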
Applicability of Deep-Learning Technology for Relative Object-Based Navigation
2017-09-01
...possible selections for navigating an unmanned ground vehicle (UGV) is through real-time visual odometry. To navigate in such an environment, the UGV needs to be able to detect, identify, and relate the static...
A Coordinated Control Architecture for Disaster Response Robots
2016-01-01
...to use these same algorithms to provide navigation odometry for the vehicle motions when the robot is driving. Visual Odometry: The YouTube link... depressed the accelerator pedal. We relied on the fact that the vehicle quickly comes to rest when the accelerator pedal is not being pressed...
Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye
2014-01-01
This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
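To illustrate the scale-correction idea (a sketch under assumptions, not the paper's algorithm), the snippet below rescales an up-to-scale monocular trajectory whenever a laser range and the corresponding estimated scene depth are available; the data layout (dicts keyed by frame index) is invented for the example.

```python
# Sketch: correct monocular scale drift with sparse absolute ranges from a laser
# distance meter. The ratio of measured range to estimated (up-to-scale) depth at a
# measurement frame rescales the subsequent relative motion.
import numpy as np

def rescale_trajectory(rel_translations, est_depths, laser_ranges):
    """rel_translations: list of per-frame (3,) translation vectors (arbitrary scale).
    est_depths / laser_ranges: dicts {frame_index: value}; the laser gives metric range."""
    scale = 1.0
    position = np.zeros(3)
    trajectory = [position.copy()]
    for i, t in enumerate(rel_translations):
        if i in laser_ranges and i in est_depths and est_depths[i] > 0:
            scale = laser_ranges[i] / est_depths[i]   # metric / up-to-scale depth
        position = position + scale * np.asarray(t, dtype=float)
        trajectory.append(position.copy())
    return trajectory
```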
Intelligent Behavioral Action Aiding for Improved Autonomous Image Navigation
2012-09-13
odometry, SICK laser scanning unit (Lidar), Inertial Measurement Unit (IMU) and ultrasonic distance measurement system (Figure 32). The Lidar, IMU... (2010, July) GPS world. [Online]. http://www.gpsworld.com/tech-talk-blog/gnss-independent-navigation-solution-using-integrated-lidar-data-11378 [4]... Milford, David McKinnon, Michael Warren, Gordon Wyeth, and Ben Upcroft, "Feature-based Visual Odometry and Featureless Place Recognition for SLAM in...
Advanced Wireless Integrated Navy Network - AWINN
2005-09-30
progress report No. 3 on AWINN hardware and software configurations of smart, wideband, multi-function antennas, secure configurable platform, close-in... results to the host PC via a UART soft core. The UART core used is a proprietary Xilinx core which incorporates features described in National... current software uses wheel odometry and visual landmarks to create a map and estimate position on an internal x, y grid. The wheel odometry provides a...
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory's considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm's performance and ability to process 'flight-like' imagery formats with a 'flight-like' trajectory, positioning ourselves to easily process flight data from the upcoming 'ISS Selfie' activity and then compare the algorithm's quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
Kriechbaumer, Thomas; Blackburn, Kim; Breckon, Toby P.; Hamilton, Oliver; Rivas Casado, Monica
2015-01-01
Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environmental protection and conservation. A key challenge is the accurate localisation of the vessel, where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six degrees of freedom platform operating under guided motion but with stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring. PMID:26694411
PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features
Zhao, Ji; Guo, Yue; He, Wenhao; Yuan, Kui
2018-01-01
To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computational simplicity and representational compactness of a 3D spatial line, Plücker coordinates and the orthonormal representation of the line are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term together with the point and line re-projection error terms in a sliding window optimization framework. Experiments on public datasets demonstrate that the PL-VIO method, which combines point and line features, outperforms several state-of-the-art VIO systems that use point features only. PMID:29642648
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, which has been estimated based on the light-field image, and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which makes finding stereo correspondences easier. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
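The Kalman-like depth update mentioned above reduces, in its simplest form, to variance-weighted averaging of two depth hypotheses; the toy sketch below (with assumed numbers, not the paper's) shows one such fusion step.

```python
# Variance-weighted fusion of two independent estimates of the same (virtual) depth.
def fuse_depth(d1, var1, d2, var2):
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_depth = fused_var * (d1 / var1 + d2 / var2)
    return fused_depth, fused_var

# Example: a 2.0 m estimate (variance 0.04) refined by a 2.2 m estimate (variance 0.09)
print(fuse_depth(2.0, 0.04, 2.2, 0.09))   # -> roughly (2.06, 0.028)
```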
Exomars VisLoc- The Visual Localisation System for the Exomars Rover
NASA Astrophysics Data System (ADS)
Ward, R.; Hamilton, W.; Silva, N.; Pereira, V.
2016-08-01
Maintaining accurate knowledge of the current position of vehicles on the surface of Mars is a considerable problem. The lack of an orbital GPS means that the absolute position of a rover at any instant is very difficult to determine, and with that it is difficult to accurately and safely plan hazard avoidance manoeuvres. Some on-board methods of determining the evolving pose of a rover are well known, such as using wheel odometry to keep a log of the distance travelled. However, there are associated problems: wheels can slip in the Martian soil, providing odometry readings which can mislead navigation algorithms. One solution to this is to use a visual localisation system, which uses cameras to determine the actual rover motion from images of the terrain. By measuring movement from the terrain, an independent measure of the actual movement can be obtained to a high degree of accuracy. This paper presents the progress of the project to develop the Visual Localisation system for the ExoMars rover (VisLoc). The core algorithm used in the system is known as OVO (Oxford Visual Odometry), developed at the Mobile Robotics Group at the University of Oxford. Over a number of projects this system has been adapted from its original purpose (navigation systems for autonomous vehicles) to be a viable system for the unique challenges associated with extra-terrestrial use.
Monocular Visual Odometry Based on Trifocal Tensor Constraint
NASA Astrophysics Data System (ADS)
Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.
2018-02-01
For the problem of real-time precise localization in urban streets, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which, together with the state transition equation, forms the Kalman filter. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.
Low Cost Embedded Stereo System for Underwater Surveys
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.
2017-11-01
This paper provides details of both the hardware and software conception and realization of a hand-held stereo embedded system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps in taking appropriate actions accordingly in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can be run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising prospects for further development.
Fast instantaneous center of rotation estimation algorithm for a skid-steered robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2015-05-01
Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimation of the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projection of the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm quality by comparing the trajectories estimated by the algorithm with data from the motion capture system.
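A minimal sketch of the core estimation step is given below, under the assumption that the optical flow has already been back-projected to metric ground-plane coordinates (the Horn-Schunck front end is not reproduced here): fit a planar rigid velocity field and read off the instantaneous centre of rotation.

```python
# Least-squares fit of a planar rigid velocity field (vx, vy, omega) to ground-plane
# flow vectors, then derive the instantaneous centre of rotation (ICR).
import numpy as np

def fit_planar_motion(points, velocities):
    """points, velocities: (N, 2) arrays in the robot's ground frame (metres, m/s)."""
    A = np.zeros((2 * len(points), 3))
    b = velocities.reshape(-1)
    A[0::2, 0] = 1.0
    A[0::2, 2] = -points[:, 1]     # u_i = vx - omega * y_i
    A[1::2, 1] = 1.0
    A[1::2, 2] = points[:, 0]      # w_i = vy + omega * x_i
    (vx, vy, omega), *_ = np.linalg.lstsq(A, b, rcond=None)
    # The ICR is the point whose rigid-body velocity vanishes.
    icr = None if abs(omega) < 1e-9 else (-vy / omega, vx / omega)
    return vx, vy, omega, icr
```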
Attitude and position estimation on the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Ali, Khaled S.; Vanelli, C. Anthony; Biesiadecki, Jeffrey J.; Maimone, Mark W.; Yang Cheng, A.; San Martin, Miguel; Alexander, James W.
2005-01-01
NASA/JPL's Mars Exploration Rovers acquire their attitude upon command and autonomously propagate their attitude and position. The rovers use accelerometers and images of the sun to acquire attitude, autonomously searching the sky for the sun with a pointable camera. To propagate the attitude and position, the rovers use either accelerometer and gyro readings or gyro readings and wheel odometry, depending on the nature of the movement ground operators are commanding. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly in high-slip environments. The capability also exists for visual odometry attitude updates. This paper describes the techniques used by the rovers to acquire and maintain attitude and position knowledge, the accuracy that is obtainable, and lessons learned after more than one year in operation.
2012-08-29
The straight lines in Curiosity's zigzag track marks are Morse code for JPL. The footprint is an important reference mark that the rover can use to drive more precisely via a system called visual odometry.
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A light distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. The captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme relying on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, as it linearly depends on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in terms of computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
Spectrally queued feature selection for robotic visual odometry
NASA Astrophysics Data System (ADS)
Pirozzo, David M.; Frederick, Philip A.; Hunt, Shawn; Theisen, Bernard; Del Rose, Mike
2011-01-01
Over the last two decades, research in Unmanned Vehicles (UV) has rapidly progressed and become more influenced by the field of biological sciences. Researchers have been investigating mechanical aspects of varying species to improve UV air and ground intrinsic mobility; they have been exploring the computational aspects of the brain for the development of pattern recognition and decision algorithms; and they have been exploring the perception capabilities of numerous animals and insects. This paper describes a 3-month exploratory applied research effort performed at the US ARMY Research, Development and Engineering Command's (RDECOM) Tank Automotive Research, Development and Engineering Center (TARDEC) in the area of biologically inspired, spectrally augmented feature selection for robotic visual odometry. The motivation for this applied research was to develop a feasibility analysis on multi-spectrally queued feature selection, with improved temporal stability, for the purposes of visual odometry. The intended application is future semi-autonomous Unmanned Ground Vehicle (UGV) control, as the richness of data sets required to enable human-like behavior in these systems has yet to be defined.
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.
2009-01-01
The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry, and unsurveyed camera calibration, and has unique support for very wide-angle lenses.
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
NASA Astrophysics Data System (ADS)
Dimas, George; Iakovidis, Dimitris K.; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios
2017-09-01
Wireless capsule endoscopy is a non-invasive screening procedure of the gastrointestinal (GI) tract performed with an ingestible capsule endoscope (CE) of the size of a large vitamin pill. Such endoscopes are equipped with a usually low-frame-rate color camera which enables the visualization of the GI lumen and the detection of pathologies. The localization of the commercially available CEs is performed in the 3D abdominal space using radio-frequency (RF) triangulation from external sensor arrays, in combination with transit time estimation. State-of-the-art approaches, such as magnetic localization, which have been experimentally proved more accurate than the RF approach, are still at an early stage. Recently, we have demonstrated that CE localization is feasible using solely visual cues and geometric models. However, such approaches depend on camera parameters, many of which are unknown. In this paper the authors propose a novel non-parametric visual odometry (VO) approach to CE localization based on a feed-forward neural network architecture. The effectiveness of this approach in comparison to state-of-the-art geometric VO approaches is validated using a robotic-assisted in vitro experimental setup.
Gps-Denied Geo-Localisation Using Visual Odometry
NASA Astrophysics Data System (ADS)
Gupta, Ashish; Chang, Huan; Yilmaz, Alper
2016-06-01
The primary method for geo-localization is based on GPS, which has issues of localization accuracy, power consumption, and unavailability. This paper proposes a novel approach to geo-localization in a GPS-denied environment for a mobile platform. Our approach has two principal components: public domain transport network data available in GIS databases or OpenStreetMap; and a trajectory of a mobile platform. This trajectory is estimated using visual odometry and 3D view geometry. The transport map information is abstracted as a graph data structure, where various types of roads are modelled as graph edges and intersections are typically modelled as graph nodes. A real-time search for the trajectory in the graph yields the geo-location of the mobile platform. Our approach uses a simple visual sensor and has a low memory and computational footprint. In this paper, we demonstrate our method for trajectory estimation and provide examples of geo-localization using public-domain map data. With the rapid proliferation of visual sensors as part of automated driving technology and continuous growth in public domain map data, our approach has the potential to completely augment, or even supplant, GPS-based navigation since it functions in all environments.
Benchmarking real-time RGBD odometry for light-duty UAVs
NASA Astrophysics Data System (ADS)
Willis, Andrew R.; Sahawneh, Laith R.; Brink, Kevin M.
2016-06-01
This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real-time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. Discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users information needed to better leverage RGBD odometry within the constraints of their systems.
Robotic Vision-Based Localization in an Urban Environment
NASA Technical Reports Server (NTRS)
Mchenry, Michael; Cheng, Yang; Matthies
2007-01-01
A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization
Liu, Yanqing; Gu, Yuzhang; Li, Jiamao; Zhang, Xiaolin
2017-01-01
In this paper, we present a novel approach for stereo visual odometry with robust motion estimation that is faster and more accurate than standard RANSAC (Random Sample Consensus). Our method makes improvements in RANSAC in three aspects: first, the hypotheses are preferentially generated by sampling the input feature points on the order of ages and similarities of the features; second, the evaluation of hypotheses is performed based on the SPRT (Sequential Probability Ratio Test) that makes bad hypotheses discarded very fast without verifying all the data points; third, we aggregate the three best hypotheses to get the final estimation instead of only selecting the best hypothesis. The first two aspects improve the speed of RANSAC by generating good hypotheses and discarding bad hypotheses in advance, respectively. The last aspect improves the accuracy of motion estimation. Our method was evaluated in the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) and the New Tsukuba dataset. Experimental results show that the proposed method achieves better results for both speed and accuracy than RANSAC. PMID:29027935
Mafrica, Stefano; Servel, Alain; Ruffier, Franck
2016-11-10
Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to various visual patterns encountered thanks to its M²APIX auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackerman steering model. Indoor and outdoor experiments were carried out in which the robot was driven in the closed-loop mode based on the velocity and steering angle estimates. The experimental results obtained show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and in a 7-decade high-dynamic light level range. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with a relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor's speed sensor.
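The kinematic backbone of such car-like odometry can be sketched with the standard bicycle/Ackermann model below; the wheelbase and time step are assumed values, and the EKF wrapper used in the paper is omitted.

```python
# One dead-reckoning step of the bicycle/Ackermann model from estimated
# longitudinal velocity and steering angle.
import math

WHEELBASE = 0.25   # metres (assumed)

def ackermann_step(x, y, theta, v, steering, dt=0.02):
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / WHEELBASE) * math.tan(steering) * dt
    return x, y, theta
```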
Tracking Positions and Attitudes of Mars Rovers
NASA Technical Reports Server (NTRS)
Ali, Khaled; vanelli, Charles; Biesiadecki, Jeffrey; Martin, Alejandro San; Maimone, Mark; Cheng, Yang; Alexander, James
2006-01-01
The Surface Attitude Position and Pointing (SAPP) software, which runs on computers aboard the Mars Exploration Rovers, tracks the positions and attitudes of the rovers on the surface of Mars. Each rover acquires data on attitude from a combination of accelerometer readings and images of the Sun acquired autonomously, using a pointable camera to search the sky for the Sun. Depending on the nature of movement commanded remotely by operators on Earth, the software propagates attitude and position by use of either (1) accelerometer and gyroscope readings or (2) gyroscope readings and wheel odometry. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly on high-wheel-slip terrain. The attitude data are used by other software and ground-based personnel for pointing a high-gain antenna, planning and execution of driving, and positioning and aiming scientific instruments.
Stereo and IMU-Assisted Visual Odometry for Small Robots
NASA Technical Reports Server (NTRS)
2012-01-01
This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
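A toy, unoptimized version of the first function (disparity by cross-correlation on rectified images) might look like the following sketch; the window size, disparity range, and sum-of-absolute-differences cost are illustrative assumptions, not the flight implementation.

```python
# Block-matching disparity on rectified grayscale images (SAD cost, brute force).
import numpy as np

def disparity_sad(left, right, max_disp=32, win=5):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = float(np.argmin(costs))   # disparity with minimum SAD cost
    return disp
```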
Evaluation of odometry algorithm performances using a railway vehicle dynamic model
NASA Astrophysics Data System (ADS)
Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.
2012-05-01
In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error in the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for Hardware-in-the-Loop test rig testing of conventional odometry algorithms and of on-board safety-relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from measurements of the wheel angular speed. Two-dimensional models are not suitable for developing solutions like inertial-type localisation algorithms (using 3D accelerometers and 3D gyroscopes) or the introduction of a Global Positioning System (or similar) or a magnetometer. In order to test these algorithms correctly and improve odometry performance, a three-dimensional multibody model of a railway vehicle has been developed using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful to evaluate the performance of odometry algorithms and of safety-relevant on-board subsystems.
Acquiring Semantically Meaningful Models for Robotic Localization, Mapping and Target Recognition
2014-12-21
...Representations • Point features tracking • Recovery of relative motion, visual odometry • Loop closure • Environment models, sparse clouds of points... that co-occur with the object of interest...
Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor
Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio
2011-01-01
This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated to the images of the stereo pairs and the second one between sets of features associated to consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching algorithms are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows to represent the scene view as a graph which emerge from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
A stingless bee can use visual odometry to estimate both height and distance.
Eckles, M A; Roubik, D W; Nieh, J C
2012-09-15
Bees move and forage within three dimensions and rely heavily on vision for navigation. The use of vision-based odometry has been studied extensively in horizontal distance measurement, but not vertical distance measurement. The honey bee Apis mellifera and the stingless bee Melipona seminigra measure distance visually using optic flow: the movement of images as they pass across the retina. The honey bees gauge height using image motion in the ventral visual field. The stingless bees forage at different tropical forest canopy levels, ranging up to 40 m at our site. Thus, estimating height would be advantageous. We provide the first evidence that the stingless bee Melipona panamica utilizes optic flow information to gauge not only distance traveled but also height above ground, by processing information primarily from the lateral visual field. After training bees to forage at a set height in a vertical tunnel lined with black and white stripes, we observed foragers that explored a new tunnel with no feeder. In a new tunnel, bees searched at the same height at which they were trained. In a narrower tunnel, bees experienced more image motion and significantly lowered their search height. In a wider tunnel, bees experienced less image motion and searched at significantly greater heights. In a tunnel without optic cues, bees were disoriented and searched at random heights. A horizontal tunnel testing these variables similarly affected foraging, but bees exhibited less precision (greater variance in search positions). Accurately gauging flight height above ground may be crucial for this species and others that compete for resources located at heights ranging from ground level to the high tropical forest canopies.
Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry
NASA Astrophysics Data System (ADS)
Kersten, J.; Rodehorst, V.
2016-06-01
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
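Feature bucketing, one of the simple strategies credited above with improving the results, can be sketched as follows (a generic illustration, not the authors' code; the grid size and per-cell limit are arbitrary): the image is split into a regular grid and only the strongest detections per cell are kept, spreading correspondences evenly over the image.

```python
import numpy as np

def bucket_features(keypoints, responses, img_shape, grid=(8, 8), max_per_cell=5):
    """Keep at most `max_per_cell` strongest keypoints in each grid cell.
    `keypoints` is an (N, 2) array of (x, y) pixel positions and
    `responses` the matching (N,) array of detector scores."""
    h, w = img_shape[:2]
    cell_w, cell_h = w / grid[0], h / grid[1]
    cells = {}
    for idx, (x, y) in enumerate(keypoints):
        key = (int(x // cell_w), int(y // cell_h))
        cells.setdefault(key, []).append(idx)
    keep = []
    for idxs in cells.values():
        idxs = sorted(idxs, key=lambda i: responses[i], reverse=True)
        keep.extend(idxs[:max_per_cell])
    return np.array(sorted(keep))   # indices of the retained keypoints
```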
Spirit rover localization and topographic mapping at the landing site of Gusev crater, Mars
Li, R.; Archinal, B.A.; Arvidson, R. E.; Bell, J.; Christensen, P.; Crumpler, L.; Des Marais, D.J.; Di, K.; Duxbury, T.; Golombek, M.P.; Grant, J. A.; Greeley, R.; Guinn, J.; Johnson, Aaron H.; Kirk, R.L.; Maimone, M.; Matthies, L.H.; Malin, M.; Parker, T.; Sims, M.; Thompson, S.; Squyres, S. W.; Soderblom, L.A.
2006-01-01
By sol 440, the Spirit rover has traversed a distance of 3.76 km (actual distance traveled instead of odometry). Localization of the lander and the rover along the traverse has been successfully performed at the Gusev crater landing site. We localized the lander in the Gusev crater using two-way Doppler radio positioning and cartographic triangulations through landmarks visible in both orbital and ground images. Additional high-resolution orbital images were used to verify the determined lander position. Visual odometry and bundle adjustment technologies were applied to compensate for wheel slippage, azimuthal angle drift, and other navigation errors (which were as large as 10.5% in the Husband Hill area). We generated topographic products, including 72 ortho maps and three-dimensional (3-D) digital terrain models, 11 horizontal and vertical traverse profiles, and one 3-D crater model (up to sol 440). Also discussed in this paper are uses of the data for science operations planning, geological traverse surveys, surveys of wind-related features, and other science applications. Copyright 2006 by the American Geophysical Union.
Underwater image mosaicking and visual odometry
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott
2017-05-01
This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU) - an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e., featureless) nature of video data obtained underwater, which could make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
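A hedged sketch of the final velocity-recovery step described here: the inter-frame pixel shift (obtained below with OpenCV phase correlation as one plausible choice for bland imagery) is converted to metres per second using the height above the seabed and the camera focal length in pixels. The pinhole and flat-seabed assumptions, as well as all names, are ours rather than the paper's.

```python
import cv2
import numpy as np

def seabed_velocity(prev_gray, curr_gray, altitude_m, focal_px, dt_s):
    """Estimate ground-relative velocity (m/s) from two downward-looking
    frames, assuming a pinhole camera and a locally flat seabed."""
    shift, _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(curr_gray))
    dx_px, dy_px = shift                        # pixel displacement between frames
    metres_per_px = altitude_m / focal_px       # ground sample distance
    vx = dx_px * metres_per_px / dt_s
    vy = dy_px * metres_per_px / dt_s
    return vx, vy
```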
NASA Astrophysics Data System (ADS)
Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.
2017-08-01
Airborne LiDAR systems require the use of Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception to this need, but its payload has to be lighter than that installed on board conventional airborne platforms, so the manufacturer needs to find an alternative to heavy sensors and navigation systems. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) by a camera and record the optical flow. The different frames would then be processed by photogrammetry so as to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs), as in airborne photogrammetry surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley which embeds a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points that are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and their known values.
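For reference, the direct-georeferencing relation that such a system must evaluate can be written in its standard textbook form (not quoted from the paper; symbol names are generic):

```latex
% r^m_P    : surveyed point coordinates in the mapping frame
% r^m_GNSS : GNSS antenna position in the mapping frame
% R^m_b    : body-to-mapping rotation (here estimated from visual odometry)
% a^b      : lever arm from antenna to sensor, in the body frame
% R^b_s    : sensor-to-body boresight rotation
% r^s_P    : raw range observation of P in the sensor frame
r^{m}_{P}(t) = r^{m}_{\mathrm{GNSS}}(t)
             + R^{m}_{b}(t)\,\bigl(a^{b} + R^{b}_{s}\, r^{s}_{P}(t)\bigr)
```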
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps of robotic motion estimation and largely influences its precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images and the Brute Force (BF) matcher is used to find the correspondences between the two images for the Space Intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next epoch is matched against the current left image, the EDC and RANSAC are performed iteratively. After these steps, a few mismatched points occasionally remain, for which a third RANSAC is applied to eliminate the effects of those outliers in the estimation of the ego-motion parameters (Interior Orientation and Exterior Orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
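A minimal OpenCV sketch of the matching-and-outlier-rejection chain described above (the pixel-distance interpretation of the EDC, the thresholds, and the use of the fundamental matrix for the RANSAC stage are our assumptions, not details taken from the paper):

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_and_filter(img_left, img_right, edc_thresh_px=100.0):
    """ORB + brute-force matching, then an (assumed) Euclidean distance
    constraint on the matched pixel coordinates, then RANSAC on the
    fundamental matrix to reject remaining outliers.
    Assumes enough matches survive each stage."""
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matches = bf.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # EDC: discard matches whose image-space displacement is implausibly large
    keep = np.linalg.norm(pts1 - pts2, axis=1) < edc_thresh_px
    pts1, pts2 = pts1[keep], pts2[keep]

    # RANSAC on the epipolar geometry removes the remaining mismatches
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```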
Colonnier, Fabien; Manecy, Augustin; Juston, Raphaël; Mallot, Hanspeter; Leitel, Robert; Floreano, Dario; Viollet, Stéphane
2015-02-25
In this study, a miniature artificial compound eye (15 mm in diameter) called the curved artificial compound eye (CurvACE) was endowed for the first time with hyperacuity, using similar micro-movements to those occurring in the fly's compound eye. A periodic micro-scanning movement of only a few degrees enables the vibrating compound eye to locate contrasting objects with a 40-fold greater resolution than that imposed by the interommatidial angle. In this study, we developed a new algorithm merging the output of 35 local processing units consisting of adjacent pairs of artificial ommatidia. The local measurements performed by each pair are processed in parallel with very few computational resources, which makes it possible to reach a high refresh rate of 500 Hz. An aerial robotic platform with two degrees of freedom equipped with the active CurvACE placed over naturally textured panels was able to assess its linear position accurately with respect to the environment thanks to its efficient gaze stabilization system. The algorithm was found to perform robustly at different light conditions as well as distance variations relative to the ground and featured small closed-loop positioning errors of the robot in the range of 45 mm. In addition, three tasks of interest were performed without having to change the algorithm: short-range odometry, visual stabilization, and tracking contrasting objects (hands) moving over a textured background.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
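The KLT detection, tracking and circle-matching steps might look roughly like the following OpenCV sketch, where a forward-backward consistency check stands in for the paper's circle matching and all thresholds are illustrative:

```python
import cv2
import numpy as np

def klt_circle_matching(prev_gray, curr_gray, fb_thresh_px=1.0):
    """Detect Shi-Tomasi corners, track them with pyramidal Lucas-Kanade,
    and keep only points that survive a forward-backward consistency check."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    p0r, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, p1, None)

    fb_err = np.linalg.norm(p0 - p0r, axis=2).ravel()   # forward-backward error
    good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh_px)
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
```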
Method of mobile robot indoor navigation by artificial landmarks with use of computer vision
NASA Astrophysics Data System (ADS)
Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.
2018-05-01
The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying calculation errors in the distance traveled due to slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of an experiment on mobile robot navigation with the use of this control system are presented.
Localization Methods for a Mobile Robot in Urban Environments
2004-10-04
An extended Kalman filter integrates the sensor data and keeps track of the uncertainty associated with it. The second method is based on... [Figure 4: diagram of the extended Kalman filter fusing corrected odometry pose with compass/GPS error estimates.]
An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry
NASA Astrophysics Data System (ADS)
Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro
This paper proposes a light-weight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning using only dead-reckoning. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive, utilizing a particle filter on a 2D grid map generated by projecting 3D points onto the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
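The map-based localization stage can be illustrated with a deliberately simplified grid-map particle filter (our own sketch, not the authors' implementation; the motion noise and the occupancy-based likelihood are placeholders):

```python
import numpy as np

def pf_update(particles, odom_delta, scan_xy, grid, res, noise=(0.02, 0.02, 0.01)):
    """One predict/weight/resample cycle.
    particles: (N, 3) array of (x, y, theta); odom_delta: (dx, dy, dtheta)
    in the robot frame; scan_xy: (M, 2) scan points in the robot frame;
    grid: 2D occupancy array (1 = occupied); res: metres per cell."""
    n = len(particles)
    # Predict: apply the odometry increment in each particle's own frame, plus noise
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * odom_delta[0] - s * odom_delta[1]
    particles[:, 1] += s * odom_delta[0] + c * odom_delta[1]
    particles[:, 2] += odom_delta[2]
    particles += np.random.randn(n, 3) * noise

    # Weight: fraction of projected scan points that land on occupied map cells
    w = np.empty(n)
    for i, (x, y, th) in enumerate(particles):
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        pts = scan_xy @ R.T + (x, y)
        ij = np.clip((pts / res).astype(int), 0, np.array(grid.shape)[::-1] - 1)
        w[i] = grid[ij[:, 1], ij[:, 0]].mean() + 1e-6
    w /= w.sum()

    # Resample (multinomial, for brevity; systematic resampling would also work)
    idx = np.random.choice(n, size=n, p=w)
    return particles[idx].copy()
```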
Neural basis of forward flight control and landing in honeybees.
Ibbotson, M R; Hung, Y-S; Meffin, H; Boeddeker, N; Srinivasan, M V
2017-11-06
The impressive repertoire of visually guided behaviors in honeybees, and their ability to learn, has made them an important tool for elucidating the visual basis of behavior. Like other insects, bees perform optomotor course correction in response to optic flow, a response that is dependent on the spatial structure of the visual environment. However, bees can also distinguish the speed of image motion during forward flight and landing, as well as estimate flight distances (odometry), irrespective of the visual scene. The neural pathways underlying these abilities are unknown. Here we report on a cluster of descending neurons (DNIIIs) that are shown to have the directional tuning properties necessary for detecting image motion during forward flight and landing on vertical surfaces. They have stable firing rates during prolonged periods of stimulation and respond to a wide range of image speeds, making them suitable to detect image flow during flight behaviors. While their responses are not strictly speed tuned, the shape and amplitudes of their speed tuning functions are resistant to large changes in spatial frequency. These cells are prime candidates not only for the control of flight speed and landing, but also for the neural 'front end' of the honeybee's visual odometer.
Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier
2011-01-01
This paper describes a relative localization system used to achieve the navigation of a convoy of robotic units in indoor environments. This positioning system is carried out by fusing two sensor sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source allows one to compensate for the cumulative error inherent in dead-reckoning, whereas the odometry source provides less pose uncertainty in short trajectories. A discrete Extended Kalman Filter, customized for this application, is used in order to accomplish this aim under real-time constraints. Different experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low-cost laser range systems and robot built-in odometry sensors is able to give a high degree of robustness and accuracy to the relative localization problem of convoy units for indoor applications. PMID:22164079
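The abstract does not give the filter equations, but a generic discrete EKF for this kind of odometry-plus-landmark fusion, with an (x, y, heading) state, odometry increments for prediction and a range-bearing observation of a known landmark for correction, can be sketched as follows (all noise values are illustrative):

```python
import numpy as np

def ekf_step(x, P, u, z, landmark,
             Q=np.diag([1e-4, 1e-4, 1e-4]), R=np.diag([1e-2, 1e-3])):
    """x = (x, y, theta); u = (ds, dtheta) from odometry;
    z = (range, bearing) to a landmark at a known position."""
    # Predict with dead-reckoning
    x_pred = x + np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])
    F = np.array([[1, 0, -u[0] * np.sin(x[2])],
                  [0, 1,  u[0] * np.cos(x[2])],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q

    # Update with the laser/landmark measurement
    dx, dy = landmark[0] - x_pred[0], landmark[1] - x_pred[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0],
                  [ dy / q,          -dx / q,          -1]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap bearing residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred
```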
An enhanced inertial navigation system based on a low-cost IMU and laser scanner
NASA Astrophysics Data System (ADS)
Kim, Hyung-Soon; Baeg, Seung-Ho; Yang, Kwang-Woong; Cho, Kuk; Park, Sangdeok
2012-06-01
This paper describes an enhanced fusion method for an Inertial Navigation System (INS) based on a 3-axis accelerometer, a 3-axis gyroscope and a laser scanner. In GPS-denied environments, such as indoors or in dense forests, pure INS odometry can be used for estimating the trajectory of a human or robot. However, it has a critical implementation problem: drift errors in velocity, position and heading angles. Commonly the problem can be solved by fusing visual landmarks, a magnetometer or radio beacons. These methods are not robust in diverse conditions: darkness, fog or sunlight, an unstable magnetic field and environmental obstacles. We propose to overcome the drift problem using an Iterative Closest Point (ICP) scan matching algorithm with a laser scanner. This system consists of three parts. The first is the INS. It estimates attitude, velocity and position based on a 6-axis Inertial Measurement Unit (IMU) with both 'Heuristic Reduction of Gyro Drift' (HRGD) and 'Heuristic Reduction of Velocity Drift' (HRVD) methods. A frame-to-frame ICP matching algorithm for estimating position and attitude from laser scan data is the second. The third is an extended Kalman filter for multi-sensor data fusion: INS and Laser Range Finder (LRF). The proposed method is simple and robust in diverse environments, so we could reduce the drift error efficiently. We confirm the result by comparing the odometry from the experiment with the ICP- and LRF-aided INS in a long corridor.
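The frame-to-frame ICP stage can be illustrated with a minimal point-to-point 2D variant using an SVD-based rigid fit and k-d-tree nearest neighbours (a generic sketch, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=30):
    """Align 2D scan `src` (N, 2) to reference scan `dst` (M, 2).
    Returns rotation R (2x2) and translation t (2,)."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, nn = tree.query(moved)                 # closest reference point for each source point
        A, B = moved, dst[nn]
        ca, cb = A.mean(0), B.mean(0)
        H = (A - ca).T @ (B - cb)                 # cross-covariance of centred point sets
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:                 # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = cb - dR @ ca
        R, t = dR @ R, dR @ t + dt                # accumulate the incremental transform
    return R, t
```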
Visual Target Tracking on the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Kim, Won S.; Biesiadecki, Jeffrey J.; Ali, Khaled S.
2008-01-01
Visual Target Tracking (VTT) has been implemented in the new Mars Exploration Rover (MER) Flight Software (FSW) R9.2 release, which is now running on both Spirit and Opportunity rovers. Applying the normalized cross-correlation (NCC) algorithm with template image magnification and roll compensation on MER Navcam images, VTT tracks the target and enables the rover to approach the target within a few cm over a 10 m traverse. Each VTT update takes 1/2 to 1 minute on the rovers, 2-3 times faster than one Visual Odometry (Visodom) update. VTT is a key element to achieve a target approach and instrument placement over a 10-m run in a single sol in contrast to the original baseline of 3 sols. VTT has been integrated into the MER FSW so that it can operate with any combination of blind driving, Autonomous Navigation (Autonav) with hazard avoidance, and Visodom. VTT can either guide the rover towards the target or simply image the target as the rover drives by. Three recent VTT operational checkouts on Opportunity were all successful, tracking the selected target reliably within a few pixels.
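The core of the tracker is normalized cross-correlation; a hedged illustration using OpenCV's NCC template matching is given below (the template magnification and roll compensation performed by the flight software are not reproduced here):

```python
import cv2

def ncc_track(template, image):
    """Locate `template` in `image` by normalized cross-correlation.
    Returns the top-left corner of the best match and its NCC score."""
    response = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    return max_loc, max_val
```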
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera above the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average-case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
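The scale-drift correction described above amounts to rescaling the up-to-scale monocular translation by the ratio of the known camera height to the height implied by the estimated ground plane; a minimal sketch (names and plane parameterization are ours):

```python
import numpy as np

def correct_scale(t_est, ground_normal, ground_d, known_height_m):
    """Rescale a monocular, up-to-scale translation `t_est` using the
    estimated ground plane n.x + d = 0 (in the camera frame) and the
    known metric height of the camera above the road."""
    est_height = abs(ground_d) / np.linalg.norm(ground_normal)  # camera-to-plane distance
    scale = known_height_m / est_height
    return scale * t_est
```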
Using Visual Odometry to Estimate Position and Attitude
NASA Technical Reports Server (NTRS)
Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark
2007-01-01
A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
Cope, Alex J; Sabo, Chelsea; Gurney, Kevin; Vasilaki, Eleni; Marshall, James A R
2016-05-01
We present a novel neurally based model for estimating angular velocity (AV) in the bee brain, capable of quantitatively reproducing experimental observations of visual odometry and corridor-centering in free-flying honeybees, including previously unaccounted for manipulations of behaviour. The model is fitted using electrophysiological data, and tested using behavioural data. Based on our model we suggest that the AV response can be considered as an evolutionary extension to the optomotor response. The detector is tested behaviourally in silico with the corridor-centering paradigm, where bees navigate down a corridor with gratings (square wave or sinusoidal) on the walls. When combined with an existing flight control algorithm the detector reproduces the invariance of the average flight path to the spatial frequency and contrast of the gratings, including deviations from perfect centering behaviour as found in the real bee's behaviour. In addition, the summed response of the detector to a unit distance movement along the corridor is constant for a large range of grating spatial frequencies, demonstrating that the detector can be used as a visual odometer.
Using virtual environment for autonomous vehicle algorithm validation
NASA Astrophysics Data System (ADS)
Levinskis, Aleksandrs
2018-04-01
This paper describes a possible use of a modern game engine for validating and proving the concept of an algorithm design. As a result, a simple visual odometry algorithm is provided to show the concept and walk through all workflow stages. Some of the stages involve the use of a Kalman filter in such a way that it estimates the optical flow velocity as well as the position of a moving camera located on the vehicle body. In particular, the Unreal Engine 4 game engine is used for generating optical flow patterns and the ground truth path. For optical flow determination, the Horn-Schunck method is applied. It is shown that such a method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to the ground truth, depending on the optical flow pattern. For the displacement, the RMS error is calculated between the estimated and actual positions.
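A compact Horn-Schunck iteration of the kind applied in the paper can be written in a few lines of NumPy (our own minimal version; the smoothness weight and iteration count are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

AVG = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], dtype=float) / 12.0

def horn_schunck(im1, im2, alpha=1.0, iters=100):
    """Dense optical flow (u, v) between two grayscale float images."""
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)          # spatial derivatives
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im1, np.ones((2, 2)) * -0.25) + \
         convolve(im2, np.ones((2, 2)) * 0.25)          # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(iters):
        u_bar = convolve(u, AVG)                        # local flow averages
        v_bar = convolve(v, AVG)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```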
Rover Slip Validation and Prediction Algorithm
NASA Technical Reports Server (NTRS)
Yen, Jeng
2009-01-01
A physics-based simulation has been developed for the Mars Exploration Rover (MER) mission that applies slope-induced wheel slippage to the rover location estimator. Using the digital elevation map from the stereo images, the computational method resolves the quasi-dynamic equations of motion that incorporate the actual wheel-terrain speed to estimate the gross velocity of the vehicle. Based on the empirical slippage measured by the Visual Odometry software of the rover, this algorithm computes two factors for the slip model by minimizing the distance between the predicted and actual vehicle locations, and then uses the model to predict the next drives. This technique, which has been deployed to operate the MER rovers in the extended mission periods, can accurately predict the rover position and attitude, mitigating the risk and uncertainties in the path planning on high-slope areas.
High-resolution hyperspectral ground mapping for robotic vision
NASA Astrophysics Data System (ADS)
Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich
2018-04-01
Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work on designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software package for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment) modules. Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.
Autonomous Rock Tracking and Acquisition from a Mars Rover
NASA Technical Reports Server (NTRS)
Maimone, Mark W.; Nesnas, Issa A.; Das, Hari
1999-01-01
Future Mars exploration missions will perform two types of experiments: science instrument placement for close-up measurement, and sample acquisition for return to Earth. In this paper we describe algorithms we developed for these tasks, and demonstrate them in field experiments using a self-contained Mars Rover prototype, the Rocky 7 rover. Our algorithms perform visual servoing on an elevation map instead of image features, because the latter are subject to abrupt scale changes during the approach. This allows us to compensate for the poor odometry that results from motion on loose terrain. We demonstrate the successful grasp of a 5 cm long rock over 1 m away using 103-degree field-of-view stereo cameras, and placement of a flexible mast on a rock outcropping over 5 m away using 43-degree FOV stereo cameras.
Autonomous Deep-Space Optical Navigation Project
NASA Technical Reports Server (NTRS)
D'Souza, Christopher
2014-01-01
This project will advance the autonomous deep-space navigation capability applied to the Autonomous Rendezvous and Docking (AR&D) Guidance, Navigation and Control (GNC) system by testing it on hardware, particularly in a flight processor, with a goal of limited testing in the Integrated Power, Avionics and Software (IPAS) with the ARCM (Asteroid Retrieval Crewed Mission) DRO (Distant Retrograde Orbit) Autonomous Rendezvous and Docking (AR&D) scenario. The technology to be harnessed is called 'optical flow', also known as 'visual odometry'. It is being matured in automotive and SLAM (Simultaneous Localization and Mapping) applications but has yet to be applied to spacecraft navigation. In light of the tremendous potential of this technique, we believe that NASA needs to design an optical navigation architecture that will use it. The technique is flexible enough to be applicable to navigating around planetary bodies, such as asteroids.
Intelligent visual localization of wireless capsule endoscopes enhanced by color information.
Dimas, George; Spyrou, Evaggelos; Iakovidis, Dimitris K; Koulaouzidis, Anastasios
2017-10-01
Wireless capsule endoscopy (WCE) is performed with a miniature swallowable endoscope enabling the visualization of the whole gastrointestinal (GI) tract. One of the most challenging problems in WCE is the localization of the capsule endoscope (CE) within the GI lumen. Contemporary, radiation-free localization approaches are mainly based on the use of external sensors and transit time estimation techniques, with practically low localization accuracy. Latest advances for the solution of this problem include localization approaches based solely on visual information from the CE camera. In this paper we present a novel visual localization approach based on an intelligent, artificial neural network, architecture which implements a generic visual odometry (VO) framework capable of estimating the motion of the CE in physical units. Unlike conventional, geometric VO approaches, the proposed one is adaptive to the geometric model of the CE used; therefore, it does not require any prior knowledge about the CE or its intrinsic parameters. Furthermore, it exploits color as a cue to increase localization accuracy and robustness. Experiments were performed using a robotic-assisted setup providing ground truth information about the actual location of the CE. The lowest average localization error achieved is 2.70 ± 1.62 cm, which is significantly lower than the error obtained with the geometric approach. This result constitutes a promising step towards the in-vivo application of VO, which will open new horizons for accurate local treatment, including drug infusion and surgical interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry
2011-01-01
ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors. This enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom designed mat. These pictures are processed by an optical symbol recognition algorithm which estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassified landmarks so that only reliable information is fused. The orientation given by the optical symbol recognition (OSR) algorithm is used to significantly improve the odometry, and the recognized landmarks are used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
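The two-sensor odometry can be understood as recovering a planar rigid-body increment from two displacement readings taken at known mounting offsets; a sketch of that geometry (our formulation, not the ArmAssist code) is:

```python
import numpy as np

def two_mouse_odometry(d1, d2, r1, r2):
    """Planar rigid-body increment from two optical-mouse displacement
    readings d1, d2 (in the body frame) taken at known mount offsets
    r1, r2 (also in the body frame). Returns (translation, rotation),
    valid for small per-sample rotations."""
    dr = np.asarray(r1) - np.asarray(r2)
    dd = np.asarray(d1) - np.asarray(d2)
    perp = np.array([-dr[1], dr[0]])            # direction of omega x (r1 - r2)
    dtheta = dd @ perp / (perp @ perp)          # least-squares rotation increment
    trans = np.asarray(d1) - dtheta * np.array([-r1[1], r1[0]])
    return trans, dtheta
```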
Moreno, Javier; Clotet, Eduard; Lupiañez, Ruben; Tresanchez, Marcel; Martínez, Dani; Pallejà, Tomàs; Casanovas, Jordi; Palacín, Jordi
2016-10-10
This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory by comparing the displacement estimated with the internal odometry of the motors against the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the position estimated with SLAM and odometry, and a difference in the angular orientation of the mobile robot lower than 5° for absolute displacements of up to 1000 mm.
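The kinematics of such a three-wheel holonomic base reduce to a single 3 x 3 matrix relating body velocity to wheel speeds; a generic sketch with assumed wheel angles, radius and mounting distance (not values from the paper) is:

```python
import numpy as np

# Wheel mounting angles (rad) for a symmetric three-wheel omni base,
# distance L from centre to wheel and wheel radius r (illustrative values).
THETA = np.deg2rad([90.0, 210.0, 330.0])
L, r = 0.15, 0.04

# Inverse kinematics matrix: body velocity (vx, vy, omega) -> wheel angular speeds
M = np.array([[-np.sin(th), np.cos(th), L] for th in THETA]) / r

def wheel_speeds(vx, vy, omega):
    return M @ np.array([vx, vy, omega])        # rad/s for each motor

def body_velocity(w1, w2, w3):
    # Forward kinematics / odometry: invert the same matrix
    return np.linalg.solve(M, np.array([w1, w2, w3]))
```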
Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry
NASA Technical Reports Server (NTRS)
Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)
2016-01-01
A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine the first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth from the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
Honeybee Odometry: Performance in Varying Natural Terrain
Tautz, Juergen; Zhang, Shaowu; Spaethe, Johannes; Brockmann, Axel; Si, Aung
2004-01-01
Recent studies have shown that honeybees flying through short, narrow tunnels with visually textured walls perform waggle dances that indicate a much greater flight distance than that actually flown. These studies suggest that the bee's “odometer” is driven by the optic flow (image motion) that is experienced during flight. One might therefore expect that, when bees fly to a food source through a varying outdoor landscape, their waggle dances would depend upon the nature of the terrain experienced en route. We trained honeybees to visit feeders positioned along two routes, each 580 m long. One route was exclusively over land. The other was initially over land, then over water and, finally, again over land. Flight over water resulted in a significantly flatter slope of the waggle-duration versus distance regression, compared to flight over land. The mean visual contrast of the scenes was significantly greater over land than over water. The results reveal that, in outdoor flight, the honeybee's odometer does not run at a constant rate; rather, the rate depends upon the properties of the terrain. The bee's perception of distance flown is therefore not absolute, but scene-dependent. These findings raise important and interesting questions about how these animals navigate reliably. PMID:15252454
Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.
Raposo, Carolina; Antunes, Michel; P Barreto, Joao
2017-08-09
The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated in indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasant 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.
Localization of Mobile Robots Using Odometry and an External Vision Sensor
Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina
2010-01-01
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu
2015-08-01
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
PRoViScout: a planetary scouting rover demonstrator
NASA Astrophysics Data System (ADS)
Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos
2012-01-01
Mobile systems exploring planetary surfaces in the future will require more autonomy than today. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision by a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for and exposed to field demonstration. The PRoViScout on-board system consists of mission management components such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with the sensors and pointing devices. We report on the major building blocks and their functions & interfaces, with emphasis on the computer vision parts such as image acquisition (using a novel zoomed 3D-Time-of-Flight & RGB camera), mapping from 3D-TOF data, panoramic image & stereo reconstruction, hazard and slope maps, visual odometry and the recognition of potentially scientifically interesting targets.
Validation of Underwater Sensor Package Using Feature Based SLAM
Cain, Christopher; Leonessa, Alexander
2016-01-01
Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles preventing the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low-cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized particle filter based approach, to validate the sensor package. PMID:26999142
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
Allothetic and idiothetic sensor fusion in rat-inspired robot localization
NASA Astrophysics Data System (ADS)
Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo
2012-06-01
We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.
Position estimation and driving of an autonomous vehicle by monocular vision
NASA Astrophysics Data System (ADS)
Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.
2007-04-01
Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.
Towards automated visual flexible endoscope navigation.
van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J
2013-10-01
The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
Trajectories for Locomotion Systems: A Geometric and Computational Approach via Series Expansions
2004-10-11
speed controller. The model is endowed with a 100 count-per-revolution optical encoder for odometry. (2) On-board computation is performed by a single...
High-Performance 3D Articulated Robot Display
NASA Technical Reports Server (NTRS)
Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy
2011-01-01
In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.
Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue
2018-05-25
A novel multi-sensor fusion indoor localization algorithm based on ArUco markers is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online using the Grubbs criterion and K-means clustering, which avoids the map distortion caused by a lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is used to synthesize the multi-source information from markers, optical flow, an ultrasonic sensor and the inertial sensor, which yields a continuous localization result and effectively reduces the position drift caused by the long-term loss of markers in pure marker localization. The proposed algorithm can be implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland). Thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation of the proposed system is better than that of Px4flow, and it achieves centimeter-level mapping and positioning accuracy. The presented system not only gives satisfying localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra-wideband (UWB) beacons and lidar) to further improve the localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.
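To make the fusion idea concrete, here is a minimal sketch of a single planar Kalman filter that fuses absolute marker position fixes with optical-flow velocity measurements. It is a deliberately simplified stand-in for the federated filter described in the abstract (the ultrasonic and inertial channels are omitted), and all noise parameters are assumed values.

```python
import numpy as np

# Simplified fusion sketch: one constant-velocity Kalman filter in the
# horizontal plane that accepts (a) absolute position fixes from detected
# markers and (b) velocity measurements from optical flow. It only illustrates
# how an absolute fix limits the drift of a relative sensor.

class PlanarKF:
    def __init__(self, dt=0.02):
        self.x = np.zeros(4)                      # [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.diag([1e-4, 1e-4, 1e-2, 1e-2])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z, H, R):
        y = z - H @ self.x                        # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_marker(self, pos_xy, sigma=0.02):
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
        self._update(np.asarray(pos_xy), H, (sigma ** 2) * np.eye(2))

    def update_flow(self, vel_xy, sigma=0.10):
        H = np.zeros((2, 4)); H[0, 2] = H[1, 3] = 1.0
        self._update(np.asarray(vel_xy), H, (sigma ** 2) * np.eye(2))
```

In a federated design each sensor would feed its own local filter whose estimates are then combined by a master filter; the single centralized filter above is the simplest version of the same measurement models.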
Human Odometry Verifies the Symmetry Perspective on Bipedal Gaits
ERIC Educational Resources Information Center
Turvey, M. T.; Harrison, Steven J.; Frank, Till D.; Carello, Claudia
2012-01-01
Bipedal gaits have been classified on the basis of the group symmetry of the minimal network of identical differential equations (alias "cells") required to model them. Primary gaits are characterized by dihedral symmetry, whereas secondary gaits are characterized by a lower, cyclic symmetry. This fact was used in a test of human…
Estimation and Control for Autonomous Coring from a Rover Manipulator
NASA Technical Reports Server (NTRS)
Hudson, Nicolas; Backes, Paul; DiCicco, Matt; Bajracharya, Max
2010-01-01
A system consisting of a set of estimators and autonomous behaviors has been developed that allows robust coring from a low-mass rover platform while accommodating moderate rover slip. A redundant set of sensors, including a force-torque sensor, visual odometry, and accelerometers, is used to monitor discrete critical and operational modes, as well as to estimate continuous drill parameters during the coring process. A set of critical failure modes pertinent to shallow coring from a mobile platform is defined, and autonomous behaviors associated with each critical mode are used to maintain nominal coring conditions. Autonomous shallow coring is demonstrated from a low-mass rover using a rotary-percussive coring tool mounted on a 5 degree-of-freedom (DOF) arm. A new architecture, in which an arm-stabilized rotary-percussive tool is used with the robotic arm providing the drill z-axis linear feed, is validated. Particular attention is given to hole start using this architecture. An end-to-end coring sequence is demonstrated, in which the rover autonomously detects and then recovers from a series of slip events that exceeded 9 cm of total displacement.
NASA Astrophysics Data System (ADS)
Müller, M. S.; Urban, S.; Jutzi, B.
2017-08-01
The number of unmanned aerial vehicles (UAVs) is increasing since low-cost airborne systems are available to a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to obtain the vehicle's trajectory. The drawbacks of satellite-based navigation are failures caused by occlusions and multi-path interference. Besides this, local image-based solutions such as Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can be used, for example, to support the GNSS solution by closing trajectory gaps, but they are computationally expensive. However, if the trajectory estimation is interrupted or unavailable, re-localization is mandatory. In this paper we provide a novel method for GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.
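A plausible shape for such a transfer-learning setup is sketched below in PyTorch: SqueezeNet's convolutional features are reused and a small regression head outputs a 7-dimensional pose (translation plus unit quaternion). The head design, the quaternion parameterization, and the loss weighting are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer-learning sketch in the spirit of the abstract: reuse SqueezeNet's
# convolutional features and attach a small regression head that outputs a
# 7-dimensional camera pose (3D position + unit quaternion). Hyperparameters
# and the loss weighting (beta) are illustrative assumptions.

class SqueezePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # ImageNet weights; requires torchvision >= 0.13 (older versions use pretrained=True)
        backbone = models.squeezenet1_1(weights="DEFAULT")
        self.features = backbone.features            # frozen or fine-tuned as desired
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(nn.Flatten(), nn.Dropout(0.3), nn.Linear(512, 7))

    def forward(self, x):
        z = self.pool(self.features(x))
        out = self.head(z)
        t, q = out[:, :3], out[:, 3:]
        return t, nn.functional.normalize(q, dim=1)   # keep the quaternion on the unit sphere

def pose_loss(t_pred, q_pred, t_true, q_true, beta=250.0):
    # Weighted translation + rotation loss, a common choice for pose regression.
    return nn.functional.mse_loss(t_pred, t_true) + beta * nn.functional.mse_loss(q_pred, q_true)
```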
Going with the flow: a brief history of the study of the honeybee's navigational 'odometer'.
Srinivasan, Mandyam V
2014-06-01
Honeybees navigate to a food source using a sky-based compass to determine their travel direction, and an odometer to register how far they have travelled. The past 20 years have seen a renewed interest in understanding the nature of the odometer. Early work, pioneered by von Frisch and colleagues, hypothesized that travel distance is measured in terms of the energy that is consumed during the journey. More recent studies suggest that visual cues play a role as well. Specifically, bees appear to gauge travel distance by sensing the extent to which the image of the environment moves in the eye during the journey from the hive to the food source. Most of the evidence indicates that travel distance is measured during the outbound journey. Accumulation of odometric errors is restricted by resetting the odometer every time a prominent landmark is passed. When making detours around large obstacles, the odometer registers the total distance of the path that is flown to the destination, and not the "bee-line" distance. Finally, recent studies are revealing that bees can perform odometry in three dimensions.
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76–357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1–20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.
An innovative localisation algorithm for railway vehicles
NASA Astrophysics Data System (ADS)
Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.
2014-11-01
In modern railway automatic train protection and automatic train control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; a high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the odometry estimate accuracy, data fusion of different inputs coming from a redundant sensor layout may be used. The aim of this work has been to develop an innovative localisation algorithm for railway vehicles able to enhance the performance, in terms of speed and position estimation accuracy, of classical odometry algorithms, such as the Italian Sistema Controllo Marcia Treno (SCMT). The proposed strategy consists of a sensor fusion between the information coming from a tachometer and an Inertial Measurement Unit (IMU). The sensor outputs have been simulated through a 3D multibody model of a railway vehicle. The work also included the development of a custom IMU, designed by ECM S.p.A. to meet its industrial and business requirements. The industrial requirements have to be compliant with the European Train Control System (ETCS) standards: the European Rail Traffic Management System (ERTMS), a project developed by the European Union to improve interoperability among different countries, in particular as regards train control and command systems, fixes standard values for odometric (ODO) performance in terms of speed and travelled distance estimation. The reliability of the ODO estimation has to be taken into account based on the allowed speed profiles. The results of the currently used ODO algorithms can be improved, especially in the case of degraded adhesion conditions; it has been verified in the simulation environment that the results of the proposed localisation algorithm are always compliant with the ERTMS requirements. The estimation strategy has good performance even under degraded adhesion conditions and could be put on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware in the Loop (HIL) test rig: it includes an industrial robot able to replicate the motion of the railway vehicle. Through the generated experimental outputs, the performance of the innovative localisation algorithm has been evaluated: the HIL test rig made it possible to test the proposed algorithm, avoiding expensive (in terms of time and cost) on-track tests, with encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performance compared to those obtained with SCMT algorithms, currently in use on the Italian railway network.
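As a rough illustration of the tachometer/IMU fusion idea, the sketch below propagates a one-dimensional speed-and-distance state with the longitudinal acceleration and gates the tachometer correction so that implausible wheel speeds (slip or slide under degraded adhesion) are rejected. It is not the SCMT or the authors' algorithm; the noise levels and gate value are assumptions.

```python
import numpy as np

# Illustrative 1-D speed/distance estimator: IMU longitudinal acceleration
# drives the prediction, and the tachometer correction is gated with a
# chi-square test so wheel slip/slide does not corrupt the estimate.

class TrainOdometry:
    def __init__(self, dt=0.01):
        self.dt = dt
        self.x = np.zeros(2)                       # [travelled distance (m), speed (m/s)]
        self.P = np.diag([1.0, 1.0])
        self.F = np.array([[1.0, dt], [0.0, 1.0]])
        self.Q = np.diag([1e-4, 1e-3])

    def predict(self, accel_long):
        u = np.array([0.5 * accel_long * self.dt ** 2, accel_long * self.dt])
        self.x = self.F @ self.x + u
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update_tacho(self, wheel_speed, sigma=0.2, gate=9.0):
        H = np.array([[0.0, 1.0]])
        R = np.array([[sigma ** 2]])
        y = wheel_speed - H @ self.x
        S = H @ self.P @ H.T + R
        if float(y.T @ np.linalg.inv(S) @ y) > gate:   # likely slip or slide: reject
            return False
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
        return True
```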
Control of a Quadcopter Aerial Robot Using Optic Flow Sensing
NASA Astrophysics Data System (ADS)
Hurd, Michael Brandon
This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can give a robot the ability to fly in global positioning system (GPS)-denied environments, such as indoor environments. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot, where an optic flow algorithm is applied to provide odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic-flow sensor and algorithm are capable of gathering and processing images at 250 frames/sec, and the sensor package weighs 2.5 g and has a footprint of 6 cm2. The odometry value from the optic flow sensor is then used as feedback information in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to different types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
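The control loop described here, optic-flow odometry feeding a PID controller, can be sketched in a few lines. The version below holds position along one axis by integrating flow-derived displacement and commanding a saturated tilt angle; the gains, flow-to-metres scale, and saturation limit are placeholders rather than values from the thesis.

```python
# Minimal position-hold sketch: accumulated optic-flow displacement is used as
# the feedback signal in a PID loop that outputs an attitude (tilt) command.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def hold_position(flow_dx_metres, x_setpoint, x_estimate, pid):
    """Integrate flow into a position estimate and return a pitch command (rad)."""
    x_estimate += flow_dx_metres              # optic-flow odometry accumulates drift over time
    cmd = pid.step(x_setpoint - x_estimate)
    return x_estimate, max(min(cmd, 0.35), -0.35)   # saturate the tilt command
```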
Occupancy Grid Map Merging Using Feature Maps
2010-11-01
each robot begins exploring at different starting points, once two robots can communicate, they send their odometry data, LIDAR observations, and maps...robots [11]. Moreover, it is relevant to mention that significant success has been achieved in solving SLAM problems when using hybrid maps [12...represents the environment by parametric features. Our method is capable of representing a LIDAR scanned environment map in a parametric fashion. In general
Multi-Target Single Cycle Instrument Placement
NASA Technical Reports Server (NTRS)
Pedersen, Liam; Smith, David E.; Deans, Matthew; Sargent, Randy; Kunz, Clay; Lees, David; Rajagopalan, Srikanth; Bualat, Maria
2005-01-01
This presentation addresses the robotic exploration of Mars using a multiple-target command cycle, safe instrument placement, safe operation, and the K9 rover, which has a six-wheel-steer rocker-bogie chassis (as on FIDO and MER), is 70% of MER size, and carries a 1.2 GHz Pentium M laptop running Linux, odometry and a compass/inclinometer, the CLARAty architecture, and a 5-DOF manipulator with a CHAMP microscopic camera, SciCams, NavCams, and HazCams.
Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission
NASA Technical Reports Server (NTRS)
Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.
2004-01-01
In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
PointCom: semi-autonomous UGV control with intuitive interface
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham
2008-04-01
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.
2D/3D Visual Tracker for Rover Mast
NASA Technical Reports Server (NTRS)
Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria
2006-01-01
A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could serve as a core for building application programs for systems that require coordination of vision and robotic motion.
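The re-pointing step, converting a visual-odometry pose change and a 3D target location into mast pan and tilt commands, reduces to a frame change followed by two arctangents when mast offsets are ignored. The sketch below is geometry only and is not the program's kinematic model; the frame convention is stated in the comments, and everything else is an assumption.

```python
import numpy as np

# Geometry-only sketch of the re-pointing step: the 3D target (fixed in the
# world) is re-expressed in the new rover frame using the pose change reported
# by visual odometry, then converted to mast pan/tilt angles. Mast offsets and
# joint limits are ignored; a full kinematic model would handle them.

def repoint_mast(target_old_frame, R_delta, t_delta):
    """R_delta, t_delta: pose of the new rover frame expressed in the old one,
    i.e. a point p_new in the new frame satisfies p_old = R_delta @ p_new + t_delta."""
    p = R_delta.T @ (np.asarray(target_old_frame) - np.asarray(t_delta))
    pan = np.arctan2(p[1], p[0])                      # azimuth about the mast axis
    tilt = np.arctan2(p[2], np.hypot(p[0], p[1]))     # elevation above the horizontal
    return pan, tilt
```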
A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-03-24
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
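The pseudo-odometry (P-O) stream comes from pedestrian dead reckoning, whose first ingredient is step detection from the accelerometer. A generic sketch is given below using a peak amplitude threshold and a minimum step interval; the paper's multi-threshold algorithm is more elaborate, and the threshold values here are only assumptions.

```python
import numpy as np

# Generic pedestrian step detector used to simulate pseudo-odometry (P-O):
# peaks of the accelerometer magnitude must clear an amplitude threshold and
# be separated by a minimum step interval. Thresholds are illustrative.

def detect_steps(acc_xyz, fs=50.0, min_peak=10.8, min_interval=0.3):
    mag = np.linalg.norm(acc_xyz, axis=1)             # includes gravity (~9.81 m/s^2)
    last_step_t = -np.inf
    steps = []
    for i in range(1, len(mag) - 1):
        t = i / fs
        is_peak = mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]
        if is_peak and mag[i] > min_peak and (t - last_step_t) > min_interval:
            steps.append(t)
            last_step_t = t
    return steps                                      # step times; steps x stride length = P-O distance
```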
2014-04-24
[Figure: bar charts of Position Estimation Error (cm) for Color Statistics (Angelova), shown as (Color_Statistics_Error) / Average_Slip_Error, and Position Estimation Error: Global Pose.] ...get some kind of clearance for releasing pose and odometry data) collected at the following sites – Taylor, Gascola, Somerset, Fort Bliss and
A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-01-01
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure
NASA Astrophysics Data System (ADS)
Liu, Chun; Li, Zhengning; Zhou, Yuan
2016-06-01
We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important to safety, yet it is also nontrivial, since most underground infrastructures have poor lighting conditions and featureless structure. To overcome these difficulties, we found that a parallel system is more efficient than the EKF-based SLAM approach, since it divides the motion estimation and 3D mapping tasks into separate threads, eliminating the data-association problem that is a significant issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm which remains functional under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used it to collect a dataset in an underground garage. The parallel system was evaluated on this dataset. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm. Off-line processing reduced the position error to 2 cm. The evaluation on the actual dataset showed that our system is capable of robust motion estimation and accurate 3D mapping in poorly illuminated and featureless underground environments.
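The structural point, tracking and mapping in separate threads connected by a keyframe hand-off, can be expressed in a few lines. The sketch below is only an architectural outline with assumed placeholder functions (estimate_pose, is_keyframe, update_map); it does not implement the stereo algorithms themselves.

```python
import queue
import threading

# Structural sketch of the parallel design: the tracking (visual odometry)
# thread never blocks on mapping; it only hands keyframes over a queue, and
# the mapping thread refines the 3D map in the background.

keyframes = queue.Queue(maxsize=16)

def tracking_loop(camera, estimate_pose, is_keyframe):
    pose = None
    for left, right in camera:                    # stream of rectified stereo pairs
        pose = estimate_pose(left, right, pose)   # robust stereo VO (placeholder)
        if is_keyframe(pose):
            keyframes.put((left, right, pose))    # hand off; never waits for mapping

def mapping_loop(update_map):
    while True:
        left, right, pose = keyframes.get()       # blocks until a keyframe arrives
        update_map(left, right, pose)             # triangulate and fuse into the 3D map

def run(camera, estimate_pose, is_keyframe, update_map):
    t = threading.Thread(target=mapping_loop, args=(update_map,), daemon=True)
    t.start()
    tracking_loop(camera, estimate_pose, is_keyframe)
```

Keeping the queue bounded is one simple way to prevent the mapping backlog from growing without limit if the mapping thread falls behind.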
An amodal shared resource model of language-mediated visual attention
Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk
2013-01-01
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967
NASA Technical Reports Server (NTRS)
2014-01-01
Topics include: Data Fusion for Global Estimation of Forest Characteristics From Sparse Lidar Data; Debris and Ice Mapping Analysis Tool - Database; Data Acquisition and Processing Software - DAPS; Metal-Assisted Fabrication of Biodegradable Porous Silicon Nanostructures; Post-Growth, In Situ Adhesion of Carbon Nanotubes to a Substrate for Robust CNT Cathodes; Integrated PEMFC Flow Field Design for Gravity-Independent Passive Water Removal; Thermal Mechanical Preparation of Glass Spheres; Mechanistic-Based Multiaxial-Stochastic-Strength Model for Transversely-Isotropic Brittle Materials; Methods for Mitigating Space Radiation Effects, Fault Detection and Correction, and Processing Sensor Data; Compact Ka-Band Antenna Feed with Double Circularly Polarized Capability; Dual-Leadframe Transient Liquid Phase Bonded Power Semiconductor Module Assembly and Bonding Process; Quad First Stage Processor: A Four-Channel Digitizer and Digital Beam-Forming Processor; Protective Sleeve for a Pyrotechnic Reefing Line Cutter; Metabolic Heat Regenerated Temperature Swing Adsorption; CubeSat Deployable Log Periodic Dipole Array; Re-entry Vehicle Shape for Enhanced Performance; NanoRacks-Scale MEMS Gas Chromatograph System; Variable Camber Aerodynamic Control Surfaces and Active Wing Shaping Control; Spacecraft Line-of-Sight Stabilization Using LWIR Earth Signature; Technique for Finding Retro-Reflectors in Flash LIDAR Imagery; Novel Hemispherical Dynamic Camera for EVAs; 360 deg Visual Detection and Object Tracking on an Autonomous Surface Vehicle; Simulation of Charge Carrier Mobility in Conducting Polymers; Observational Data Formatter Using CMOR for CMIP5; Propellant Loading Physics Model for Fault Detection Isolation and Recovery; Probabilistic Guidance for Swarms of Autonomous Agents; Reducing Drift in Stereo Visual Odometry; Future Air-Traffic Management Concepts Evaluation Tool; Examination and A Priori Analysis of a Direct Numerical Simulation Database for High-Pressure Turbulent Flows; and Resource-Constrained Application of Support Vector Machines to Imagery.
Performance Analysis and Odometry Improvement of an Omnidirectional Mobile Robot for Outdoor Terrain
2011-09-01
[Equation (1): the i-th module planar velocity components v_xi, v_yi are expressed in terms of sin φ_i, cos φ_i, the offset D_i, and the velocity vectors x_i and x_b,] where x_i and x_b are the planar velocity vectors at the i-th... variables given by an operator. The wheel angular velocities, ω_i,L and ω_i,R, that yield the desired i-th ASOC planar velocity are formulated as follows
Visual Tools as Mediational Means: A Methodological Investigation
ERIC Educational Resources Information Center
Hilppö, Jaakko; Lipponen, Lasse; Kumpulainen, Kristiina; Rajala, Antti
2017-01-01
In this study, we investigated how Finnish children used photographs and drawings to discuss their preschool day experiences in focus groups. Building on sociocultural perspectives on mediated action, we specifically focused on how these visual tools were used as mediational means in sharing experiences. The results of our embodied interaction…
Verbal Mediation and Memory for Novel Figural Designs: A Dual Interference Study
ERIC Educational Resources Information Center
Silverberg, N.; Buchanan, L.
2005-01-01
To the extent that all types of visual stimuli can be verbalized to some degree, verbal mediation is intrinsic in so-called "visual" memory processing. This impurity complicates the interpretation of visual memory performance, particularly in certain neurologically impaired populations (e.g., aphasia). The purpose of this study was to…
Effects of Peer-Mediated Implementation of Visual Scripts in Middle School
ERIC Educational Resources Information Center
Ganz, Jennifer B.; Heath, Amy K.; Lund, Emily M.; Camargo, Siglia P. H.; Rispoli, Mandy J.; Boles, Margot; Plaisance, Lauren
2012-01-01
Although research has investigated the impact of peer-mediated interventions and visual scripts on social and communication skills in children with autism spectrum disorders, no studies to date have investigated peer-mediated implementation of scripts. This study investigated the effects of peer-implemented scripts on a middle school student with…
The two-visual-systems hypothesis and the perspectival features of visual experience.
Foley, Robert T; Whitwell, Robert L; Goodale, Melvyn A
2015-09-01
Some critics of the two-visual-systems hypothesis (TVSH) argue that it is incompatible with the fundamentally egocentric nature of visual experience (what we call the 'perspectival account'). The TVSH proposes that the ventral stream, which delivers up our visual experience of the world, works in an allocentric frame of reference, whereas the dorsal stream, which mediates the visual control of action, uses egocentric frames of reference. Given that the TVSH is also committed to the claim that dorsal-stream processing does not contribute to the contents of visual experience, it has been argued that the TVSH cannot account for the egocentric features of our visual experience. This argument, however, rests on a misunderstanding about how the operations mediating action and the operations mediating perception are specified in the TVSH. In this article, we emphasize the importance of the 'outputs' of the two-systems to the specification of their respective operations. We argue that once this point is appreciated, it becomes evident that the TVSH is entirely compatible with a perspectival account of visual experience. Copyright © 2015 Elsevier Inc. All rights reserved.
UAV Research at NASA Langley: Towards Safe, Reliable, and Autonomous Operations
NASA Technical Reports Server (NTRS)
Davila, Carlos G.
2016-01-01
Unmanned Aerial Vehicles (UAVs) are fundamental components in several aspects of research at NASA Langley, such as flight dynamics, mission-driven airframe design, airspace integration demonstrations, atmospheric science projects, and more. In particular, NASA Langley Research Center (Langley) is using UAVs to develop and demonstrate innovative capabilities that meet the autonomy and robotics challenges that are anticipated in science, space exploration, and aeronautics. These capabilities will enable new NASA missions such as asteroid rendezvous and retrieval (ARRM), Mars exploration, in-situ resource utilization (ISRU), pollution measurements in historically inaccessible areas, and the integration of UAVs into our everyday lives: all missions of increasing complexity, distance, pace, and/or accessibility. Building on decades of NASA experience and success in the design, fabrication, and integration of robust and reliable automated systems for space and aeronautics, the Langley Autonomy Incubator seeks to bridge the gap between automation and autonomy by enabling safe autonomous operations via onboard sensing and perception systems in both data-rich and data-deprived environments. The Autonomy Incubator is focused on the challenge of mobility and manipulation in dynamic and unstructured environments by integrating technologies such as computer vision, visual odometry, real-time mapping, path planning, object detection and avoidance, object classification, adaptive control, sensor fusion, machine learning, and natural human-machine teaming. These technologies are implemented in an architectural framework developed in-house for easy integration and interoperability of cutting-edge hardware and software.
Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo
2015-01-01
Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The Effects of the Strength and Number of Visual Mediators in the Learning Process. Final Report.
ERIC Educational Resources Information Center
Bolz, Charles R.; Ackerman, Jerrold
The role of visual imagery in the learning of letter-sound combinations was investigated using such mediating images as two scoops of ice cream for the letter "m." In a preliminary study, high-, medium-, and low-strength mediating images were determined for each letter-sound combination. The 216 kindergarten subjects in the main study were…
ERIC Educational Resources Information Center
Kaplan, Peter S.; Werner, John S.
1986-01-01
Tests infants' dual-process performance (a process mediating response decrements called habituation and a state-dependent process mediating response increments called sensitization) on visual habituation-dishabituation tasks. (HOD)
Thomassen, Gøril
2016-01-01
This article highlights interpreter-mediated learning situations for deaf high school students where such mediated artifacts as technical machines, models, and computer graphics are used by the teacher to illustrate his or her teaching. In these situations, the teacher’s situated gestures and utterances, and the artifacts will contribute independent pieces of information. However, the deaf student can only have his or her visual attention focused on one source at a time. The problem to be addressed is how the interpreter coordinates the mediation when it comes to deaf students’ visual orientation. The presented discourse analysis is based on authentic video recordings from inclusive learning situations in Norway. The theoretical framework consists of concepts of role, footing, and face-work (Goffman, E. (1959). The presentation of self in everyday life. London, UK: Penguin Books). The findings point out dialogical impediments to visual access in interpreter-mediated learning situations, and the article discusses the roles and responsibilities of teachers and educational interpreters. PMID:26681267
ERIC Educational Resources Information Center
Lee, Soon Min; Oh, Yunjin
2017-01-01
Introduction: This study examined a mediator role of perceived stress on the prediction of the effects of academic stress on depressive symptoms among e-learning students with visual impairments. Methods: A convenience sample for this study was collected for three weeks from November to December in 2012 among students with visual impairments…
ERIC Educational Resources Information Center
Ackerman, Jerrold
The role of visual imagery in the learning of letter-sound combinations was investigated using such mediating images as two scoops of ice cream for the letter "m." In a preliminary study, high-, medium-, and low-strength mediating images were determined for each letter-sound combination. The 216 kindergarten subjects in the main study were…
Speed Consistency in the Smart Tachograph.
Borio, Daniele; Cano, Eduardo; Baldini, Gianmarco
2018-05-16
In the transportation sector, safety risks can be significantly reduced by monitoring the behaviour of drivers and by discouraging possible misconduct that entails fatigue and can increase the possibility of accidents. The Smart Tachograph (ST), the new revision of the Digital Tachograph (DT), has been designed with this purpose: to verify that speed limits and compulsory rest periods are respected by drivers. In order to operate properly, the ST periodically checks the consistency of data from different sensors, which can be potentially manipulated to avoid the monitoring of the driver behaviour. In this respect, the ST regulation specifies a test procedure to detect motion conflicts originating from inconsistencies between Global Navigation Satellite System (GNSS) and odometry data. This paper provides an experimental evaluation of the speed verification procedure specified by the ST regulation. Several hours of data were collected using three vehicles and considering light urban and highway environments. The vehicles were equipped with an On-Board Diagnostics (OBD) data reader and a GPS/Galileo receiver. The tests prescribed by the regulation were implemented with specific focus on synchronization aspects. The experimental analysis also considered aspects such as the impact of tunnels and the presence of data gaps. The analysis shows that the metrics selected for the tests are resilient to data gaps, latencies between GNSS and odometry data and simplistic manipulations such as data scaling. The new ST forces an attacker to falsify data from both sensors at the same time and in a coherent way. This makes the implementation of fraud more difficult than in the current version of the DT.
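A minimal sketch of this kind of consistency test is shown below: over a synchronized window, GNSS-derived speed and odometry speed are compared with a median absolute difference and a median ratio deviation, and a conflict is flagged when either exceeds a tolerance. The thresholds, window length, and moving-speed cutoff are illustrative assumptions, not the values prescribed by the ST regulation.

```python
import numpy as np

# Illustrative GNSS/odometry speed consistency check. A median ratio test
# catches simple scaling manipulations, while the absolute-difference test
# catches gross conflicts; GNSS gaps (e.g. tunnels) simply yield no decision.

def speed_conflict(gnss_speed, odo_speed, valid_gnss,
                   max_median_diff=5.0, max_ratio_dev=0.2):
    """Speeds in km/h on a common time base; valid_gnss masks GNSS gaps."""
    g = np.asarray(gnss_speed)[np.asarray(valid_gnss)]
    o = np.asarray(odo_speed)[np.asarray(valid_gnss)]
    if len(g) < 30:                       # too little overlap: no decision
        return None
    median_diff = np.median(np.abs(g - o))
    moving = g > 10.0                     # ratio test only where the vehicle is clearly moving
    ratio_dev = np.median(np.abs(o[moving] / g[moving] - 1.0)) if moving.any() else 0.0
    return bool(median_diff > max_median_diff or ratio_dev > max_ratio_dev)
```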
Jensen, Jakob D; King, Andy J; Carcioppolo, Nicholas; Davis, LaShara
2012-10-01
Past research has found that tailoring increases the persuasive effectiveness of a message. However, the observed effect has been small and the explanatory mechanism remains unknown. To address these shortcomings, a tailoring software program was created that personalized breast cancer screening pamphlets according to risk, health belief model constructs, and visual preference. Women aged 40 and older ( N = 119) participated in a 2 (tailored vs. stock message) × 2 (charts/graphs vs. illustrated visuals) × 3 (nested replications of the visuals) experiment. Participants provided with tailored illustrated pamphlets expressed greater breast cancer screening intentions than those provided with other pamphlets. In a test of 10 different mediators, perceived message relevance was found to fully mediate the tailoring × visual interaction.
A New Approach to Dissect Nuclear Organization: TALE-Mediated Genome Visualization (TGV).
Miyanari, Yusuke
2016-01-01
The spatiotemporal organization of chromatin within the nucleus has so far remained elusive. Live visualization of nuclear remodeling could be a promising approach for understanding its functional relevance to genome functions and the mechanisms regulating genome architecture. Recent technological advances in live imaging of chromosomes have begun to explore the biological roles of the movement of chromatin within the nucleus. Here I describe a new technique, called TALE-mediated genome visualization (TGV), which allows us to visualize endogenous repetitive sequences, including centromeric, pericentromeric, and telomeric repeats, in living cells.
Berge, Sigrid Slettebakk; Thomassen, Gøril
2016-04-01
This article highlights interpreter-mediated learning situations for deaf high school students where such mediated artifacts as technical machines, models, and computer graphics are used by the teacher to illustrate his or her teaching. In these situations, the teacher's situated gestures and utterances, and the artifacts will contribute independent pieces of information. However, the deaf student can only have his or her visual attention focused on one source at a time. The problem to be addressed is how the interpreter coordinates the mediation when it comes to deaf students' visual orientation. The presented discourse analysis is based on authentic video recordings from inclusive learning situations in Norway. The theoretical framework consists of concepts of role, footing, and face-work (Goffman, E. (1959). The presentation of self in everyday life. London, UK: Penguin Books). The findings point out dialogical impediments to visual access in interpreter-mediated learning situations, and the article discusses the roles and responsibilities of teachers and educational interpreters. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Traffic Signs in Complex Visual Environments
DOT National Transportation Integrated Search
1982-11-01
The effects of sign luminance on detection and recognition of traffic control devices are mediated through contrast with the immediate surround. Additionally, complex visual scenes are known to degrade visual performance with targets well above visual...
Verbal and Visual Memory Impairments in Bipolar I and II Disorder.
Ha, Tae Hyon; Kim, Ji Sun; Chang, Jae Seung; Oh, Sung Hee; Her, Ju Young; Cho, Hyun Sang; Park, Tae Sung; Shin, Soon Young; Ha, Kyooseob
2012-12-01
To compare verbal and visual memory performances between patients with bipolar I disorder (BD I) and patients with bipolar II disorder (BD II) and to determine whether memory deficits were mediated by impaired organizational strategies. Performances on the Korean-California Verbal Learning Test (K-CVLT) and the Rey-Osterrieth Complex Figure Test (ROCF) in 37 patients with BD I, 46 patients with BD II and 42 healthy subjects were compared. Mediating effects of impaired organization strategies on poor delayed recall were tested by comparing direct and mediated models using multiple regression analysis. Both patient groups recalled fewer words and figure components and showed lower Semantic Clustering compared to controls. Verbal memory impairment was partly mediated by difficulties in Semantic Clustering in both subtypes, whereas the mediating effect of Organization deficit on the visual memory impairment was present only in BD I. In all mediated models, group differences in delayed recall remained significant. Our findings suggest that memory impairment may be one of the fundamental cognitive deficits in bipolar disorders and that executive dysfunctions can exert an additional influence on memory impairments.
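For readers unfamiliar with the direct-versus-mediated model comparison, a generic regression-based sketch is given below: the group effect on delayed recall is estimated with and without the organization score, and the indirect effect is the product of the group-to-mediator and mediator-to-outcome paths. This is a simplified Baron-and-Kenny-style illustration with placeholder variable names, not the authors' exact analysis.

```python
import numpy as np
import statsmodels.api as sm

# Generic regression-based mediation check: compare the direct model
# (group -> recall) with the mediated model (group + organization -> recall).
# Variable names are placeholders.

def mediation_check(group, organization, recall):
    X_direct = sm.add_constant(np.asarray(group, dtype=float))
    direct = sm.OLS(recall, X_direct).fit()                       # total effect c
    a_path = sm.OLS(organization, X_direct).fit()                 # group -> mediator (a)
    X_med = sm.add_constant(np.column_stack([group, organization]))
    mediated = sm.OLS(recall, X_med).fit()                        # c' and b paths
    return {
        "total_effect_c": direct.params[1],
        "a_path": a_path.params[1],
        "b_path": mediated.params[2],
        "direct_effect_c_prime": mediated.params[1],
        "indirect_ab": a_path.params[1] * mediated.params[2],
    }
```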
Predictors of Verb-Mediated Anticipatory Eye Movements in the Visual World
ERIC Educational Resources Information Center
Hintz, Florian; Meyer, Antje S.; Huettig, Falk
2017-01-01
Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we…
ERIC Educational Resources Information Center
Rabab'h, Belal; Veloo, Arsaythamby
2015-01-01
Jordanian 8th grade students revealed low achievement in mathematics through four periods (1999, 2003, 2007 & 2011) of Trends in International Mathematics and Science Study (TIMSS). This study aimed to determine whether spatial visualization mediates the affect of Mathematics Learning Strategies (MLS) factors namely mathematics attitude,…
Kingery, Kathleen M; Narad, Megan; Garner, Annie A; Antonini, Tanya N; Tamm, Leanne; Epstein, Jeffery N
2015-08-01
The purpose of the research study was to determine whether ADHD- and texting-related driving impairments are mediated by extended visual glances away from the roadway. Sixty-one adolescents (ADHD =28, non-ADHD =33; 62% male; 11% minority) aged 16-17 with a valid driver's license were videotaped while engaging in a driving simulation that included a No Distraction, Hands-Free Phone Conversation, and Texting condition. Two indicators of visual inattention were coded: 1) percentage of time with eyes diverted from the roadway; and 2) number of extended (greater than 2 s) visual glances away from the roadway. Adolescents with ADHD displayed significantly more visual inattention to the roadway on both visual inattention measures. Increased lane position variability among adolescents with ADHD compared to those without ADHD during the Hands-Free Phone Conversation and Texting conditions was mediated by an increased number of extended glances away from the roadway. Similarly, texting resulted in decreased visual attention to the roadway. Finally, increased lane position variability during texting was also mediated by the number of extended glances away from the roadway. Both ADHD and texting impair visual attention to the roadway and the consequence of this visual inattention is increased lane position variability. Visual inattention is implicated as a possible mechanism for ADHD- and texting-related deficits and suggests that driving interventions designed to address ADHD- or texting-related deficits in adolescents need to focus on decreasing extended glances away from the roadway.
Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness
Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.
2017-01-01
Previous research has investigated ways to quantify visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors), high-level visual features (scene-level entities that convey semantic information such as objects), and how models of those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features. These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158
FLEXnav: a fuzzy logic expert dead-reckoning system for the Segway RMP
NASA Astrophysics Data System (ADS)
Ojeda, Lauro; Raju, Mukunda; Borenstein, Johann
2004-09-01
Most mobile robots use a combination of absolute and relative sensing techniques for position estimation. Relative positioning techniques are generally known as dead-reckoning. Many systems use odometry as their only dead-reckoning means. However, in recent years fiber optic gyroscopes have become more affordable and are being used on many platforms to supplement odometry, especially in indoor applications. Still, if the terrain is not level (i.e., rugged or rolling terrain), the tilt of the vehicle introduces errors into the conversion of gyro readings to vehicle heading. In order to overcome this problem, vehicle tilt must be measured and factored into the heading computation. A unique new mobile robot is the Segway Robotics Mobility Platform (RMP). This functionally close relative of the innovative Segway Human Transporter (HT) stabilizes a statically unstable single-axle robot dynamically, based on the principle of the inverted pendulum. While this approach works very well for human transportation, it introduces a unique set of challenges to navigation equipment using an onboard gyro. This is due to the fact that in operation the Segway RMP constantly changes its forward tilt, to prevent dynamically falling over. This paper introduces our new Fuzzy Logic Expert rule-based navigation (FLEXnav) method for fusing data from multiple gyroscopes and accelerometers in order to accurately estimate the attitude (i.e., heading and tilt) of a mobile robot. The attitude information is then further fused with wheel encoder data to estimate the three-dimensional position of the mobile robot. We have further extended this approach to include the special conditions of operation on the Segway RMP. The paper presents experimental results of a Segway RMP equipped with our system and running over moderately rugged terrain.
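Why tilt must be factored into the heading computation can be shown with a much simpler estimator than FLEXnav's fuzzy expert rules: roll and pitch from the accelerometers map the body-frame gyro rates to a yaw rate before integration. The sketch below is a generic complementary-style illustration under that assumption, not the authors' method.

```python
import numpy as np

# Simplified tilt-compensated heading sketch: roll and pitch come from the
# accelerometers (valid when the platform is not accelerating hard), and the
# body-frame gyro rates are mapped through the tilt to a yaw rate before
# integration. FLEXnav replaces this fixed scheme with fuzzy expert rules.

def tilt_from_accel(ax, ay, az):
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

def integrate_heading(heading, gyro_xyz, roll, pitch, dt):
    gx, gy, gz = gyro_xyz                        # body rates (rad/s)
    yaw_rate = (gy * np.sin(roll) + gz * np.cos(roll)) / np.cos(pitch)
    return heading + yaw_rate * dt               # heading in rad; wrapping left to the caller
```

On a platform such as the RMP, whose forward tilt changes constantly, ignoring the pitch term above would steadily corrupt the heading and, through it, the dead-reckoned position.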
Murray, Jennifer; Williams, Brian; Hoskins, Gaylor; Skar, Silje; McGhee, John; Treweek, Shaun; Sniehotta, Falko F; Sheikh, Aziz; Brown, Gordon; Hagen, Suzanne; Cameron, Linda; Jones, Claire; Gauld, Dylan
2016-01-01
Visualisation techniques are used in a range of healthcare interventions. However, these frequently lack a coherent rationale or clear theoretical basis. This lack of definition and explicit targeting of the underlying mechanisms may impede the success and evaluation of the intervention. We describe the theoretical development, deployment, and pilot evaluation of a complex visually mediated behavioural intervention. The exemplar intervention focused on increasing physical activity among young people with asthma. We employed an explicit five-stage development model, which was actively supported by a consultative user group. The developmental stages involved establishing the theoretical basis, establishing a narrative structure, visual rendering, checking interpretation, and pilot testing. We conducted in-depth interviews and focus groups during early development and checking, followed by an online experiment for pilot testing. A total of 91 individuals, including young people with asthma, parents, teachers, and health professionals, were involved in development and testing. Our final intervention consisted of two components: (1) an interactive 3D computer animation to create intentions and (2) an action plan and volitional help sheet to promote the translation of intentions to behaviour. Theory was mediated throughout by visual and audio forms. The intervention was regarded as highly acceptable, engaging, and meaningful by all stakeholders. The perceived impact on asthma understanding and intentions was reported positively, with most individuals saying that the 3D computer animation had either clarified a range of issues or made them more real. Our five-stage model underpinned by extensive consultation worked well and is presented as a framework to support explicit decision-making for others developing theory-informed visually mediated interventions. We have demonstrated the ability to develop theory-based visually mediated behavioural interventions. However, attention needs to be paid to the potential ambiguity associated with images and thus the concept of visual literacy among patients. Our revised model may be helpful as a guide to aid development, acceptability, and ultimately effectiveness.
The relationship of global form and motion detection to reading fluency.
Englund, Julia A; Palomares, Melanie
2012-08-15
Visual motion processing in typical and atypical readers has suggested that aspects of reading and motion processing share a common cortical network rooted in dorsal visual areas. Few studies have examined the relationship between reading performance and visual form processing, which is mediated by ventral cortical areas. We investigated whether reading fluency correlates with coherent motion detection thresholds in typically developing children using random dot kinematograms. As a comparison, we also evaluated the correlation between reading fluency and static form detection thresholds. Results show that both dorsal and ventral visual functions correlated with components of reading fluency, but that they have different developmental characteristics. Motion coherence thresholds correlated with reading rate and accuracy, which both improved with chronological age. Interestingly, when controlling for non-verbal abilities and age, reading accuracy significantly correlated with thresholds for coherent form detection but not coherent motion detection in typically developing children. Dorsal visual functions that mediate motion coherence seem to be related to the maturation of broad cognitive functions, including non-verbal abilities and reading fluency. However, ventral visual functions that mediate form coherence seem to be specifically related to accurate reading in typically developing children. Copyright © 2012 Elsevier Ltd. All rights reserved.
Kingery, Kathleen M.; Narad, Megan; Garner, Annie A.; Antonini, Tanya N.; Tamm, Leanne; Epstein, Jeffery N.
2014-01-01
The purpose of the research study was to determine whether ADHD- and texting-related driving impairments are mediated by extended visual glances away from the roadway. Sixty-one adolescents (ADHD = 28, non-ADHD = 33; 62% male; 11% minority) aged 16–17 with a valid driver’s license were videotaped while engaging in a driving simulation that included a No Distraction, Hands-Free Phone Conversation, and Texting condition. Two indicators of visual inattention were coded: 1) percentage of time with eyes diverted from the roadway; and 2) number of extended (greater than 2 seconds) visual glances away from the roadway. Adolescents with ADHD displayed significantly more visual inattention to the roadway on both visual inattention measures. Increased lane position variability among adolescents with ADHD compared to those without ADHD during the Hands-Free Phone Conversation and Texting conditions was mediated by an increased number of extended glances away from the roadway. Similarly, texting resulted in decreased visual attention to the roadway. Finally, increased lane position variability during texting was also mediated by the number of extended glances away from the roadway. Both ADHD and texting impair visual attention to the roadway and the consequence of this visual inattention is increased lane position variability. Visual inattention is implicated as a possible mechanism for ADHD- and texting-related deficits and suggests that driving interventions designed to address ADHD- or texting-related deficits in adolescents need to focus on decreasing extended glances away from the roadway. PMID:25416444
Maximally Informative Statistics for Localization and Mapping
NASA Technical Reports Server (NTRS)
Deans, Matthew C.
2001-01-01
This paper presents an algorithm for localization and mapping for a mobile robot using monocular vision and odometry as its means of sensing. The approach uses the Variable State Dimension filtering (VSDF) framework to combine aspects of Extended Kalman filtering and nonlinear batch optimization. This paper describes two primary improvements to the VSDF. The first is to use an interpolation scheme based on Gaussian quadrature to linearize measurements rather than relying on analytic Jacobians. The second is to replace the inverse covariance matrix in the VSDF with its Cholesky factor to improve the computational complexity. Results of applying the filter to the problem of localization and mapping with omnidirectional vision are presented.
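The two VSDF improvements described above lend themselves to a brief illustration. Below is a minimal sketch, assuming a generic nonlinear measurement function h and a Gaussian state with mean m and covariance P, of one common way to linearize a measurement statistically from quadrature (sigma) points rather than from analytic Jacobians; the resulting (H, b) pair is exactly what a Kalman-style update consumes, and in a square-root formulation the inverse covariance would be carried as its Cholesky factor. The function name, point set, and weighting scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def statistical_linearization(h, m, P, kappa=1.0):
    """Fit h(x) ~ H x + b around N(m, P) using symmetric quadrature (sigma)
    points, avoiding analytic Jacobians."""
    n = len(m)
    S = np.linalg.cholesky((n + kappa) * P)        # square-root spread of the points
    pts = [m] + [m + S[:, i] for i in range(n)] + [m - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    X = np.array(pts)                              # (2n+1, n) sigma points
    Y = np.array([h(x) for x in X])                # (2n+1, p) propagated points
    xm, ym = w @ X, w @ Y
    Pxy = (X - xm).T @ np.diag(w) @ (Y - ym)       # state/measurement cross-covariance
    H = np.linalg.solve(P, Pxy).T                  # H = Pyx P^{-1}
    b = ym - H @ m
    return H, b

# Example: a range measurement of a planar landmark at the origin.
H, b = statistical_linearization(lambda x: np.array([np.hypot(x[0], x[1])]),
                                 m=np.array([3.0, 4.0]), P=0.1 * np.eye(2))
```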
Visual Stress in Adults with and without Dyslexia
ERIC Educational Resources Information Center
Singleton, Chris; Trotter, Susannah
2005-01-01
The relationship between dyslexia and visual stress (sometimes known as Meares-Irlen syndrome) is uncertain. While some theorists have hypothesised an aetiological link between the two conditions, mediated by the magnocellular visual system, at the present time the predominant theories of dyslexia and visual stress see them as distinct, unrelated…
Lim, Seung-Lark; Padmala, Srikanth; Pessoa, Luiz
2009-01-01
If the amygdala is involved in shaping perceptual experience when affectively significant visual items are encountered, responses in this structure should be correlated with both visual cortex responses and behavioral reports. Here, we investigated how affective significance shapes visual perception during an attentional blink paradigm combined with aversive conditioning. Behaviorally, following aversive learning, affectively significant scenes (CS+) were better detected than neutral (CS−) ones. In terms of mean brain responses, both amygdala and visual cortical responses were stronger during CS+ relative to CS− trials. Increased brain responses in these regions were associated with improved behavioral performance across participants and followed a mediation-like pattern. Importantly, the mediation pattern was observed in a trial-by-trial analysis, revealing that the specific pattern of trial-by-trial variability in brain responses was closely related to single-trial behavioral performance. Furthermore, the influence of the amygdala on visual cortical responses was consistent with a mediation, although partial, via frontal brain regions. Our results thus suggest that affective significance potentially determines the fate of a visual item during competitive interactions by enhancing sensory processing through both direct and indirect paths. In so doing, the amygdala helps separate the significant from the mundane. PMID:19805383
Effects of peer-mediated implementation of visual scripts in middle school.
Ganz, Jennifer B; Heath, Amy K; Lund, Emily M; Camargo, Siglia P H; Rispoli, Mandy J; Boles, Margot; Plaisance, Lauren
2012-05-01
Although research has investigated the impact of peer-mediated interventions and visual scripts on social and communication skills in children with autism spectrum disorders, no studies to date have investigated peer-mediated implementation of scripts. This study investigated the effects of peer-implemented scripts on a middle school student with autism, intellectual impairments, and speech-language impairment via a multiple baseline single-case research design across behaviors. The target student demonstrated improvements in three communicative behaviors when implemented by a trained peer; however, behaviors did not generalize to use with an untrained typically developing peer.
Lowery, Rebecca L; Tremblay, Marie-Eve; Hopkins, Brittany E; Majewska, Ania K
2017-11-01
Microglia have recently been implicated as key regulators of activity-dependent plasticity, where they contribute to the removal of inappropriate or excess synapses. However, the molecular mechanisms that mediate this microglial function are still not well understood. Although multiple studies have implicated fractalkine signaling as a mediator of microglia-neuron communications during synaptic plasticity, it is unclear whether this is a universal signaling mechanism or whether its role is limited to specific brain regions and stages of the lifespan. Here, we examined whether fractalkine signaling mediates microglial contributions to activity-dependent plasticity in the developing and adolescent visual system. Using genetic ablation of fractalkine's cognate receptor, CX3CR1, and both ex vivo characterization and in vivo imaging in mice, we examined whether fractalkine signaling is required for microglial dynamics and modulation of synapses, as well as activity-dependent plasticity in the visual system. We did not find a role for fractalkine signaling in mediating microglial properties during visual plasticity. Ablation of CX3CR1 had no effect on microglial density, distribution, morphology, or motility, in either adolescent or young adult mice across brain regions that include the visual cortex. Ablation of CX3CR1 also had no effect on baseline synaptic turnover or contact dynamics between microglia and neurons. Finally, we found that fractalkine signaling is not required for either early or late forms of activity-dependent visual system plasticity. These findings suggest that fractalkine is not a universal regulator of synaptic plasticity, but rather has heterogeneous roles in specific brain regions and life stages. © 2017 Wiley Periodicals, Inc.
Continuous Mapping of Tunnel Walls in a Gnss-Denied Environment
NASA Astrophysics Data System (ADS)
Chapman, Michael A.; Min, Cao; Zhang, Deijin
2016-06-01
The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining and other infrastructure) has increased and as the ageing of these structures and their subsequent deterioration have introduced structural degradations and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued with various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting conditions, dust and poor surface textures for feature identification and extraction. A tunnel mapping system using alternate sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging or visual odometry to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, image contrast and lack of salient features. The sensors employed include forward-looking high resolution digital frame cameras coupled with auxiliary light sources. In addition, a high frequency lidar system and a thermal imager are included to offer three dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls. Continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless precise tunnel maps.
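As a rough illustration of the image-bridging idea, the sketch below estimates the relative pose between two successive frames using ORB features and the essential matrix, assuming calibrated camera intrinsics K; it is a generic monocular visual-odometry step, not the authors' pipeline, and the recovered translation is known only up to scale (metric scale must come from another sensor or from known tunnel geometry).

```python
import cv2
import numpy as np

def frame_to_frame_pose(img_prev, img_curr, K):
    """One 'image bridging' step of a visual-odometry chain: relative
    rotation and (unit-norm) translation between two grayscale frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # chain successive (R, t) pairs to extend the trajectory
```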
Automatic Guidance of Visual Attention from Verbal Working Memory
ERIC Educational Resources Information Center
Soto, David; Humphreys, Glyn W.
2007-01-01
Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize,…
NASA Astrophysics Data System (ADS)
Emter, Thomas; Petereit, Janko
2014-05-01
An integrated multi-sensor fusion framework for localization and mapping for autonomous navigation in unstructured outdoor environments based on extended Kalman filters (EKF) is presented. The sensors for localization include an inertial measurement unit, a GPS, a fiber optic gyroscope, and wheel odometry. Additionally, a 3D LIDAR is used for simultaneous localization and mapping (SLAM). A 3D map is built while, concurrently, the vehicle is localized in the 2D map established so far using the current LIDAR scan. Despite the longer run-time of the SLAM algorithm compared with the EKF update, a high update rate is still guaranteed by carefully joining and synchronizing two parallel localization estimators.
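For readers unfamiliar with the EKF cycle underlying such a framework, here is a minimal sketch of one predict/update step for a planar pose [x, y, heading], with wheel odometry driving the prediction and a GPS fix correcting position; the state layout and noise matrices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate a planar pose [x, y, theta] with wheel-odometry inputs
    (linear velocity v, yaw rate w) over a time step dt."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])                     # Jacobian of the motion model
    return x_pred, F @ P @ F.T + Q

def ekf_update_gps(x, P, z, R):
    """Correct the pose with a GPS position fix z = [x, y]."""
    H = np.array([[1.0, 0, 0],
                  [0, 1.0, 0]])                    # GPS observes position only
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```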
Attentional Modulation in Visual Cortex Is Modified during Perceptual Learning
ERIC Educational Resources Information Center
Bartolucci, Marco; Smith, Andrew T.
2011-01-01
Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…
Mui, Amanda M.; Yang, Victoria; Aung, Moe H.; Fu, Jieming; Adekunle, Adewumi N.; Prall, Brian C.; Sidhu, Curran S.; Park, Han na; Boatright, Jeffrey H.; Iuvone, P. Michael
2018-01-01
Visual experience during the critical period modulates visual development such that deprivation causes visual impairments while stimulation induces enhancements. This study aimed to determine whether visual stimulation in the form of daily optomotor response (OMR) testing during the mouse critical period (1) improves aspects of visual function, (2) involves retinal mechanisms and (3) is mediated by brain derived neurotrophic factor (BDNF) and dopamine (DA) signaling pathways. We tested spatial frequency thresholds in C57BL/6J mice daily from postnatal days 16 to 23 (P16 to P23) using OMR testing. Daily OMR-treated mice were compared to littermate controls that were placed in the OMR chamber without moving gratings. Contrast sensitivity thresholds, electroretinograms (ERGs), visual evoked potentials, and pattern ERGs were acquired at P21. To determine the role of BDNF signaling, a TrkB receptor antagonist (ANA-12) was systemically injected 2 hours prior to OMR testing in another cohort of mice. BDNF immunohistochemistry was performed on retina and brain sections. Retinal DA levels were measured using high-performance liquid chromatography. Daily OMR testing enhanced spatial frequency thresholds and contrast sensitivity compared to controls. OMR-treated mice also had improved rod-driven ERG oscillatory potential response times, greater BDNF immunoreactivity in the retinal ganglion cell layer, and increased retinal DA content compared to controls. VEPs and pattern ERGs were unchanged. Systemic delivery of ANA-12 attenuated OMR-induced visual enhancements. Daily OMR testing during the critical period leads to general visual function improvements accompanied by increased DA and BDNF in the retina, with this process being requisitely mediated by TrkB activation. These results suggest that novel combination therapies involving visual stimulation and using both behavioral and molecular approaches may benefit degenerative retinal diseases or amblyopia. PMID:29408880
Visual training improves perceptual grouping based on basic stimulus features.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
2017-10-01
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
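The threshold procedure described (stimuli becoming progressively disorganized contingent on correct discrimination) is essentially an adaptive staircase. A toy sketch of a 1-up/2-down staircase on a "grouping coherence" level follows; the rule, step size, and simulated observer are assumptions for illustration, not the study's exact protocol.

```python
import random

def run_staircase(respond, level=1.0, step=0.05, n_trials=80):
    """Toy 1-up/2-down staircase: two correct responses in a row lower the
    coherence level (harder), one error raises it (easier).
    respond(level) should return True for a correct discrimination."""
    correct_streak, reversals, last_direction = 0, [], None
    for _ in range(n_trials):
        if respond(level):
            correct_streak += 1
            if correct_streak < 2:
                continue                      # no change until two in a row
            direction, correct_streak = -1, 0
        else:
            direction, correct_streak = +1, 0
        if last_direction is not None and direction != last_direction:
            reversals.append(level)           # record reversal points
        last_direction = direction
        level = min(1.0, max(0.0, level + direction * step))
    tail = reversals[-6:]
    return sum(tail) / max(1, len(tail))      # threshold = mean of last reversals

# Example with a simulated observer whose true threshold is roughly 0.4:
threshold = run_staircase(lambda lv: random.random() < (0.5 + 0.5 * (lv > 0.4)))
```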
Introduction to the special issue on visual working memory.
Wolfe, Jeremy M
2014-10-01
Visual working memory is a volatile, limited-capacity memory that appears to play an important role in our impression of a visual world that is continuous in time. It also mediates between the contents of the mind and the contents of that visual world. Research on visual working memory has become increasingly prominent in recent years. The articles in this special issue of Attention, Perception, & Psychophysics describe new empirical findings and theoretical understandings of the topic.
Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon
2012-01-01
Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116
Rhythmic Oscillations of Visual Contrast Sensitivity Synchronized with Action
Tomassini, Alice; Spinelli, Donatella; Jacono, Marco; Sandini, Giulio; Morrone, Maria Concetta
2016-01-01
It is well known that the motor and the sensory systems structure sensory data collection and cooperate to achieve an efficient integration and exchange of information. Increasing evidence suggests that both motor and sensory functions are regulated by rhythmic processes reflecting alternating states of neuronal excitability, and these may be involved in mediating sensory-motor interactions. Here we show an oscillatory fluctuation in early visual processing time locked with the execution of voluntary action, and, crucially, even for visual stimuli irrelevant to the motor task. Human participants were asked to perform a reaching movement toward a display and judge the orientation of a Gabor patch, near contrast threshold, briefly presented at random times before and during the reaching movement. When the data are temporally aligned to the onset of movement, visual contrast sensitivity oscillates with periodicity within the theta band. Importantly, the oscillations emerge during the motor planning stage, ~500 ms before movement onset. We suggest that brain oscillatory dynamics may mediate an automatic coupling between early motor planning and early visual processing, possibly instrumental in linking and closing up the visual-motor control loop. PMID:25948254
Semantics of the visual environment encoded in parahippocampal cortex
Bonner, Michael F.; Price, Amy Rose; Peelle, Jonathan E.; Grossman, Murray
2016-01-01
Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain. PMID:26679216
Shen, Wei; Qu, Qingqing; Tong, Xiuhong
2018-05-01
The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at full-phonological overlap; in Experiment 2, phonological information was manipulated at partial-phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.
Barker, Lynne Ann; Morton, Nicholas; Romanowski, Charles A J; Gosden, Kevin
2013-10-24
We report a rare case of a patient unable to read (alexic) and write (agraphic) after a mild head injury. He had preserved speech and comprehension, could spell aloud, identify words spelt aloud and copy letter features. He was unable to visualise letters but showed no problems with digits. Neuropsychological testing revealed general visual memory, processing speed and imaging deficits. Imaging data revealed an 8 mm colloid cyst of the third ventricle that splayed the fornix. Little is known about functions mediated by fornical connectivity, but this region is thought to contribute to memory recall. Other regions thought to mediate letter recognition and letter imagery, visual word form area and visual pathways were intact. We remediated reading and writing by multimodal letter retraining. The study raises issues about the neural substrates of reading, role of fornical tracts to selective memory in the absence of other pathology, and effective remediation strategies for selective functional deficits.
Memory-guided saccade processing in visual form agnosia (patient DF).
Rossit, Stéphanie; Szymanek, Larissa; Butler, Stephen H; Harvey, Monika
2010-01-01
According to Milner and Goodale's model (The visual brain in action, Oxford University Press, Oxford, 2006) areas in the ventral visual stream mediate visual perception and off-line actions, whilst regions in the dorsal visual stream mediate the on-line visual control of action. Strong evidence for this model comes from a patient (DF), who suffers from visual form agnosia after bilateral damage to the ventro-lateral occipital region, sparing V1. It has been reported that she is normal in immediate reaching and grasping, yet severely impaired when asked to perform delayed actions. Here we investigated whether this dissociation would extend to saccade execution. Neurophysiological studies and TMS work in humans have shown that the posterior parietal cortex (PPC), on the right in particular (supposedly spared in DF), is involved in the control of memory-guided saccades. Surprisingly though, we found that, just as reported for reaching and grasping, DF's saccadic accuracy was much reduced in the memory-guided compared with the stimulus-guided condition. These data support the idea of a tight coupling of eye and hand movements and further suggest that dorsal stream structures may not be sufficient to drive memory-guided saccadic performance.
NASA Astrophysics Data System (ADS)
Willis, Andrew R.; Brink, Kevin M.
2016-06-01
This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that use these sensors for tracking, motion capture, or robotic vision applications. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean invariant feature components are computed at keypoints, which fuse local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed, and its computational complexity and accuracy are compared with leading alternative 3D features.
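The core geometric idea, building an orthonormal local reference frame from the depth-map surface normal and the image gradient direction, can be sketched in a few lines. The construction below (project the gradient into the tangent plane, then complete the frame with a cross product) is a generic gradient-and-normal frame and should be read as an assumption-laden illustration rather than the exact iGRaND definition.

```python
import numpy as np

def local_frame(normal, gradient_dir):
    """Build an orthonormal 3D frame at a keypoint: the surface normal gives
    the z-axis, the image gradient (projected onto the tangent plane) gives
    the x-axis. Assumes the gradient is not parallel to the normal."""
    z = normal / np.linalg.norm(normal)
    g = gradient_dir - np.dot(gradient_dir, z) * z   # remove the normal component
    x = g / np.linalg.norm(g)
    y = np.cross(z, x)                               # completes a right-handed frame
    return np.stack([x, y, z], axis=1)               # columns are the frame axes

# Feature components expressed in this frame rotate with the surface patch,
# which is what makes them viewpoint-invariant for Lambertian surfaces.
R = local_frame(np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0]))
```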
NASA Technical Reports Server (NTRS)
Klingelhoefer, G.; Rodionov, D. S.; Morris, R. V.; Schroeder, C.; deSouza, P. A.; Ming, D. W.; Yen, A. S.; Bernhardt, B.; Renz, F.; Fleischer, I.
2005-01-01
The miniaturized Mössbauer (MB) spectrometer MIMOS II [1] is part of the Athena payload of NASA's twin Mars Exploration Rovers "Spirit" (MER-A) and "Opportunity" (MER-B). It determines the Fe-bearing mineralogy of Martian soils and rocks at the rovers' respective landing sites, Gusev crater and Meridiani Planum. Both spectrometers performed successfully during the first year of operation. Total integration time is about 49 days for MER-A (79 samples) and 34 days for MER-B (85 samples). As a point of curiosity, the total odometry of the oscillating part of the MB drive exceeds 35 km for both rovers.
Pottage, Claire L; Schaefer, Alexandre
2012-02-01
The emotional enhancement of memory is often thought to be determined by attention. However, recent evidence using divided attention paradigms suggests that attention does not play a significant role in the formation of memories for aversive pictures. We report a study that investigated this question using a paradigm in which participants had to encode lists of randomly intermixed negative and neutral pictures under conditions of full attention and divided attention followed by a free recall test. Attention was divided by a highly demanding concurrent task tapping visual processing resources. Results showed that the advantage in recall for aversive pictures was still present in the DA condition. However, mediation analyses also revealed that concurrent task performance significantly mediated the emotional enhancement of memory under divided attention. This finding suggests that visual attentional processes play a significant role in the formation of emotional memories. PsycINFO Database Record (c) 2012 APA, all rights reserved
Price, D; Tyler, L K; Neto Henriques, R; Campbell, K L; Williams, N; Treder, M S; Taylor, J R; Henson, R N A
2017-06-09
Slowing is a common feature of ageing, yet a direct relationship between neural slowing and brain atrophy is yet to be established in healthy humans. We combine magnetoencephalographic (MEG) measures of neural processing speed with magnetic resonance imaging (MRI) measures of white and grey matter in a large population-derived cohort to investigate the relationship between age-related structural differences and visual evoked field (VEF) and auditory evoked field (AEF) delay across two different tasks. Here we use a novel technique to show that VEFs exhibit a constant delay, whereas AEFs exhibit delay that accumulates over time. White-matter (WM) microstructure in the optic radiation partially mediates visual delay, suggesting increased transmission time, whereas grey matter (GM) in auditory cortex partially mediates auditory delay, suggesting less efficient local processing. Our results demonstrate that age has dissociable effects on neural processing speed, and that these effects relate to different types of brain atrophy.
ERIC Educational Resources Information Center
Howley, Sarah A.; Prasad, Sarah E.; Pender, Niall P.; Murphy, Kieran C.
2012-01-01
22q11.2 Deletion Syndrome (22q11DS) is a common microdeletion disorder associated with mild to moderate intellectual disability and specific neurocognitive deficits, particularly in visual-motor and attentional abilities. Currently there is evidence that the visual-motor profile of 22q11DS is not entirely mediated by intellectual disability and…
Stephen, Ian D; Sturman, Daniel; Stevenson, Richard J; Mond, Jonathan; Brooks, Kevin R
2018-01-01
Body size misperception-the belief that one is larger or smaller than reality-affects a large and growing segment of the population. Recently, studies have shown that exposure to extreme body stimuli results in a shift in the point of subjective normality, suggesting that visual adaptation may be a mechanism by which body size misperception occurs. Yet, despite being exposed to a similar set of bodies, some individuals within a given geographical area will develop body size misperception and others will not. The reason for these individual differences is currently unknown. One possible explanation stems from the observation that women with lower levels of body satisfaction have been found to pay more attention to images of thin bodies. However, while attention has been shown to enhance visual adaptation effects in low-level (e.g., rotational and linear motion) and high-level stimuli (e.g., facial gender), it is not known whether this effect exists in visual adaptation to body size. Here, we test the hypothesis that there is an indirect effect of body satisfaction on the direction and magnitude of the body fat adaptation effect, mediated via visual attention (i.e., selectively attending to images of thin over fat bodies or vice versa). Significant mediation effects were found in both men and women, suggesting that observers' level of body satisfaction may influence selective visual attention to thin or fat bodies, which in turn influences the magnitude and direction of visual adaptation to body size. This may provide a potential mechanism by which some individuals develop body size misperception-a risk factor for eating disorders, compulsive exercise behaviour and steroid abuse-while others do not.
The artist emerges: visual art learning alters neural structure and function.
Schlegel, Alexander; Alexander, Prescott; Fogelson, Sergey V; Li, Xueting; Lu, Zhengang; Kohler, Peter J; Riley, Enrico; Tse, Peter U; Meng, Ming
2015-01-15
How does the brain mediate visual artistic creativity? Here we studied behavioral and neural changes in drawing and painting students compared to students who did not study art. We investigated three aspects of cognition vital to many visual artists: creative cognition, perception, and perception-to-action. We found that the art students became more creative via the reorganization of prefrontal white matter but did not find any significant changes in perceptual ability or related neural activity in the art students relative to the control group. Moreover, the art students improved in their ability to sketch human figures from observation, and multivariate patterns of cortical and cerebellar activity evoked by this drawing task became increasingly separable between art and non-art students. Our findings suggest that the emergence of visual artistic skills is supported by plasticity in neural pathways that enable creative cognition and mediate perceptuomotor integration. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Brooks, Brian E.; Cooper, Eric E.
2006-01-01
Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…
Weng, Xiaoqian; Li, Guangze; Li, Rongbao
2016-08-01
This study examined the mediating role of working memory (WM) in the relation between rapid automatized naming (RAN) and Chinese reading comprehension. Three tasks differentially assessing the visual and verbal components of WM were programmed in E-Prime 2.0. Data collected from 55 Chinese college students were analyzed using correlations and hierarchical regression methods to determine the connection among RAN, reading comprehension, and WM components. Results showed that WM played a significant mediating role in the RAN-reading relation and that auditory WM made stronger contributions than visual WM. Taking into account the multi-component nature of WM and the specificity of Chinese reading processing, this study discussed the mediating powers of the WM components, particularly auditory WM, further clarifying the possible components involved in the RAN-reading relation and thus providing some insight into the complicated Chinese reading process.
Do attentional capacities and processing speed mediate the effect of age on executive functioning?
Gilsoul, Jessica; Simon, Jessica; Hogge, Michaël; Collette, Fabienne
2018-02-06
The executive processes are well known to decline with age, and similar data also exists for attentional capacities and processing speed. Therefore, we investigated whether these two last nonexecutive variables would mediate the effect of age on executive functions (inhibition, shifting, updating, and dual-task coordination). We administered a large battery of executive, attentional and processing speed tasks to 104 young and 71 older people, and we performed mediation analyses with variables showing a significant age effect. All executive and processing speed measures showed age-related effects while only the visual scanning task performance (selective attention) was explained by age when controlled for gender and educational level. Regarding mediation analyses, visual scanning partially mediated the age effect on updating while processing speed partially mediated the age effect on shifting, updating and dual-task coordination. In a more exploratory way, inhibition was also found to partially mediate the effect of age on the three other executive functions. Attention did not greatly influence executive functioning in aging while, in agreement with the literature, processing speed seems to be a major mediator of the age effect on these processes. Interestingly, the global pattern of results seems also to indicate an influence of inhibition but further studies are needed to confirm the role of that variable as a mediator and its relative importance by comparison with processing speed.
The effect of age on fluid intelligence is fully mediated by physical health.
Bergman, Ingvar; Almkvist, Ove
2013-01-01
The present study investigated the extent to which the effect of age on cognitive ability is predicted by individual differences in physical health. The sample consisted of 118 volunteer subjects who were healthy and ranged in age from 26 to 91 years. The examinations included a clinical investigation, magnetic resonance imaging (MRI) brain neuroimaging, and a comprehensive neuropsychological assessment. The effect of age on fluid IQ (with and without visual spatial praxis) and on crystallized IQ was tested for being fully, partially, or non-mediated by physical health. Structural equation analyses showed that the best and most parsimonious fit to the data was provided by models that were fully mediated for fluid IQ without praxis, non-mediated for crystallized IQ and partially mediated for fluid IQ with praxis. The diseases of the circulatory and nervous systems were the major mediators. It was concluded from the pattern of findings that the effect of age on fluid intelligence is fully mediated by physical health, while crystallized intelligence is non-mediated and visual spatial praxis is partially mediated, influenced mainly by direct effects of age. Our findings imply that improving health by acting against the common age-related circulatory- and nervous system diseases and risk factors will oppose the decline in fluid intelligence with age. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Jane Kapler Smith; Donald E. Zimmerman; Carol Akerelrea; Garrett O'Keefe
2008-01-01
Natural resource managers use a variety of computer-mediated presentation methods to communicate management practices to the public. We explored the effects of using the Stand Visualization System to visualize and animate predictions from the Forest Vegetation Simulator-Fire and Fuels Extension in presentations explaining forest succession (forest growth and change...
ERIC Educational Resources Information Center
Zhou, Liu; He, Zijiang J.; Ooi, Teng Leng
2013-01-01
Dimly lit targets in the dark are perceived as located about an implicit slanted surface that delineates the visual system's intrinsic bias (Ooi, Wu, & He, 2001). If the intrinsic bias reflects the internal model of visual space--as proposed here--its influence should extend beyond target localization. Our first 2 experiments demonstrated that…
Language-Mediated Eye Movements in the Absence of a Visual World: The "Blank Screen Paradigm"
ERIC Educational Resources Information Center
Altmann, Gerry T. M.
2004-01-01
The "visual world paradigm" typically involves presenting participants with a visual scene and recording eye movements as they either hear an instruction to manipulate objects in the scene or as they listen to a description of what may happen to those objects. In this study, participants heard each target sentence only after the corresponding…
ERIC Educational Resources Information Center
Wu, Shiyu; Ma, Zheng
2017-01-01
Previous research has indicated that, in viewing a visual word, the activated phonological representation in turn activates its homophone, causing semantic interference. Using this mechanism of phonological mediation, this study investigated native-language phonological interference in visual recognition of Chinese two-character compounds by early…
Buschow, Christian; Charo, Jehad; Anders, Kathleen; Loddenkemper, Christoph; Jukica, Ana; Alsamah, Wisam; Perez, Cynthia; Willimsky, Gerald; Blankenstein, Thomas
2010-03-15
Visualizing oncogene/tumor Ag expression by noninvasive imaging is of great interest for understanding processes of tumor development and therapy. We established transgenic (Tg) mice conditionally expressing a fusion protein of the SV40 large T Ag and luciferase (TagLuc) that allows monitoring of oncogene/tumor Ag expression by bioluminescent imaging upon Cre recombinase-mediated activation. Independent of Cre-mediated recombination, the TagLuc gene was expressed at low levels in different tissues, probably due to the leakiness of the stop cassette. The level of spontaneous TagLuc expression, detected by bioluminescent imaging, varied between the different Tg lines, depended on the nature of the Tg expression cassette, and correlated with Tag-specific CTL tolerance. Following liver-specific Cre-loxP site-mediated excision of the stop cassette that separated the promoter from the TagLuc fusion gene, hepatocellular carcinoma development was visualized. The ubiquitous low level TagLuc expression caused the failure of transferred effector T cells to reject Tag-expressing tumors rather than causing graft-versus-host disease. This model may be useful to study different levels of tolerance, monitor tumor development at an early stage, and rapidly visualize the efficacy of therapeutic intervention versus potential side effects of low-level Ag expression in normal tissues.
Shen, Wei; Qu, Qingqing; Li, Xingshan
2016-07-01
In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.
Prefrontal cortex modulates posterior alpha oscillations during top-down guided visual perception
Helfrich, Randolph F.; Huang, Melody; Wilson, Guy; Knight, Robert T.
2017-01-01
Conscious visual perception is proposed to arise from the selective synchronization of functionally specialized but widely distributed cortical areas. It has been suggested that different frequency bands index distinct canonical computations. Here, we probed visual perception on a fine-grained temporal scale to study the oscillatory dynamics supporting prefrontal-dependent sensory processing. We tested whether a predictive context that was embedded in a rapid visual stream modulated the perception of a subsequent near-threshold target. The rapid stream was presented either rhythmically at 10 Hz, to entrain parietooccipital alpha oscillations, or arrhythmically. We identified a 2- to 4-Hz delta signature that modulated posterior alpha activity and behavior during predictive trials. Importantly, delta-mediated top-down control diminished the behavioral effects of bottom-up alpha entrainment. Simultaneous source-reconstructed EEG and cross-frequency directionality analyses revealed that this delta activity originated from prefrontal areas and modulated posterior alpha power. Taken together, this study presents converging behavioral and electrophysiological evidence for frontal delta-mediated top-down control of posterior alpha activity, selectively facilitating visual perception. PMID:28808023
Bilateral Symmetry of Visual Function Loss in Cone-Rod Dystrophies.
Galli-Resta, Lucia; Falsini, Benedetto; Rossi, Giuseppe; Piccardi, Marco; Ziccardi, Lucia; Fadda, Antonello; Minnella, Angelo; Marangoni, Dario; Placidi, Giorgio; Campagna, Francesca; Abed, Edoardo; Bertelli, Matteo; Zuntini, Monia; Resta, Giovanni
2016-07-01
To investigate bilateral symmetry of visual impairment in cone-rod dystrophy (CRD) patients and understand the feasibility of clinical trial designs treating one eye and using the untreated eye as an internal control. This was a retrospective study of visual function loss measures in 436 CRD patients followed at the Ophthalmology Department of the Catholic University in Rome. Clinical measures considered were best-corrected visual acuity, focal macular cone electroretinogram (fERG), and Ganzfeld cone-mediated and rod-mediated electroretinograms. Interocular agreement in each of these clinical indexes was assessed by t- and Wilcoxon tests for paired samples, structural (Deming) regression analysis, and intraclass correlation. Baseline and follow-up measures were analyzed. A separate analysis was performed on the subset of 61 CRD patients carrying likely disease-causing mutations in the ABCA4 gene. Statistical tests show a very high degree of bilateral symmetry in the extent and progression of visual impairment in the fellow eyes of CRD patients. These data contribute to a better understanding of CRDs and support the feasibility of clinical trial designs involving unilateral eye treatment with the use of fellow eye as internal control.
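Of the agreement statistics listed, Deming (structural) regression is the least familiar; a minimal sketch follows, assuming an error-variance ratio of 1 between the two eyes' measurements (an illustrative choice, not necessarily the study's). Perfect interocular symmetry would correspond to a slope near 1 and an intercept near 0.

```python
import numpy as np

def deming_regression(x, y, lam=1.0):
    """Deming (structural) regression of y on x when both variables carry
    measurement error; lam is the ratio of the two error variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx +
             np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Toy example: near-identical measurements in the two eyes give slope ~ 1.
slope, intercept = deming_regression([0.1, 0.3, 0.5, 0.8], [0.12, 0.28, 0.52, 0.79])
```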
Giesbrecht, Barry; Sy, Jocelyn L.; Guerin, Scott A.
2012-01-01
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants’ subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment. PMID:23099047
Ibrahim, Leena A.; Mesik, Lukas; Ji, Xu-ying; Fang, Qi; Li, Hai-fu; Li, Ya-tang; Zingg, Brian; Zhang, Li I.; Tao, Huizhong Whit
2016-01-01
Cross-modality interaction in sensory perception is advantageous for animals’ survival. How cortical sensory processing is cross-modally modulated and what the underlying neural circuits are remain poorly understood. In mouse primary visual cortex (V1), we discovered that orientation selectivity of layer (L)2/3 but not L4 excitatory neurons was sharpened in the presence of sound or optogenetic activation of projections from primary auditory cortex (A1) to V1. The effect was manifested by decreased average visual responses yet increased responses at the preferred orientation. It was more pronounced at lower visual contrast, and was diminished by suppressing L1 activity. L1 neurons were strongly innervated by A1-V1 axons and excited by sound, while visual responses of L2/3 vasoactive intestinal peptide (VIP) neurons were suppressed by sound, both preferentially at the cell's preferred orientation. These results suggest that the cross-modality modulation is achieved primarily through L1 neuron- and L2/3 VIP cell-mediated inhibitory and disinhibitory circuits. PMID:26898778
Kim, Ho-Joong; Kim, Sung-Chan; Kang, Kyoung-Tak; Chang, Bong-Soon; Lee, Choon-Ki; Yeom, Jin S
2014-05-01
Level IV, prospective case series. To investigate the influence of educational attainment on the level of pain intensity and disability in patients with lumbar spinal stenosis (LSS) and determine how coping behavior, such as catastrophizing, may mediate the association between educational attainment and clinical impairments. Educational attainment has been thought to influence disability caused by chronic painful disease, mediated by pain behavior or a coping strategy such as catastrophizing. Nevertheless, little is known about the role of educational attainment on pain intensity or disability related to LSS. A total of 155 patients who were diagnosed with degenerative LSS participated in the study. Data on detailed medical history, physical examination, and a series of questionnaires were collected, including the pain catastrophizing scale, the Oswestry Disability Index, and visual analogue pain scales for back and leg pain. For measures of socioeconomic status, educational attainment and occupation were assessed. Radiological analysis was performed using magnetic resonance images and computed tomographic scans. After adjustment of covariates, multivariate regression analysis was used to assess each component of the proposed mediation models among visual analogue pain scale for back/leg pain, Oswestry Disability Index, the level of education, occupation and pain catastrophizing scale. Mediation was also assessed by the bootstrapping technique. Educational attainment was negatively correlated with pain intensity, disability, and catastrophizing. Pain catastrophizing was also significantly correlated with disability and pain intensity for back/leg pain in the patients with LSS. In the relationship among variables, the mediation analysis with bootstrapping clearly showed the role of catastrophizing in the mediation between visual analogue pain scale for back pain/leg pain, Oswestry Disability Index, and the level of education. This study demonstrated that lower educational attainment was associated with increased pain intensity and disability in patients with LSS, an association mediated by the coping mechanism of catastrophizing.
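The bootstrapped mediation test mentioned above amounts to resampling the indirect effect a*b from the two classic mediation regressions (mediator on predictor; outcome on predictor plus mediator) and checking whether the percentile confidence interval excludes zero. A generic sketch with placeholder variables (not the study's data) follows.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Estimate the indirect effect a*b (x -> mediator m -> outcome y) and a
    95% percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)

    def indirect(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                     # path a: mediator ~ predictor
        X = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]     # path b: outcome ~ predictor + mediator
        return a * b

    estimate = indirect(np.arange(n))
    boots = np.array([indirect(rng.integers(0, n, n)) for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return estimate, (lo, hi)    # a CI excluding zero suggests mediation
```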
Predictors of verb-mediated anticipatory eye movements in the visual world.
Hintz, Florian; Meyer, Antje S; Huettig, Falk
2017-09-01
Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we investigated the influence of 5 potential predictors of this behavior: functional associations and general associations between verb and target object, as well as the listeners' production fluency, receptive vocabulary knowledge, and nonverbal intelligence. In 3 eye-tracking experiments, participants looked at sets of 4 objects and listened to sentences where the final word was predictable or not predictable (e.g., "The man peels/draws an apple"). On predictable trials only the target object, but not the distractors, were functionally and associatively related to the verb. In Experiments 1 and 2, objects were presented before the verb was heard. In Experiment 3, participants were given a short preview of the display after the verb was heard. Functional associations and receptive vocabulary were found to be important predictors of verb-mediated anticipatory eye gaze independent of the amount of contextual visual input. General word associations did not and nonverbal intelligence was only a very weak predictor of anticipatory eye movements. Participants' production fluency correlated positively with the likelihood of anticipatory eye movements when participants were given the long but not the short visual display preview. These findings fit best with a pluralistic approach to predictive language processing in which multiple mechanisms, mediating factors, and situational context dynamically interact. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Neural responses to salient visual stimuli.
Morris, J S; Friston, K J; Dolan, R J
1997-01-01
The neural mechanisms involved in the selective processing of salient or behaviourally important stimuli are uncertain. We used an aversive conditioning paradigm in human volunteer subjects to manipulate the salience of visual stimuli (emotionally expressive faces) presented during positron emission tomography (PET) neuroimaging. Increases in salience, and conflicts between the innate and acquired value of the stimuli, produced augmented activation of the pulvinar nucleus of the right thalamus. Furthermore, this pulvinar activity correlated positively with responses in structures hypothesized to mediate value in the brain: the right amygdala and basal forebrain (including the cholinergic nucleus basalis of Meynert). The results provide evidence that the pulvinar nucleus of the thalamus plays a crucial modulatory role in selective visual processing, and that changes in perceptual salience are mediated by value-dependent plasticity in pulvinar responses. PMID:9178546
ERIC Educational Resources Information Center
Berge, Sigrid Slettebakk; Thomassen, Gøril
2016-01-01
This article highlights interpreter-mediated learning situations for deaf high school students where such mediated artifacts as technical machines, models, and computer graphics are used by the teacher to illustrate his or her teaching. In these situations, the teacher's situated gestures and utterances, and the artifacts will contribute…
Longitudinal Mediators of Achievement in Mathematics and Reading in Typical and Atypical Development
Barnes, Marcia A.; Raghubar, Kimberly P.; English, Lianne; Williams, Jeffrey M.; Taylor, Heather; Landry, Susan
2014-01-01
Longitudinal studies of neurodevelopmental disorders that are diagnosed at or before birth and which are associated with specific learning difficulties at school-age provide one method for investigating developmental precursors of later-emerging academic disabilities. Spina bifida myelomeningocele (SBM) is a neurodevelopmental disorder associated with particular problems in mathematics, in contrast to well-developed word reading. Children with SBM (n = 30) and typically developing children (n = 35) were used to determine whether cognitive abilities measured at 36 and 60 months of age mediated the effect of group on mathematical and reading achievement outcomes at 8.5 and 9.5 years of age. A series of multiple mediator models showed that: visual-spatial working memory at 36 months and phonological awareness at 60 months partially mediated the effect of group on math calculations; phonological awareness partially mediated the effect of group on small addition and subtraction problems on a test of math fluency; and visual-spatial working memory mediated the effect of group on a test of math problem solving. Groups did not differ on word reading, and phonological awareness was the only mediator for reading fluency and reading comprehension. The findings are discussed with reference to theories of mathematical development and disability and with respect to both common and differing cognitive correlates of math and reading. PMID:24269579
Yang, Sunggu; Govindaiah, Gubbi; Lee, Sang-Hun; Yang, Sungchil; Cox, Charles L
2017-01-01
Thalamocortical neurons in the dorsal lateral geniculate nucleus (dLGN) transfer visual information from retina to primary visual cortex. This information is modulated by inhibitory input arising from local interneurons and thalamic reticular nucleus (TRN) neurons, leading to alterations of receptive field properties of thalamocortical neurons. Local GABAergic interneurons provide two distinct synaptic outputs: axonal (F1 terminals) and dendritic (F2 terminals) onto dLGN thalamocortical neurons. By contrast, TRN neurons provide only axonal output (F1 terminals) onto dLGN thalamocortical neurons. It is unclear if GABAA receptor-mediated currents originating from F1 and F2 terminals have different characteristics. In the present study, we examined multiple characteristics (rise time, slope, halfwidth and decay τ) of GABAA receptor-mediated miniature inhibitory postsynaptic currents (mIPSCs) originating from F1 and F2 terminals. The mIPSCs arising from F2 terminals showed slower kinetics relative to those from F1 terminals. Such differential kinetics of GABAAR-mediated responses could play an important role in temporal coding of visual signals.
An object-mediated updating account of insensitivity to transsaccadic change
Tas, A. Caglar; Moore, Cathleen M.; Hollingworth, Andrew
2012-01-01
Recent evidence has suggested that relatively precise information about the location and visual form of a saccade target object is retained across a saccade. However, this information appears to be available for report only when the target is removed briefly, so that the display is blank when the eyes land. We hypothesized that the availability of precise target information is dependent on whether a post-saccade object is mapped to the same object representation established for the presaccade target. If so, then the post-saccade features of the target overwrite the presaccade features, a process of object-mediated updating in which visual masking is governed by object continuity. In two experiments, participants' sensitivity to the spatial displacement of a saccade target was improved when that object changed surface feature properties across the saccade, consistent with the prediction of the object-mediated updating account. Transsaccadic perception appears to depend on a mechanism of object-based masking that is observed across multiple domains of vision. In addition, the results demonstrate that surface-feature continuity contributes to visual stability across saccades. PMID:23092946
NASA Astrophysics Data System (ADS)
Haq, R.; Prayitno, H.; Dzulkiflih; Sucahyo, I.; Rahmawati, E.
2018-03-01
In this article, the development of a low-cost mobile robot based on a PID controller and odometry for education is presented. The PID controller and odometry are applied to control the mobile robot's position. Two-dimensional position vectors in a Cartesian coordinate system are supplied to the robot controller as the initial and final positions. The mobile robot is based on a differential drive and a magnetic rotary encoder sensor, which measures the robot's position from the number of wheel rotations. The odometry method uses data from actuator movements to predict the change in position over time. The mobile robot is tested on reaching the final position with three different heading angles (30°, 45°, and 60°) by applying various values of the KP, KI, and KD constants.
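As a rough illustration of the scheme in this record (wheel-encoder odometry for a differential-drive robot combined with PID position control), the sketch below integrates encoder ticks into a pose estimate and turns distance and heading errors into velocity commands. The wheel geometry, encoder resolution, and gains are illustrative assumptions, not the authors' values.

```python
import math

# Illustrative robot geometry and encoder resolution (assumed values).
WHEEL_RADIUS = 0.03    # m
WHEEL_BASE = 0.15      # m, distance between the two drive wheels
TICKS_PER_REV = 360    # magnetic rotary encoder ticks per wheel revolution

class Odometry:
    """Integrate encoder ticks into an (x, y, theta) pose for a differential drive."""
    def __init__(self):
        self.x = self.y = self.theta = 0.0

    def update(self, d_ticks_left, d_ticks_right):
        dl = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
        dr = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
        ds = (dl + dr) / 2.0                 # distance travelled by the robot centre
        dtheta = (dr - dl) / WHEEL_BASE      # change in heading
        self.x += ds * math.cos(self.theta + dtheta / 2.0)
        self.y += ds * math.sin(self.theta + dtheta / 2.0)
        self.theta += dtheta

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def control_step(odom, dist_pid, head_pid, goal_x, goal_y, dt):
    """One control cycle: distance error sets forward speed, heading error sets turn rate."""
    dx, dy = goal_x - odom.x, goal_y - odom.y
    heading_error = math.atan2(dy, dx) - odom.theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))  # wrap to [-pi, pi]
    v = dist_pid.step(math.hypot(dx, dy), dt)
    w = head_pid.step(heading_error, dt)
    return v, w  # forward and angular velocity commands for the motor driver
```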
ERIC Educational Resources Information Center
Roberson, Debi; Pak, Hyensou; Hanley, J. Richard
2008-01-01
In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is…
Visual Thinking and Gender Differences in High School Calculus
ERIC Educational Resources Information Center
Haciomeroglu, Erhan Selcuk; Chicken, Eric
2012-01-01
This study sought to examine calculus students' mathematical performances and preferences for visual or analytic thinking regarding derivative and antiderivative tasks presented graphically. It extends previous studies by investigating factors mediating calculus students' mathematical performances and their preferred modes of thinking. Data were…
Dynamic Interactions for Network Visualization and Simulation
2009-03-01
projects.htm, site accessed January 5, 2009. 12. John S. Weir, Major, USAF, Mediated User-Simulator Interactive Command with Visualization (MUSIC-V). Master's...Computing Sciences in Colleges, December 2005). 14. Enrique Campos-Nanez, "nscript user manual," Department of System Engineering, University of
Language-Mediated Visual Orienting Behavior in Low and High Literates
Huettig, Falk; Singh, Niharika; Mishra, Ramesh Kumar
2011-01-01
The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, by contrast, only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2), but, in contrast to high literates, these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts. PMID:22059083
ERIC Educational Resources Information Center
Malenfant, Nathalie; Grondin, Simon; Boivin, Michel; Forget-Dubois, Nadine; Robaey, Philippe; Dionne, Ginette
2012-01-01
This study tested whether the association between temporal processing (TP) and reading is mediated by phonological awareness (PA) in a normative sample of 615 eight-year-olds. TP was measured with auditory and bimodal (visual-auditory) temporal order judgment tasks and PA with a phoneme deletion task. PA partially mediated the association between…
2013-10-01
Evaluation of Novel Polyunsaturated Fatty Acid Derived Lipid Mediators of Inflammation to Ameliorate the Deleterious Effects...studies have not been carried out as yet. Our hypothesis is that novel polyunsaturated fatty acid derived lipid mediators of inflammation, i.e., lipoxins
Levels of Syntactic Realization in Oral Reading.
ERIC Educational Resources Information Center
Brown, Eric
Two contrasting theories of reading are reviewed in light of recent research in psycholinguistics. A strictly "visual" model of fluent reading is contrasted with several mediational theories where auditory or articulatory coding is deemed necessary for comprehension. Surveying the research in visual information processing, oral reading,…
Color selectivity of the spatial congruency effect: evidence from the focused attention paradigm.
Makovac, Elena; Gerbino, Walter
2014-01-01
The multisensory response enhancement (MRE), occurring when the response to a visual target integrated with a spatially congruent sound is stronger than the response to the visual target alone, is believed to be mediated by the superior colliculus (SC) (Stein & Meredith, 1993). Here, we used a focused attention paradigm to show that the spatial congruency effect occurs with red (SC-effective) but not blue (SC-ineffective) visual stimuli, when presented with spatially congruent sounds. To isolate the chromatic component of SC-ineffective targets and to demonstrate the selectivity of the spatial congruency effect we used the random luminance modulation technique (Experiment 1) and the tritanopic technique (Experiment 2). Our results indicate that the spatial congruency effect does not require the distribution of attention over different sensory modalities and provide correlational evidence that the SC mediates the effect.
Visual Sensitivity of Deepwater Fishes in Lake Superior
Harrington, Kelly A.; Hrabik, Thomas R.; Mensinger, Allen F.
2015-01-01
The predator-prey interactions in the offshore food web of Lake Superior have been well documented, but the sensory systems mediating these interactions remain unknown. The deepwater sculpin (Myoxocephalus thompsoni), siscowet (Salvelinus namaycush siscowet), and kiyi (Coregonus kiyi) inhabit low light level environments. To investigate the potential role of vision in predator-prey interactions, electroretinography was used to determine visual sensitivity for each species. Spectral sensitivity curves revealed peak sensitivity at 525 nm for each species, which closely corresponds to the prevalent downwelling light spectrum at depth. To determine if sufficient light was available to mediate predator-prey interactions, visual sensitivity was correlated with the intensity of downwelling light in Lake Superior to construct visual depth profiles for each species. Sufficient daytime irradiance exists for visual interactions to approximately 325 m for siscowet and kiyi and 355 m for the deepwater sculpin during summer months. Under full moon conditions, sufficient irradiance exists to elicit an ERG response to light available at approximately 30 m for the siscowet and kiyi and 45 m for the deepwater sculpin. Visual interactions are therefore possible at the depths and times when these organisms overlap in the water column, indicating that vision may play a far greater role at depth in deep freshwater lakes than had been previously documented. PMID:25646781
Integrating mechanisms of visual guidance in naturalistic language production.
Coco, Moreno I; Keller, Frank
2015-05-01
Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
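One of the complexity measures named in this record, the entropy of an attentional landscape, reduces to the Shannon entropy of a normalized fixation-density map. The sketch below is a minimal version of that computation; the screen size, grid resolution, and example fixations are assumptions rather than the authors' exact procedure.

```python
# Minimal sketch: Shannon entropy of an "attentional landscape" built from fixation
# positions (higher entropy = more dispersed attention over the scene).
import numpy as np

def attention_entropy(fixations_xy, width=800, height=600, bins=(32, 24)):
    """Histogram fixations into a coarse grid, normalize to a probability map,
    and return its Shannon entropy in bits."""
    x, y = np.asarray(fixations_xy).T
    hist, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, width], [0, height]])
    p = hist / hist.sum()
    p = p[p > 0]                      # empty cells contribute 0 * log 0 := 0
    return float(-(p * np.log2(p)).sum())

# Example: tightly clustered fixations yield lower entropy than scattered ones.
rng = np.random.default_rng(1)
clustered = rng.normal([400, 300], 20, size=(50, 2))
scattered = rng.uniform([0, 0], [800, 600], size=(50, 2))
print(attention_entropy(clustered), attention_entropy(scattered))
```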
Quentin, Romain; Elkin Frankston, Seth; Vernet, Marine; Toba, Monica N.; Bartolomeo, Paolo; Chanes, Lorena; Valero-Cabré, Antoni
2016-01-01
Behavioral and electrophysiological studies in humans and non-human primates have correlated frontal high-beta activity with the orienting of endogenous attention and shown the ability of the latter function to modulate visual performance. We here combined rhythmic transcranial magnetic stimulation (TMS) and diffusion imaging to study the relation between frontal oscillatory activity and visual performance, and we associated these phenomena to a specific set of white matter pathways that in humans subtend attentional processes. High-beta rhythmic activity on the right frontal eye field (FEF) was induced with TMS and its causal effects on a contrast sensitivity function were recorded to explore its ability to improve visual detection performance across different stimulus contrast levels. Our results show that frequency-specific activity patterns engaged in the right FEF have the ability to induce a leftward shift of the psychometric function. This increase in visual performance across different levels of stimulus contrast is likely mediated by a contrast gain mechanism. Interestingly, microstructural measures of white matter connectivity suggest a strong implication of right fronto-parietal connectivity linking the FEF and the intraparietal sulcus in propagating high-beta rhythmic signals across brain networks and subtending top-down frontal influences on visual performance. PMID:25899709
Dutca, Laura M; Stasheff, Steven F; Hedberg-Buenz, Adam; Rudd, Danielle S; Batra, Nikhil; Blodi, Frederick R; Yorek, Matthew S; Yin, Terry; Shankar, Malini; Herlein, Judith A; Naidoo, Jacinth; Morlock, Lorraine; Williams, Noelle; Kardon, Randy H; Anderson, Michael G; Pieper, Andrew A; Harper, Matthew M
2014-12-02
Traumatic brain injury (TBI) frequently leads to chronic visual dysfunction. The purpose of this study was to investigate the effect of TBI on retinal ganglion cells (RGCs), and to test whether treatment with the novel neuroprotective compound P7C3-S243 could prevent in vivo functional deficits in the visual system. Blast-mediated TBI was modeled using an enclosed over-pressure blast chamber. The RGC physiology was evaluated using a multielectrode array and pattern electroretinogram (PERG). Histological analysis of RGC dendritic field and cell number were evaluated at the end of the study. Visual outcome measures also were evaluated based on treatment of mice with P7C3-S243 or vehicle control. We show that deficits in neutral position PERG after blast-mediated TBI occur in a temporally bimodal fashion, with temporary recovery 4 weeks after injury followed by chronically persistent dysfunction 12 weeks later. This later time point is associated with development of dendritic abnormalities and irreversible death of RGCs. We also demonstrate that ongoing pathologic processes during the temporary recovery latent period (including abnormalities of RGC physiology) lead to future dysfunction of the visual system. We report that modification of PERG to provocative postural tilt testing elicits changes in PERG measurements that correlate with key in vitro measures of damage: the spontaneous and light-evoked activity of RGCs. Treatment with P7C3-S243 immediately after injury and throughout the temporary recovery latent period protects mice from developing chronic visual system dysfunction. Provocative PERG testing serves as a noninvasive test in the living organism to identify early damage to the visual system, which may reflect corresponding damage in the brain that is not otherwise detectable by noninvasive means. This provides the basis for developing an earlier diagnostic test to identify patients at risk for developing chronic CNS and visual system damage after TBI, at a stage when treatments may be more effective in preventing these sequelae. In addition, treatment with the neuroprotective agent P7C3-S243 after TBI protects against visual system dysfunction. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Dutca, Laura M.; Stasheff, Steven F.; Hedberg-Buenz, Adam; Rudd, Danielle S.; Batra, Nikhil; Blodi, Frederick R.; Yorek, Matthew S.; Yin, Terry; Shankar, Malini; Herlein, Judith A.; Naidoo, Jacinth; Morlock, Lorraine; Williams, Noelle; Kardon, Randy H.; Anderson, Michael G.; Pieper, Andrew A.; Harper, Matthew M.
2014-01-01
Purpose. Traumatic brain injury (TBI) frequently leads to chronic visual dysfunction. The purpose of this study was to investigate the effect of TBI on retinal ganglion cells (RGCs), and to test whether treatment with the novel neuroprotective compound P7C3-S243 could prevent in vivo functional deficits in the visual system. Methods. Blast-mediated TBI was modeled using an enclosed over-pressure blast chamber. The RGC physiology was evaluated using a multielectrode array and pattern electroretinogram (PERG). Histological analysis of RGC dendritic field and cell number were evaluated at the end of the study. Visual outcome measures also were evaluated based on treatment of mice with P7C3-S243 or vehicle control. Results. We show that deficits in neutral position PERG after blast-mediated TBI occur in a temporally bimodal fashion, with temporary recovery 4 weeks after injury followed by chronically persistent dysfunction 12 weeks later. This later time point is associated with development of dendritic abnormalities and irreversible death of RGCs. We also demonstrate that ongoing pathologic processes during the temporary recovery latent period (including abnormalities of RGC physiology) lead to future dysfunction of the visual system. We report that modification of PERG to provocative postural tilt testing elicits changes in PERG measurements that correlate with key in vitro measures of damage: the spontaneous and light-evoked activity of RGCs. Treatment with P7C3-S243 immediately after injury and throughout the temporary recovery latent period protects mice from developing chronic visual system dysfunction. Conclusions. Provocative PERG testing serves as a noninvasive test in the living organism to identify early damage to the visual system, which may reflect corresponding damage in the brain that is not otherwise detectable by noninvasive means. This provides the basis for developing an earlier diagnostic test to identify patients at risk for developing chronic CNS and visual system damage after TBI, at a stage when treatments may be more effective in preventing these sequelae. In addition, treatment with the neuroprotective agent P7C3-S243 after TBI protects against visual system dysfunction. PMID:25468886
Visibility from roads predict the distribution of invasive fishes in agricultural ponds.
Kizuka, Toshikazu; Akasaka, Munemitsu; Kadoya, Taku; Takamura, Noriko
2014-01-01
Propagule pressure and habitat characteristics are important factors used to predict the distribution of invasive alien species. For species exhibiting strong propagule pressure because of human-mediated introduction of species, indicators of introduction potential must represent the behavioral characteristics of humans. This study examined 64 agricultural ponds to assess the visibility of ponds from surrounding roads and its value as a surrogate of propagule pressure to explain the presence and absence of two invasive fish species. A three-dimensional viewshed analysis using a geographic information system quantified the visual exposure of respective ponds to humans. Binary classification trees were developed as a function of their visibility from roads, as well as five environmental factors: river density, connectivity with upstream dam reservoirs, pond area, chlorophyll a concentration, and pond drainage. Traditional indicators of human-mediated introduction (road density and proportion of urban land-use area) were alternatively included for comparison instead of visual exposure. The presence of Bluegill (Lepomis macrochirus) was predicted by the ponds' higher visibility from roads and pond connection with upstream dam reservoirs. Results suggest that fish stocking into ponds and their dispersal from upstream sources facilitated species establishment. Largemouth bass (Micropterus salmoides) distribution was constrained by chlorophyll a concentration, suggesting their lower adaptability to various environments than that of Bluegill. Based on misclassifications from classification trees for Bluegill, pond visual exposure to roads showed greater predictive capability than traditional indicators of human-mediated introduction. Pond visibility is an effective predictor of invasive species distribution. Its wider use might improve management and mitigate further invasion. The visual exposure of recipient ecosystems to humans is important for many invasive species that spread with frequent instances of human-mediated introduction.
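The binary classification trees in this record can be reproduced in outline with scikit-learn. The sketch below uses simulated placeholder data with feature names mirroring the abstract (visibility from roads plus five environmental factors); the toy labelling rule and all values are hypothetical, not the study's data.

```python
# Minimal sketch of a classification tree predicting species presence/absence
# from pond visibility and environmental covariates (simulated data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 64
X = np.column_stack([
    rng.uniform(0, 1, n),      # visual exposure from roads (viewshed-derived index)
    rng.uniform(0, 5, n),      # river density
    rng.integers(0, 2, n),     # connectivity with upstream dam reservoir (0/1)
    rng.uniform(0.01, 2, n),   # pond area (ha)
    rng.uniform(1, 100, n),    # chlorophyll a (ug/L)
    rng.integers(0, 2, n),     # pond drainage practiced (0/1)
])
feature_names = ["visibility", "river_density", "upstream_dam", "area_ha", "chl_a", "drainage"]
# Toy rule: presence more likely where visibility is high or a dam is connected upstream.
y = ((X[:, 0] > 0.6) | (X[:, 2] == 1)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```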
Raghubar, Kimberly P.; Barnes, Marcia A.; Dennis, Maureen; Cirino, Paul T.; Taylor, Heather; Landry, Susan
2015-01-01
Objective: Math and attention are related in neurobiological and behavioral models of mathematical cognition. This study employed model-driven assessments of attention and math in children with spina bifida myelomeningocele (SBM), who have known math difficulties and specific attentional deficits, to more directly examine putative relations between attention and mathematical processing. The relation of other domain general abilities and math was also investigated. Method: Participants were 9.5-year-old children with SBM (N = 44) and typically developing children (N = 50). Participants were administered experimental exact and approximate arithmetic tasks, and standardized measures of math fluency and calculation. Cognitive measures included the Attention Network Test (ANT), and standardized measures of fine motor skills, verbal working memory (WM), and visual-spatial WM. Results: Children with SBM performed similarly to peers on exact arithmetic but more poorly on approximate and standardized arithmetic measures. On the ANT, children with SBM differed from controls on orienting attention but not alerting and executive attention. Multiple mediation models showed that: fine motor skills and verbal WM mediated the relation of group to approximate arithmetic; fine motor skills and visual-spatial WM mediated the relation of group to math fluency; and verbal and visual-spatial WM mediated the relation of group to math calculation. Attention was not a significant mediator of the effects of group for any aspect of math in this study. Conclusions: Results are discussed with reference to models of attention, WM, and mathematical cognition. PMID:26011113
Raghubar, Kimberly P; Barnes, Marcia A; Dennis, Maureen; Cirino, Paul T; Taylor, Heather; Landry, Susan
2015-11-01
Math and attention are related in neurobiological and behavioral models of mathematical cognition. This study employed model-driven assessments of attention and math in children with spina bifida myelomeningocele (SBM), who have known math difficulties and specific attentional deficits, to more directly examine putative relations between attention and mathematical processing. The relation of other domain general abilities and math was also investigated. Participants were 9.5-year-old children with SBM (n = 44) and typically developing children (n = 50). Participants were administered experimental exact and approximate arithmetic tasks, and standardized measures of math fluency and calculation. Cognitive measures included the Attention Network Test (ANT), and standardized measures of fine motor skills, verbal working memory (WM), and visual-spatial WM. Children with SBM performed similarly to peers on exact arithmetic, but more poorly on approximate and standardized arithmetic measures. On the ANT, children with SBM differed from controls on orienting attention, but not on alerting and executive attention. Multiple mediation models showed that fine motor skills and verbal WM mediated the relation of group to approximate arithmetic; fine motor skills and visual-spatial WM mediated the relation of group to math fluency; and verbal and visual-spatial WM mediated the relation of group to math calculation. Attention was not a significant mediator of the effects of group for any aspect of math in this study. Results are discussed with reference to models of attention, WM, and mathematical cognition. (c) 2015 APA, all rights reserved).
The Iconography of Universities as Institutional Narratives
ERIC Educational Resources Information Center
Drori, Gili S.; Delmestri, Giuseppe; Oberg, Achim
2016-01-01
The coming of "brand society" and the onset of mediatization spur universities to strategize their visual identity and pay particular attention to their icon. Resulting from branding initiatives, university icons are visual self-representations and material-cum-symbolic forms of organizational identity. In this work we ask: What identity…
ERIC Educational Resources Information Center
McLinden, M.
2012-01-01
This article provides a synthesis of literature pertaining to the development of haptic exploratory strategies in children who have visual impairment and intellectual disabilities. The information received through such strategies assumes particular significance for these children, given the restricted information available through their visual…
Art Historical Appropriation in a Visual Culture-Based Art Education
ERIC Educational Resources Information Center
Trafi-Prats, Laura
2009-01-01
Critical art histories have strategically contributed to the constitution of visual culture studies as an interdisciplinary field that interprets the mediations of mass-produced imagery in contemporary culture. This article advocates for an anti-historicist perspective of art historical knowledge connected to cultural analysis and centered on the…
Innovative Didactic Designs: Visual Analytics and Visual Literacy in School
ERIC Educational Resources Information Center
Stenliden, Linnéa; Nissen, Jörgen; Bodén, Ulrika
2017-01-01
In a world of massively mediated information and communication, students must learn to handle rapidly growing information volumes inside and outside school. Pedagogy attuned to processing this growing production and communication of information is needed. However, ordinary educational models often fail to support students, trialing neither…
Use Patterns of Visual Cues in Computer-Mediated Communication
ERIC Educational Resources Information Center
Bolliger, Doris U.
2009-01-01
Communication in the virtual environment can be challenging for participants because it lacks physical presence and nonverbal elements. Participants may have difficulties expressing their intentions and emotions in a primarily text-based course. Therefore, the use of visual communication elements such as pictographic and typographic marks can be…
Gallivan, Jason P; Goodale, Melvyn A
2018-01-01
In 1992, Goodale and Milner proposed a division of labor in the visual pathways of the primate cerebral cortex. According to their account, the ventral pathway, which projects to occipitotemporal cortex, constructs our visual percepts, while the dorsal pathway, which projects to posterior parietal cortex, mediates the visual control of action. Although the framing of the two-visual-system hypothesis has not been without controversy, it is clear that vision for action and vision for perception have distinct computational requirements, and significant support for the proposed neuroanatomic division has continued to emerge over the last two decades from human neuropsychology, neuroimaging, behavioral psychophysics, and monkey neurophysiology. In this chapter, we review much of this evidence, with a particular focus on recent findings from human neuroimaging and monkey neurophysiology, demonstrating a specialized role for parietal cortex in visually guided behavior. But even though the available evidence suggests that dedicated circuits mediate action and perception, in order to produce adaptive goal-directed behavior there must be a close coupling and seamless integration of information processing across these two systems. We discuss such ventral-dorsal-stream interactions and argue that the two pathways play different, yet complementary, roles in the production of skilled behavior. Copyright © 2018 Elsevier B.V. All rights reserved.
On avoiding framing effects in experienced decision makers.
Garcia-Retamero, Rocio; Dhami, Mandeep K
2013-01-01
The present study aimed to (a) demonstrate the effect of positive-negative framing on experienced criminal justice decision makers, (b) examine the debiasing effect of visually structured risk messages, and (c) investigate whether risk perceptions mediate the debiasing effect of visual aids on decision making. In two phases, 60 senior police officers estimated the accuracy of a counterterrorism technique in identifying whether a known terror suspect poses an imminent danger and decided whether they would recommend the technique to policy makers. Officers also rated their confidence in this recommendation. When information about the effectiveness of the counterterrorism technique was presented in a numerical format, officers' perceptions of accuracy and recommendation decisions were susceptible to the framing effect: The technique was perceived to be more accurate and was more likely to be recommended when its effectiveness was presented in a positive than in a negative frame. However, when the information was represented visually using icon arrays, there were no such framing effects. Finally, perceptions of accuracy mediated the debiasing effect of visual aids on recommendation decisions. We offer potential explanations for the debiasing effect of visual aids and implications for communicating risk to experienced, professional decision makers.
Memel, Molly; Ryan, Lee
2017-06-01
The ability to remember associations between previously unrelated pieces of information is often impaired in older adults (Naveh-Benjamin, 2000). Unitization, the process of creating a perceptually or semantically integrated representation that includes both items in an associative pair, attenuates age-related associative deficits (Bastin et al., 2013; Ahmad et al., 2015; Zheng et al., 2015). Compared to non-unitized pairs, unitized pairs may rely less on hippocampally-mediated binding associated with recollection, and more on familiarity-based processes mediated by perirhinal cortex (PRC) and parahippocampal cortex (PHC). While unitization of verbal materials improves associative memory in older adults, less is known about the impact of visual integration. The present study determined whether visual integration improves associative memory in older adults by minimizing the need for hippocampal (HC) recruitment and shifting encoding to non-hippocampal medial temporal structures, such as the PRC and PHC. Young and older adults were presented with a series of objects paired with naturalistic scenes while undergoing fMRI scanning, and were later given an associative memory test. Visual integration was varied by presenting the object either next to the scene (Separated condition) or visually integrated within the scene (Combined condition). Visual integration improved associative memory among young and older adults to a similar degree by increasing the hit rate for intact pairs, but without increasing false alarms for recombined pairs, suggesting enhanced recollection rather than increased reliance on familiarity. Also contrary to expectations, visual integration resulted in increased hippocampal activation in both age groups, along with increases in PRC and PHC activation. Activation in all three MTL regions predicted discrimination performance during the Separated condition in young adults, while only a marginal relationship between PRC activation and performance was observed during the Combined condition. Older adults showed less overall activation in MTL regions compared to young adults, and associative memory performance was most strongly predicted by prefrontal, rather than MTL, activation. We suggest that visual integration benefits both young and older adults similarly, and provides a special case of unitization that may be mediated by recollective, rather than familiarity-based encoding processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Barnes, Marcia A; Raghubar, Kimberly P; English, Lianne; Williams, Jeffrey M; Taylor, Heather; Landry, Susan
2014-03-01
Longitudinal studies of neurodevelopmental disorders that are diagnosed at or before birth and are associated with specific learning difficulties at school-age provide one method for investigating developmental precursors of later-emerging academic disabilities. Spina bifida myelomeningocele (SBM) is a neurodevelopmental disorder associated with particular problems in mathematics, in contrast to well-developed word reading. Children with SBM (n=30) and typically developing children (n=35) were used to determine whether cognitive abilities measured at 36 and 60 months of age mediated the effect of group on mathematical and reading achievement outcomes at 8.5 and 9.5 years of age. A series of multiple mediator models showed that: visual-spatial working memory at 36 months and phonological awareness at 60 months partially mediated the effect of group on math calculations, phonological awareness partially mediated the effect of group on small addition and subtraction problems on a test of math fluency, and visual-spatial working memory mediated the effect of group on a test of math problem solving. Groups did not differ on word reading, and phonological awareness was the only mediator for reading fluency and reading comprehension. The findings are discussed with reference to theories of mathematical development and disability and with respect to both common and differing cognitive correlates of math and reading. Copyright © 2013 Elsevier Inc. All rights reserved.
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Camera Perspective Bias in Videotaped Confessions: Evidence that Visual Attention Is a Mediator
ERIC Educational Resources Information Center
Ware, Lezlee J.; Lassiter, G. Daniel; Patterson, Stephen M.; Ransom, Michael R.
2008-01-01
Several experiments have demonstrated a "camera perspective bias" in evaluations of videotaped confessions: videotapes with the camera focused on the suspect lead to judgments of greater voluntariness than alternative presentation formats. The present research investigated potential mediators of this bias. Using eye tracking to measure visual…
The Mediated Classroom: A Systems Approach to Better University Instruction.
ERIC Educational Resources Information Center
Ranker, Richard A.
1995-01-01
Describes the design and equipment configuration of four mediated classrooms installed at a small university. Topics include audio, visual, and environmental subsystems; the teaching workstation; integration into learning, including teaching faculty how to use it and providing support services; and an instructional technology integration model.…
How Useful Is Braille Music?: A Critical Review
ERIC Educational Resources Information Center
Park, Hyu-Yong
2015-01-01
This article discusses the usefulness of Braille music as a mediational means for musicians with visual impairment (MVI). Specifically, three broad issues are the focus of this study: (1) three notions as the conceptual frameworks, namely, mediation, appropriation and mastery; (2) three criteria of the usefulness of Braille music, including…
Park, Jason C.; McAnany, J. Jason
2015-01-01
This study determined if the pupillary light reflex (PLR) driven by brief stimulus presentations can be accounted for by the product of stimulus luminance and area (i.e., corneal flux density, CFD) under conditions biased toward the rod, cone, and melanopsin pathways. Five visually normal subjects participated in the study. Stimuli consisted of 1-s short- and long-wavelength flashes that spanned a large range of luminance and angular subtense. The stimuli were presented in the central visual field in the dark (rod and melanopsin conditions) and against a rod-suppressing short-wavelength background (cone condition). Rod- and cone-mediated PLRs were measured at the maximum constriction after stimulus onset whereas the melanopsin-mediated PLR was measured 5–7 s after stimulus offset. The rod- and melanopsin-mediated PLRs were well accounted for by CFD, such that doubling the stimulus luminance had the same effect on the PLR as doubling the stimulus area. Melanopsin-mediated PLRs were elicited only by short-wavelength, large (>16°) stimuli with luminance greater than 10 cd/m2, but when present, the melanopsin-mediated PLR was well accounted for by CFD. In contrast, CFD could not account for the cone-mediated PLR because the PLR was approximately independent of stimulus size but strongly dependent on stimulus luminance. These findings highlight important differences in how stimulus luminance and size combine to govern the PLR elicited by brief flashes under rod-, cone-, and melanopsin-mediated conditions. PMID:25788707
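Corneal flux density, the quantity this record uses to summarize the stimulus, is simply the product of stimulus luminance and stimulus area. A small numeric illustration (assuming circular stimuli and arbitrary example values) shows why doubling luminance and doubling area are equivalent under a CFD account:

```python
# Minimal illustration of corneal flux density (CFD) = luminance x area.
import math

def corneal_flux_density(luminance_cd_m2, diameter_deg):
    """CFD for a circular stimulus: luminance times area in deg^2."""
    area_deg2 = math.pi * (diameter_deg / 2.0) ** 2
    return luminance_cd_m2 * area_deg2

# Doubling luminance or doubling area changes CFD by the same factor, which is
# why responses that follow CFD (rod- and melanopsin-mediated PLRs above)
# cannot distinguish the two manipulations.
base = corneal_flux_density(10.0, 8.0)
print(corneal_flux_density(20.0, 8.0) / base)                 # 2.0 (luminance doubled)
print(corneal_flux_density(10.0, 8.0 * math.sqrt(2)) / base)  # 2.0 (area doubled)
```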
Voorhees, Jaymie R.; Genova, Rachel M.; Britt, Jeremiah K.; McDaniel, Latisha; Harper, Matthew M.
2016-01-01
Abstract Axonal degeneration is a prominent feature of many forms of neurodegeneration, and also an early event in blast-mediated traumatic brain injury (TBI), the signature injury of soldiers in Iraq and Afghanistan. It is not known, however, whether this axonal degeneration is what drives development of subsequent neurologic deficits after the injury. The Wallerian degeneration slow strain (WldS) of mice is resistant to some forms of axonal degeneration because of a triplicated fusion gene encoding the first 70 amino acids of Ufd2a, a ubiquitin-chain assembly factor, that is linked to the complete coding sequence of nicotinamide mononucleotide adenylyltransferase 1 (NMAT1). Here, we demonstrate that resistance of WldS mice to axonal degeneration after blast-mediated TBI is associated with preserved function in hippocampal-dependent spatial memory, cerebellar-dependent motor balance, and retinal and optic nerve–dependent visual function. Thus, early axonal degeneration is likely a critical driver of subsequent neurobehavioral complications of blast-mediated TBI. Future therapeutic strategies targeted specifically at mitigating axonal degeneration may provide a uniquely beneficial approach to treating patients suffering from the effects of blast-mediated TBI. PMID:27822499
Yin, Terry C; Voorhees, Jaymie R; Genova, Rachel M; Davis, Kevin C; Madison, Ashley M; Britt, Jeremiah K; Cintrón-Pérez, Coral J; McDaniel, Latisha; Harper, Matthew M; Pieper, Andrew A
2016-01-01
Axonal degeneration is a prominent feature of many forms of neurodegeneration, and also an early event in blast-mediated traumatic brain injury (TBI), the signature injury of soldiers in Iraq and Afghanistan. It is not known, however, whether this axonal degeneration is what drives development of subsequent neurologic deficits after the injury. The Wallerian degeneration slow strain ( WldS ) of mice is resistant to some forms of axonal degeneration because of a triplicated fusion gene encoding the first 70 amino acids of Ufd2a, a ubiquitin-chain assembly factor, that is linked to the complete coding sequence of nicotinamide mononucleotide adenylyltransferase 1 (NMAT1). Here, we demonstrate that resistance of WldS mice to axonal degeneration after blast-mediated TBI is associated with preserved function in hippocampal-dependent spatial memory, cerebellar-dependent motor balance, and retinal and optic nerve-dependent visual function. Thus, early axonal degeneration is likely a critical driver of subsequent neurobehavioral complications of blast-mediated TBI. Future therapeutic strategies targeted specifically at mitigating axonal degeneration may provide a uniquely beneficial approach to treating patients suffering from the effects of blast-mediated TBI.
Huettig, Falk; Altmann, Gerry T M
2011-01-01
Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
Optic flow odometry operates independently of stride integration in carried ants.
Pfeffer, Sarah E; Wittlinger, Matthias
2016-09-09
Cataglyphis desert ants are impressive navigators. When the foragers roam the desert, they employ path integration. For these ants, distance estimation is one key challenge. Distance information was thought to be provided by optic flow (OF), that is, image motion experienced during travel, but this idea was abandoned when stride integration was discovered as an odometer mechanism in ants. We show that ants transported by nest mates are capable of measuring travel distance exclusively by the use of OF cues. Furthermore, we demonstrate that the information gained from the optic flowmeter cannot be transferred to the stride integrator. Our results suggest a dual information channel that allows the ants to measure distances by strides and OF cues, although both systems operate independently and in a redundant manner. Copyright © 2016, American Association for the Advancement of Science.
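As a loose computational analogue of the optic-flow odometer studied here, the sketch below accumulates the median frame-to-frame image translation from dense optical flow as a proxy for distance travelled. The video path is hypothetical, and converting pixels to metres requires an externally supplied ground-plane scale (e.g., camera height), which is assumed here rather than estimated.

```python
# Minimal optic-flow "odometer": sum frame-to-frame image translation from
# dense optical flow (Farneback) as a distance proxy.
import cv2
import numpy as np

def optic_flow_distance(video_path, metres_per_pixel=1.0):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return 0.0
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    total_px = 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Median flow vector approximates the dominant ground-plane translation.
        median_flow = np.median(flow.reshape(-1, 2), axis=0)
        total_px += float(np.linalg.norm(median_flow))
        prev_gray = gray
    cap.release()
    return total_px * metres_per_pixel
```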
Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas
2013-08-01
This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the pose, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system and experimental results are presented that prove the combined localization and control system performs with high accuracy.
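A minimal sketch of the homography-based pose estimation at the core of this record is shown below, using OpenCV to estimate and decompose a Euclidean homography from matched planar feature points. The intrinsic matrix and the point correspondences are hypothetical, and the translation is recovered only up to the plane depth, which is why the paper adds a depth estimation system.

```python
# Minimal sketch: relative rotation and scaled translation from matched points
# on a planar scene via the Euclidean homography.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])      # assumed pinhole intrinsics

def relative_pose_from_homography(pts_prev, pts_curr):
    """pts_* are Nx2 float arrays of matched pixel coordinates from two views of a plane."""
    H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    # Decomposition yields up to four (R, t/d, n) candidate solutions; picking the
    # physically valid one (positive depths, consistent plane normal) is left out here.
    num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return H, rotations, translations, normals
```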
Teacher Guidance to Mediate Student Inquiry through Interactive Dynamic Visualizations
ERIC Educational Resources Information Center
Chang, Hsin-Yi
2013-01-01
The purpose of this study is to investigate how three teachers guided their students to learn science using interactive dynamic visualizations incorporated in an inquiry digital unit. The results show that the teachers' guidance varied in frequency, occasion, and content type. Each teacher demonstrated a different instructional approach in…
ERIC Educational Resources Information Center
Ram-Tsur, Ronit; Faust, Miriam; Zivotofsky, Ari Z.
2008-01-01
The present study investigates the performance of persons with reading disabilities (PRD) on a variety of sequential visual-comparison tasks that have different working-memory requirements. In addition, mediating relationships between the sequential comparison process and attention and memory skills were looked for. Our findings suggest that PRD…
ERIC Educational Resources Information Center
Huettig, Falk; McQueen, James M.
2007-01-01
Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with "beker," "beaker," for example, the display contained phonological (a beaver, "bever"), shape (a…
Readers Building Fictional Worlds: Visual Representations, Poetry and Cognition
ERIC Educational Resources Information Center
Giovanelli, Marcello
2017-01-01
This article explores the complex nature of the literature classroom by drawing on the cognitive linguistic framework "Text World Theory" to examine the teacher's role as facilitator and mediator of reading. Specifically, the article looks at how one teacher used visual representations as a way of allowing students to engage in a more…
Visual artistic creativity and the brain.
Heilman, Kenneth M; Acosta, Lealani Mae
2013-01-01
Creativity is the development of a new or novel understanding--insight that leads to the expression of orderly relationships (e.g., finding and revealing the thread that unites). Visual artistic creativity plays an important role in the quality of human lives, and the goal of this chapter is to describe some of the brain mechanisms that may be important in visual artistic creativity. The initial major means of learning how the brain mediates any activity is to understand the anatomy and physiology that may support these processes. A further understanding of specific cognitive activities and behaviors may be gained by studying patients who have diseases of the brain and how these diseases influence these functions. Physiological recording such as electroencephalography and brain imaging techniques such as PET and fMRI have also allowed us to gain a better understanding of the brain mechanisms important in visual creativity. In this chapter, we discuss anatomic and physiological studies, as well as neuropsychological studies of healthy artists and patients with neurological disease that have helped us gain some insight into the brain mechanisms that mediate artistic creativity. © 2013 Elsevier B.V. All rights reserved.
Top-down alpha oscillatory network interactions during visuospatial attention orienting.
Doesburg, Sam M; Bedo, Nicolas; Ward, Lawrence M
2016-05-15
Neuroimaging and lesion studies indicate that visual attention is controlled by a distributed network of brain areas. The covert control of visuospatial attention has also been associated with retinotopic modulation of alpha-band oscillations within early visual cortex, which are thought to underlie inhibition of ignored areas of visual space. The relation between distributed networks mediating attention control and more focal oscillatory mechanisms, however, remains unclear. The present study evaluated the hypothesis that alpha-band, directed, network interactions within the attention control network are systematically modulated by the locus of visuospatial attention. We localized brain areas involved in visuospatial attention orienting using magnetoencephalographic (MEG) imaging and investigated alpha-band Granger-causal interactions among activated regions using narrow-band transfer entropy. The deployment of attention to one side of visual space was indexed by lateralization of alpha power changes between about 400ms and 700ms post-cue onset. The changes in alpha power were associated, in the same time period, with lateralization of anterior-to-posterior information flow in the alpha-band from various brain areas involved in attention control, including the anterior cingulate cortex, left middle and inferior frontal gyri, left superior temporal gyrus, and right insula, and inferior parietal lobule, to early visual areas. We interpreted these results to indicate that distributed network interactions mediated by alpha oscillations exert top-down influences on early visual cortex to modulate inhibition of processing for ignored areas of visual space. Copyright © 2016. Published by Elsevier Inc.
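The alpha-power lateralization used in this record to index covert attention can be sketched as: band-pass 8-12 Hz, take the Hilbert envelope, average power over the post-cue window, and contrast left versus right posterior sensors. The sampling rate, window, and simulated signals below are illustrative assumptions, not the study's MEG pipeline.

```python
# Minimal sketch of an alpha-band power lateralization index.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500.0  # Hz, assumed sampling rate

def alpha_power(signal, fs=FS, band=(8.0, 12.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, signal)))
    return envelope ** 2

def lateralization_index(left_chan, right_chan, t, window=(0.4, 0.7)):
    """(right - left) / (right + left) alpha power in the post-cue window."""
    mask = (t >= window[0]) & (t <= window[1])
    pl = alpha_power(left_chan)[mask].mean()
    pr = alpha_power(right_chan)[mask].mean()
    return (pr - pl) / (pr + pl)

# Toy example: stronger alpha over the right hemisphere gives a positive index.
t = np.arange(0, 1.0, 1 / FS)
rng = np.random.default_rng(0)
left = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
right = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
print(lateralization_index(left, right, t))
```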
Toomey, Matthew B.; McGraw, Kevin J.
2011-01-01
Background: For many bird species, vision is the primary sensory modality used to locate and assess food items. The health and spectral sensitivities of the avian visual system are influenced by diet-derived carotenoid pigments that accumulate in the retina. Among wild House Finches (Carpodacus mexicanus), we have found that retinal carotenoid accumulation varies significantly among individuals and is related to dietary carotenoid intake. If diet-induced changes in retinal carotenoid accumulation alter spectral sensitivity, then they have the potential to affect visually mediated foraging performance. Methodology/Principal Findings: In two experiments, we measured foraging performance of house finches with dietarily manipulated retinal carotenoid levels. We tested each bird's ability to extract visually contrasting food items from a matrix of inedible distracters under high-contrast (full) and dimmer low-contrast (red-filtered) lighting conditions. In experiment one, zeaxanthin-supplemented birds had significantly increased retinal carotenoid levels, but declined in foraging performance in the high-contrast condition relative to astaxanthin-supplemented birds that showed no change in retinal carotenoid accumulation. In experiments one and two combined, we found that retinal carotenoid concentrations predicted relative foraging performance in the low- vs. high-contrast light conditions in a curvilinear pattern. Performance was positively correlated with retinal carotenoid accumulation among birds with low to medium levels of accumulation (∼0.5–1.5 µg/retina), but declined among birds with very high levels (>2.0 µg/retina). Conclusion/Significance: Our results suggest that carotenoid-mediated spectral filtering enhances color discrimination, but that this improvement is traded off against a reduction in sensitivity that can compromise visual discrimination. Thus, retinal carotenoid levels may be optimized to meet the visual demands of specific behavioral tasks and light environments. PMID:21747917
Top-Down Beta Enhances Bottom-Up Gamma
Thompson, William H.
2017-01-01
Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection. SIGNIFICANCE STATEMENT Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma-band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma band influences via cross-frequency interaction. We evaluate this hypothesis determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma frequency influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697
The Role of Visual Processing Speed in Reading Speed Development
Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane
2013-01-01
A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known about the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span) predicts single-word reading speed in both normally reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple-letter report task in eight- and nine-year-old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short-term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short-term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children. PMID:23593117
Freud, Erez; Ganel, Tzvi; Avidan, Galia; Gilaie-Dotan, Sharon
2016-03-01
According to the two visual systems model, the cortical visual system is segregated into a ventral pathway mediating object recognition, and a dorsal pathway mediating visuomotor control. In the present study we examined whether the visual control of action could develop normally even when visual perceptual abilities are compromised from early childhood onward. Using his fingers, LG, an individual with a rare developmental visual object agnosia, manually estimated (perceptual condition) the width of blocks that varied in width and length (but not in overall size), or simply picked them up across their width (grasping condition). LG's perceptual sensitivity to target width was profoundly impaired in the manual estimation task compared to matched controls. In contrast, the sensitivity to object shape during grasping, as measured by maximum grip aperture (MGA), the time to reach the MGA, the reaction time and the total movement time were all normal in LG. Further analysis, however, revealed that LG's sensitivity to object shape during grasping emerged at a later time stage during the movement compared to controls. Taken together, these results demonstrate a dissociation between action and perception of object shape, and also point to a distinction between different stages of the grasping movement, namely planning versus online control. Moreover, the present study implies that visuomotor abilities can develop normally even when perceptual abilities developed in a profoundly impaired fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shabbott, Britne A; Sainburg, Robert L
2010-05-01
Visuomotor adaptation is mediated by errors between intended and sensory-detected arm positions. However, it is not clear whether visual-based errors that are shown during the course of motion lead to qualitatively different or more efficient adaptation than errors shown after movement. For instance, continuous visual feedback mediates online error corrections, which may facilitate or inhibit the adaptation process. We addressed this question by manipulating the timing of visual error information and task instructions during a visuomotor adaptation task. Subjects were exposed to a visuomotor rotation, during which they received continuous visual feedback (CF) of hand position with instructions to correct or not correct online errors, or knowledge-of-results (KR), provided as a static hand-path at the end of each trial. Our results showed that all groups improved performance with practice, and that online error corrections were inconsequential to the adaptation process. However, in contrast to the CF groups, the KR group showed relatively small reductions in mean error with practice, increased inter-trial variability during rotation exposure, and more limited generalization across target distances and workspace. Further, although the KR group showed improved performance with practice, after-effects were minimal when the rotation was removed. These findings suggest that simultaneous visual and proprioceptive information is critical in altering neural representations of visuomotor maps, although delayed error information may elicit compensatory strategies to offset perturbations.
Nature and Nurture: the complex genetics of myopia and refractive error
Wojciechowski, Robert
2010-01-01
The refractive errors, myopia and hyperopia, are optical defects of the visual system that can cause blurred vision. Uncorrected refractive errors are the most common causes of visual impairment worldwide. It is estimated that 2.5 billion people will be affected by myopia alone within the next decade. Experimental, epidemiological and clinical research has shown that refractive development is influenced by both environmental and genetic factors. Animal models have demonstrated that eye growth and refractive maturation during infancy are tightly regulated by visually-guided mechanisms. Observational data in human populations provide compelling evidence that environmental influences and individual behavioral factors play crucial roles in myopia susceptibility. Nevertheless, the majority of the variance of refractive error within populations is thought to be due to hereditary factors. Genetic linkage studies have mapped two dozen loci, while association studies have implicated more than 25 different genes in refractive variation. Many of these genes are involved in common biological pathways known to mediate extracellular matrix composition and regulate connective tissue remodeling. Other associated genomic regions suggest novel mechanisms in the etiology of human myopia, such as mitochondrial-mediated cell death or photoreceptor-mediated visual signal transmission. Taken together, observational and experimental studies have revealed the complex nature of human refractive variation, which likely involves variants in several genes and functional pathways. Multiway interactions between genes and/or environmental factors may also be important in determining individual risks of myopia, and may help explain the complex pattern of refractive error in human populations. PMID:21155761
Construction of Shared Knowledge in Face-to-Face and Computer-Mediated Cooperation.
ERIC Educational Resources Information Center
Fischer, Frank; Mandl, Heinz
This study examined how learners constructed and used shared knowledge in computer-mediated and face-to-face cooperative learning, investigating how to facilitate the construction and use of shared knowledge through dynamic visualization. Forty-eight college students were separated into dyads and assigned to one of four experimental conditions…
Mediating Hillary Rodham Clinton: Television News Practices and Image-Making in the Postmodern Age.
ERIC Educational Resources Information Center
Parry-Giles, Shawn J.
2000-01-01
Reviews stereotypes of Hillary Rodham Clinton (HRC) in television news. Investigates the significance to image-making of stereotypes, visual deconstruction and reconstruction, close-up shots and spectator positioning, as well as news recycling and repetition. Argues that such strategies reify a mediated collective memory of HRC which is…
Depressed Mood Mediates Decline in Cognitive Processing Speed in Caregivers
ERIC Educational Resources Information Center
Vitaliano, Peter P.; Zhang, Jianping; Young, Heather M.; Caswell, Lisa W.; Scanlan, James M.; Echeverria, Diana
2009-01-01
Purpose: Very few studies have examined cognitive decline in caregivers versus noncaregivers, and only 1 study has examined mediators of such decline. We evaluated the relationship between caregiver status and decline on the digit symbol test (DST; a measure of processing speed, attention, cognitive-motor translation, and visual scanning) and…
Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.
ERIC Educational Resources Information Center
Acker, Stephen R.
1986-01-01
This study examined an application of a human machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)
Oldenburg, Catherine E.; Lalitha, Prajna; Srinivasan, Muthiah; Manikandan, Palanisamy; Bharathi, M. Jayahar; Rajaraman, Revathi; Ravindran, Meenakshi; Mascarenhas, Jeena; Nardone, Natalie; Ray, Kathryn J.; Glidden, David V.; Acharya, Nisha R.; Lietman, Thomas M.
2013-01-01
Purpose. Bacterial keratitis is a sight-threatening infection of the cornea that is one of the leading causes of blindness globally. In this report, we analyze the role of moxifloxacin susceptibility in the relationship between causative organisms and clinical outcome in bacterial keratitis. Methods. A mediation analysis is used to assess the role of moxifloxacin susceptibility in the relationship between causative organisms and clinical outcome in bacterial keratitis using data collected in a randomized, controlled trial. Results. In the Steroids for Corneal Ulcers Trial (SCUT), 500 corneal infections were treated with topical moxifloxacin. The outcome of 3-week best spectacle-corrected visual acuity was significantly associated with an organism (Streptococcus pneumoniae, Pseudomonas aeruginosa, etc., P = 0.008). An indirect-effects mediation model suggests that MIC accounted for approximately 13% (95% confidence interval, 3%–24%, P = 0.015) of the effect of the organism on 3-week visual acuity. Conclusions. Moxifloxacin susceptibility mediates the relationship between causative organisms and clinical outcome in bacterial keratitis, and is likely on the causal pathway between the organism and outcome. (ClinicalTrials.gov number, NCT00324168.) PMID:23385795
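For readers unfamiliar with indirect-effects mediation, the proportion-mediated figure quoted above (about 13%) comes from comparing the indirect effect (exposure to mediator to outcome) with the total effect. The following is only a generic sketch of the product-of-coefficients estimate with a bootstrap confidence interval, assuming simple linear models, a continuous outcome, and 1-D numpy arrays; the variable names are placeholders and this is not the model actually fitted in SCUT.

```python
import numpy as np

def indirect_and_total_effect(x, m, y):
    """Product-of-coefficients mediation for continuous mediator m and outcome y.

    a: effect of exposure x on mediator m.
    b: effect of m on outcome y, controlling for x.
    Returns (indirect effect a*b, total effect of x on y).
    """
    X_xm = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X_xm, m, rcond=None)[0][1]
    X_xmy = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X_xmy, y, rcond=None)[0][2]
    total = np.linalg.lstsq(X_xm, y, rcond=None)[0][1]
    return a * b, total

def bootstrap_indirect_ci(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = [indirect_and_total_effect(x[idx], m[idx], y[idx])[0]
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(draws, [2.5, 97.5])

# Hypothetical usage: proportion mediated = indirect / total, the quantity
# analogous to the ~13% reported for MIC in the abstract.
```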
Global-local visual biases correspond with visual-spatial orientation.
Basso, Michael R; Lowery, Natasha
2004-02-01
Within the past decade, numerous investigations have demonstrated reliable associations of global-local visual processing biases with right and left hemisphere function, respectively (cf. Van Kleeck, 1989). Yet the relevance of these biases to other cognitive functions is not well understood. Towards this end, the present research examined the relationship between global-local visual biases and perception of visual-spatial orientation. Twenty-six women and 23 men completed a global-local judgment task (Kimchi and Palmer, 1982) and the Judgment of Line Orientation Test (JLO; Benton, Sivan, Hamsher, Varney, and Spreen, 1994), a measure of visual-spatial orientation. As expected, men had better performance on JLO. Extending previous findings, global biases were related to better visual-spatial acuity on JLO. The findings suggest that global-local biases and visual-spatial orientation may share underlying cerebral mechanisms. Implications of these findings for other visually mediated cognitive outcomes are discussed.
Task-Selective Memory Effects for Successfully Implemented Encoding Strategies
Leshikar, Eric D.; Duarte, Audrey; Hertzog, Christopher
2012-01-01
Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired associate tasks. Two such effective encoding strategies–visual imagery and sentence generation–facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study. PMID:22693593
Action Planning Mediates Guidance of Visual Attention from Working Memory
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2015-01-01
Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interferences. PMID:26171241
Spatiotemporal characteristics of retinal response to network-mediated photovoltaic stimulation.
Ho, Elton; Smith, Richard; Goetz, Georges; Lei, Xin; Galambos, Ludwig; Kamins, Theodore I; Harris, James; Mathieson, Keith; Palanker, Daniel; Sher, Alexander
2018-02-01
Subretinal prostheses aim at restoring sight to patients blinded by photoreceptor degeneration using electrical activation of the surviving inner retinal neurons. Today, such implants deliver visual information with low-frequency stimulation, resulting in discontinuous visual percepts. We measured retinal responses to complex visual stimuli delivered at video rate via a photovoltaic subretinal implant and by visible light. Using a multielectrode array to record from retinal ganglion cells (RGCs) in the healthy and degenerated rat retina ex vivo, we estimated their spatiotemporal properties from the spike-triggered average responses to photovoltaic binary white noise stimulus with 70-μm pixel size at 20-Hz frame rate. The average photovoltaic receptive field size was 194 ± 3 μm (mean ± SE), similar to that of visual responses (221 ± 4 μm), but response latency was significantly shorter with photovoltaic stimulation. Both visual and photovoltaic receptive fields had an opposing center-surround structure. In the healthy retina, ON RGCs had photovoltaic OFF responses, and vice versa. This reversal is consistent with depolarization of photoreceptors by electrical pulses, as opposed to their hyperpolarization under increasing light, although alternative mechanisms cannot be excluded. In degenerate retina, both ON and OFF photovoltaic responses were observed, but in the absence of visual responses, it is not clear what functional RGC types they correspond to. Degenerate retina maintained the antagonistic center-surround organization of receptive fields. These fast and spatially localized network-mediated ON and OFF responses to subretinal stimulation via photovoltaic pixels with local return electrodes raise confidence in the possibility of providing more functional prosthetic vision. NEW & NOTEWORTHY Retinal prostheses currently in clinical use have struggled to deliver visual information at naturalistic frequencies, resulting in discontinuous percepts. We demonstrate modulation of the retinal ganglion cells (RGC) activity using complex spatiotemporal stimuli delivered via subretinal photovoltaic implant at 20 Hz in healthy and in degenerate retina. RGCs exhibit fast and localized ON and OFF network-mediated responses, with antagonistic center-surround organization of their receptive fields.
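The spike-triggered average (STA) referred to above is a standard way to recover a linear spatiotemporal receptive field from responses to white noise. The sketch below shows the generic computation only, assuming the stimulus has been arranged as an array of frames and the spikes binned at the frame rate; the array names and the temporal depth are placeholders rather than the authors' analysis code.

```python
import numpy as np

def spike_triggered_average(stimulus, spike_counts, depth=10):
    """Average the stimulus history preceding each spike.

    stimulus: (n_frames, n_pixels) array, e.g. binary white-noise frames.
    spike_counts: (n_frames,) spike counts binned at the stimulus frame rate.
    depth: number of preceding frames to keep (temporal extent of the STA).
    Returns an array of shape (depth, n_pixels): rows run from depth frames
    before a spike up to the frame in which the spike occurred.
    """
    n_frames, n_pixels = stimulus.shape
    sta = np.zeros((depth, n_pixels))
    total_spikes = 0
    for t in range(depth, n_frames):
        k = spike_counts[t]
        if k > 0:
            sta += k * stimulus[t - depth:t]   # weight history by spike count
            total_spikes += k
    return sta / total_spikes if total_spikes else sta

# The receptive-field size and response latency reported in the abstract
# correspond to the spatial extent and peak time slice of such an STA.
```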
The Effect of Selected Cinemagraphic Elements on Audience Perception of Mediated Concepts.
ERIC Educational Resources Information Center
Orr, Quinn
This study is to explore cinemagraphic and visual elements and their inter-relations through the reinterpretation of previous research and literature. The cinemagraphic elements of visual images (camera angle, camera motion, subject motion, color, and lighting) work as a language requiring a proper grammar for the messages to be conveyed in their…
ERIC Educational Resources Information Center
Stiller, Klaus D.; Freitag, Annika; Zinnbauer, Peter; Freitag, Christian
2009-01-01
"Present text accompanying pictures aurally to promote learning" is a well established principle of instructional design. But recently, it was shown that under certain conditions visual texts can be preferable. Instructional pacing seems to be one of these conditions that mediate effects. Especially, enabling learners to pace an…
Teaching the Meaning of Words to Children with Visual Impairments
ERIC Educational Resources Information Center
Vervloed, Mathijs P. J.; Loijens, Nancy E. A.; Waller, Sarah E.
2014-01-01
In the report presented here, the authors describe a pilot intervention study that was intended to teach children with visual impairments the meaning of far-away words, and that used their mothers as mediators. The aim was to teach both labels and deep word knowledge, which is the comprehension of the full meaning of words, illustrated through…
ERIC Educational Resources Information Center
Koh, Hwan Cui; Milne, Elizabeth; Dobkins, Karen
2010-01-01
The magnocellular (M) pathway hypothesis proposes that impaired visual motion perception observed in individuals with Autism Spectrum Disorders (ASD) might be mediated by atypical functioning of the subcortical M pathway, as this pathway provides the bulk of visual input to cortical motion detectors. To test this hypothesis, we measured luminance…
A Culture in Transition: Poor Reading and Writing Ability among Children in South African Townships.
ERIC Educational Resources Information Center
Pretorius, E.; Naude, H.
2002-01-01
This study examined factors contributing to poor literacy and numeracy development among black South African children ages 5.5 to 7 years. Findings pointed to a conglomerate of factors, namely inadequate visual-motor integration, poor visual analysis and synthesis, poor fine motor development, and inadequate exposure to mediated reading and…
Cookies for Peace and a Pedagogy of Corporeal Generosity
ERIC Educational Resources Information Center
Springgay, Stephanie
2009-01-01
This article examines visual culture created by secondary students as part of a larger research study that investigates how youth understand and mediate body knowledge. During a six-month period students created a number of visual artworks, using a diversity of material explorations as a means to think through the body as a process of exchange and…
Examining Art and Technology: Determining Why Craft-Making Is Fundamental to Outdoor Education
ERIC Educational Resources Information Center
MacEachren, Zabe
2005-01-01
In this paper, I discuss issues concerning the understanding of the world that pedagogical practices of visual art and technology raise. The intent is to challenge interpretations that experiences of visual art and mediated technology can promote a sense of inseparability between concepts of human and more-than-human awareness. The praxis of…
ERIC Educational Resources Information Center
Wills, Katherine V.
Student/initiates into visual, textual, and theoretical Web-based self-constructions do not necessarily wish to be gender, race, class, or ethnicity invisible or neutral. To best teach writing and critical thinking according to her departmental objectives, one instructor felt she had to broach student's assumptions about themselves as they…
Donald E. Zimmerman; Carol Akerelrea; Jane Kapler Smith; Garrett J. O'Keefe
2006-01-01
Natural-resource managers have used a variety of computer-mediated presentation methods to communicate management practices to diverse publics. We explored the effects of visualizing and animating predictions from mathematical models in computerized presentations explaining forest succession (forest growth and change through time), fire behavior, and management options...
Norris, Rebecca L; Bailey, Rachel L; Bolls, Paul D; Wise, Kevin R
2012-01-01
This experiment explored how the emotional tone and visual complexity of direct-to-consumer (DTC) drug advertisements affect the encoding and storage of specific risk and benefit statements about each of the drugs in question. Results are interpreted under the limited capacity model of motivated mediated message processing framework. Findings suggest that DTC drug ads should be pleasantly toned and high in visual complexity in order to maximize encoding and storage of risk and benefit information.
How the blind "see" Braille: lessons from functional magnetic resonance imaging.
Sadato, Norihiro
2005-12-01
What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.
Age Mediation of Frontoparietal Activation during Visual Feature Search
Madden, David J.; Parks, Emily L.; Davis, Simon W.; Diaz, Michele T.; Potter, Guy G.; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto
2014-01-01
Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19 – 29 years of age) and 21 older adults (60 – 87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. PMID:25102420
Neuronal basis of covert spatial attention in the frontal eye field.
Thompson, Kirk G; Biscoe, Keri L; Sato, Takashi R
2005-10-12
The influential "premotor theory of attention" proposes that developing oculomotor commands mediate covert visual spatial attention. A likely source of this attentional bias is the frontal eye field (FEF), an area of the frontal cortex involved in converting visual information into saccade commands. We investigated the link between FEF activity and covert spatial attention by recording from FEF visual and saccade-related neurons in monkeys performing covert visual search tasks without eye movements. Here we show that the source of attention signals in the FEF is enhanced activity of visually responsive neurons. At the time attention is allocated to the visual search target, nonvisually responsive saccade-related movement neurons are inhibited. Therefore, in the FEF, spatial attention signals are independent of explicit saccade command signals. We propose that spatially selective activity in FEF visually responsive neurons corresponds to the mental spotlight of attention via modulation of ongoing visual processing.
Affordance of Braille Music as a Mediational Means: Significance and Limitations
ERIC Educational Resources Information Center
Park, Hyu-Yong; Kim, Mi-Jung
2014-01-01
Affordance refers to the properties or designs of a thing that offer the function of the thing. This paper discusses the affordance of Braille music in terms of three notions: mediational means, mastery and appropriation, and focuses on answering the following three questions: (i) How do musicians with visual impairments (MVI) perceive Braille…
Learning-dependent plasticity with and without training in the human brain.
Zhang, Jiaxiang; Kourtzi, Zoe
2010-07-27
Long-term experience through development and evolution and shorter-term training in adulthood have both been suggested to contribute to the optimization of visual functions that mediate our ability to interpret complex scenes. However, the brain plasticity mechanisms that mediate the detection of objects in cluttered scenes remain largely unknown. Here, we combine behavioral and functional MRI (fMRI) measurements to investigate the human-brain mechanisms that mediate our ability to learn statistical regularities and detect targets in clutter. We show two different routes to visual learning in clutter with discrete brain plasticity signatures. Specifically, opportunistic learning of regularities typical in natural contours (i.e., collinearity) can occur simply through frequent exposure, generalize across untrained stimulus features, and shape processing in occipitotemporal regions implicated in the representation of global forms. In contrast, learning to integrate discontinuities (i.e., elements orthogonal to contour paths) requires task-specific training (bootstrap-based learning), is stimulus-dependent, and enhances processing in intraparietal regions implicated in attention-gated learning. We propose that long-term experience with statistical regularities may facilitate opportunistic learning of collinear contours, whereas learning to integrate discontinuities entails bootstrap-based training for the detection of contours in clutter. These findings provide insights in understanding how long-term experience and short-term training interact to shape the optimization of visual recognition processes.
Visuomotor Transformations Underlying Hunting Behavior in Zebrafish
Bianco, Isaac H.; Engert, Florian
2015-01-01
Summary Visuomotor circuits filter visual information and determine whether or not to engage downstream motor modules to produce behavioral outputs. However, the circuit mechanisms that mediate and link perception of salient stimuli to execution of an adaptive response are poorly understood. We combined a virtual hunting assay for tethered larval zebrafish with two-photon functional calcium imaging to simultaneously monitor neuronal activity in the optic tectum during naturalistic behavior. Hunting responses showed mixed selectivity for combinations of visual features, specifically stimulus size, speed, and contrast polarity. We identified a subset of tectal neurons with similar highly selective tuning, which show non-linear mixed selectivity for visual features and are likely to mediate the perceptual recognition of prey. By comparing neural dynamics in the optic tectum during response versus non-response trials, we discovered premotor population activity that specifically preceded initiation of hunting behavior and exhibited anatomical localization that correlated with motor variables. In summary, the optic tectum contains non-linear mixed selectivity neurons that are likely to mediate reliable detection of ethologically relevant sensory stimuli. Recruitment of small tectal assemblies appears to link perception to action by providing the premotor commands that release hunting responses. These findings allow us to propose a model circuit for the visuomotor transformations underlying a natural behavior. PMID:25754638
Isoe, Yasuko; Konagaya, Yumi; Yokoi, Saori; Kubo, Takeo; Takeuchi, Hideaki
2016-06-01
Adult medaka fish (Oryzias latipes) exhibit complex social behaviors that depend mainly on visual cues from conspecifics. The ontogeny of visually-mediated social behaviors from larval/juvenile to adult medaka fish, however, is unknown. In the present study, we established a simple behavioral paradigm to evaluate the swimming proximity to conspecifics based on visual cues in an inter-individual interaction of two medaka fish throughout life. When two fish were placed separately in a cylindrical tank with a concentric transparent wall, the two fish maintained close proximity to each other. A normal fish inside the tank maintained proximity to an optic nerve-cut fish outside of the tank, while the converse was not true. This behavioral paradigm enabled us to quantify visually-induced motivation of a single fish inside the tank. The proximity was detected from larval/juvenile to adult fish. Larval fish, however, maintained close proximity not only to conspecifics, but also to heterospecifics. As the growth stage increased, the degree of proximity to heterospecifics decreased, suggesting that shoaling preferences toward conspecifics and/or visual ability to recognize conspecifics is refined and established according to the growth stage. Furthermore, the proximity of adult female fish was affected by their reproductive status and social familiarity. Only before spawning, adult females maintained closer proximity to familiar males rather than to unfamiliar males, suggesting that proximity was affected by familiarity in a female-specific manner. This simple behavioral paradigm will contribute to our understanding of the neural basis of the development of visually-mediated social behavior using medaka fish.
Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour.
Liu, Bao-Hua; Huberman, Andrew D; Scanziani, Massimo
2016-10-20
The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement mediated by the brainstem accessory optic system, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei, cortical lesions have suggested that the visual cortex might also be involved. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment, is diminished by silencing visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area that has been principally studied for its sensory processing function, to plastically adapt the execution of innate motor behaviours.
Common and distinct brain networks underlying verbal and visual creativity.
Zhu, Wenfeng; Chen, Qunlin; Xia, Lingxiang; Beaty, Roger E; Yang, Wenjing; Tian, Fang; Sun, Jiangzhou; Cao, Guikang; Zhang, Qinglin; Chen, Xu; Qiu, Jiang
2017-04-01
Creativity is imperative to the progression of human civilization, prosperity, and well-being. Past creativity research has tended to emphasize the default mode network (DMN) or the frontoparietal network (FPN) somewhat exclusively. However, little is known about how these networks interact to contribute to creativity and whether common or distinct brain networks are responsible for visual and verbal creativity. Here, we use functional connectivity analysis of resting-state functional magnetic resonance imaging data to investigate visual and verbal creativity-related regions and networks in 282 healthy subjects. We found that functional connectivity within the bilateral superior parietal cortex of the FPN was negatively associated with visual and verbal creativity. The strength of connectivity between the DMN and FPN was positively related to both creative domains. Visual creativity was negatively correlated with functional connectivity within the precuneus of the pDMN and right middle frontal gyrus of the FPN, and verbal creativity was negatively correlated with functional connectivity within the medial prefrontal cortex of the aDMN. Critically, the FPN mediated the relationship between the aDMN and verbal creativity, and it also mediated the relationship between the pDMN and visual creativity. Taken together, decreased within-network connectivity of the FPN and DMN may allow for flexible between-network coupling in the highly creative brain. These findings provide indirect evidence for the cooperative role of the default and executive control networks in creativity, extending past research by revealing common and distinct brain systems underlying verbal and visual creative cognition. Hum Brain Mapp 38:2094-2111, 2017. © 2017 Wiley Periodicals, Inc.
Retinal ganglion cell maps in the brain: implications for visual processing.
Dhande, Onkar S; Huberman, Andrew D
2014-02-01
Everything the brain knows about the content of the visual world is built from the spiking activity of retinal ganglion cells (RGCs). As the output neurons of the eye, RGCs include ∼20 different subtypes, each responding best to a specific feature in the visual scene. Here we discuss recent advances in identifying where different RGC subtypes route visual information in the brain, including which targets they connect to and how their organization within those targets influences visual processing. We also highlight examples where causal links have been established between specific RGC subtypes, their maps of central connections and defined aspects of light-mediated behavior and we suggest the use of techniques that stand to extend these sorts of analyses to circuits underlying visual perception. Copyright © 2013. Published by Elsevier Ltd.
Cheap or Robust? The practical realization of self-driving wheelchair technology.
Burhanpurkar, Maya; Labbe, Mathieu; Guan, Charlie; Michaud, Francois; Kelly, Jonathan
2017-07-01
To date, self-driving experimental wheelchair technologies have been either inexpensive or robust, but not both. Yet, in order to achieve real-world acceptance, both qualities are fundamentally essential. We present a unique approach to achieve inexpensive and robust autonomous and semi-autonomous assistive navigation for existing fielded wheelchairs, of which there are approximately 5 million units in Canada and the United States alone. Our prototype wheelchair platform is capable of localization and mapping, as well as robust obstacle avoidance, using only a commodity RGB-D sensor and wheel odometry. As a specific example of the navigation capabilities, we focus on the single most common navigation problem: the traversal of narrow doorways in arbitrary environments. The software we have developed is generalizable to corridor following, desk docking, and other navigation tasks that are either extremely difficult or impossible for people with upper-body mobility impairments.
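The abstract does not spell out how doorway traversal is performed; purely as an illustration of the general idea of steering through the widest free opening seen by a depth sensor, here is a minimal sketch. It assumes the RGB-D depth image has already been reduced to a single 2-D range scan (one range per bearing), and the function name and clearance threshold are hypothetical, not taken from the authors' system.

```python
import numpy as np

def widest_gap_heading(ranges, angles, min_clearance=1.5):
    """Return the bearing through the widest run of 'free' rays, or None.

    ranges: (n,) distances of a 2-D scan derived from the depth image
            (e.g., the minimum depth in each image column).
    angles: (n,) bearing of each ray in radians, same ordering as ranges.
    min_clearance: range in metres beyond which a ray counts as free space.
    """
    free = ranges > min_clearance
    best_len, best_span, start = 0, None, None
    for i, f in enumerate(np.append(free, False)):  # sentinel closes the last run
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start > best_len:
                best_len, best_span = i - start, (start, i - 1)
            start = None
    if best_span is None:
        return None                         # no traversable gap visible
    lo, hi = best_span
    return 0.5 * (angles[lo] + angles[hi])  # steer toward the gap midpoint
```

A real doorway-traversal behaviour would also check the gap width against the wheelchair footprint and re-plan as the chair moves, but the gap-midpoint idea is the common core.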
Anisotropic encoding of three-dimensional space by place cells and grid cells
Hayman, R.; Verriotis, M.; Jovalekic, A.; Fenton, A.A.; Jeffery, K.J.
2011-01-01
The subjective sense of space may result in part from the combined activity of place cells, in the hippocampus, and grid cells in posterior cortical regions such as entorhinal cortex and pre/parasubiculum. In horizontal planar environments, place cells provide focal positional information while grid cells supply odometric (distance-measuring) information. How these cells operate in three dimensions is unknown, even though the real world is three–dimensional. The present study explored this issue in rats exploring two different kinds of apparatus, a climbing wall (the “pegboard”) and a helix. Place and grid cell firing fields had normal horizontal characteristics but were elongated vertically, with grid fields forming stripes. It appears that grid cell odometry (and by implication path integration) is impaired/absent in the vertical domain, at least when the animal itself remains horizontal. These findings suggest that the mammalian encoding of three-dimensional space is anisotropic. PMID:21822271
Improving Odometric Accuracy for an Autonomous Electric Cart.
Toledo, Jonay; Piñeiro, Jose D; Arnay, Rafael; Acosta, Daniel; Acosta, Leopoldo
2018-01-12
In this paper, a study of the odometric system for the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart's equations of motion and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameters and the distance between the wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then extended with a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is also used to learn the odometric model from data. Tests are made with this neural network in several configurations, and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.
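The odometric model described above builds on the usual differential-drive dead-reckoning equations, with the wheel diameters and wheel separation as the parameters to be calibrated. A minimal sketch of that underlying update is given below; the symbol names are generic, and the paper's least-squares calibration and neural-network variant are not reproduced here.

```python
import math

def odometry_step(x, y, theta, d_ticks_l, d_ticks_r,
                  ticks_per_rev, wheel_diam_l, wheel_diam_r, track_width):
    """One dead-reckoning update for a differential-drive vehicle.

    d_ticks_l, d_ticks_r: encoder tick increments since the last update.
    wheel_diam_l, wheel_diam_r, track_width: the geometric parameters a
    calibration procedure (e.g., least squares over a reference trajectory)
    would tune.
    """
    dl = math.pi * wheel_diam_l * d_ticks_l / ticks_per_rev  # left wheel arc length
    dr = math.pi * wheel_diam_r * d_ticks_r / ticks_per_rev  # right wheel arc length
    ds = 0.5 * (dl + dr)               # distance travelled by the axle midpoint
    dtheta = (dr - dl) / track_width   # change in heading
    # Integrate along the average heading of this step (midpoint rule).
    x += ds * math.cos(theta + 0.5 * dtheta)
    y += ds * math.sin(theta + 0.5 * dtheta)
    theta += dtheta
    return x, y, theta
```

Errors in the wheel diameters and track width accumulate with distance travelled, which is why the paper fits these parameters by least squares and then tracks wheel diameter online.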
Robust position estimation of a mobile vehicle
NASA Astrophysics Data System (ADS)
Conan, Vania; Boulanger, Pierre; Elgazzar, Shadia
1994-11-01
The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask part of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is basically independent of odometry information and viewpoint, robust to noisy data and spurious points and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial; it runs in O(m⁴n⁴), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behavior even when the scene is very occluded.
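The hypothesis/verification paradigm mentioned above can be illustrated with a generic matching loop: propose a pose from a small set of model-to-scene patch correspondences, then verify it by counting how many remaining patches it explains. The sketch below is only schematic; the pose_from_pairs and residual callables stand in for the paper's actual geometric constraints, and the tolerance is an arbitrary placeholder.

```python
from itertools import combinations

def hypothesise_and_verify(model_patches, scene_patches,
                           pose_from_pairs, residual, tol=0.05):
    """Generic hypothesis/verification matching loop.

    pose_from_pairs((m1, s1), (m2, s2)) -> candidate pose, or None if the
        two correspondences are geometrically incompatible.
    residual(m, s, pose) -> alignment error of a model/scene pair under pose.
    Returns the pose with the largest support (number of consistent pairs).
    """
    best_pose, best_support = None, -1
    for m1, m2 in combinations(model_patches, 2):        # hypothesis: a model pair
        for s1, s2 in combinations(scene_patches, 2):    # ... matched to a scene pair
            pose = pose_from_pairs((m1, s1), (m2, s2))
            if pose is None:
                continue
            # Verification: count patches consistent with the candidate pose.
            support = sum(1 for m in model_patches for s in scene_patches
                          if residual(m, s, pose) < tol)
            if support > best_support:
                best_pose, best_support = pose, support
    return best_pose, best_support
```

Enumerating pairs of correspondences and verifying each candidate against all patches is what gives this family of methods its polynomial but high-order complexity; the heuristics mentioned in the abstract prune the hypothesis space.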
Transient cardio-respiratory responses to visually induced tilt illusions
NASA Technical Reports Server (NTRS)
Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.
2000-01-01
Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.
Longitudinal Analysis of Music Education on Executive Functions in Primary School Children.
Jaschke, Artur C; Honing, Henkjan; Scherder, Erik J A
2018-01-01
Background: Research on the effects of music education on cognitive abilities has generated increasing interest across the scientific community. Nonetheless, longitudinal studies investigating the effects of structured music education on cognitive sub-functions are still rare. Prime candidates for investigating a relationship between academic achievement and music education appear to be executive functions such as planning, working memory, and inhibition. Methods: One hundred and forty-seven primary school children, M age = 6.4 years, SD = 0.65 were followed for 2.5 years. Participants were randomized into four groups: two music intervention groups, one active visual arts group, and a no arts control group. Neuropsychological tests assessed verbal intelligence and executive functions. Additionally, a national pupil monitor provided data on academic performance. Results: Children in the visual arts group perform better on visuospatial memory tasks as compared to the three other conditions. However, the test scores on inhibition, planning and verbal intelligence increased significantly in the two music groups over time as compared to the visual art and no arts controls. Mediation analysis with executive functions and verbal IQ as mediator for academic performance have shown a possible far transfer effect from executive sub-function to academic performance scores. Discussion: The present results indicate a positive influence of long-term music education on cognitive abilities such as inhibition and planning. Of note, following a two-and-a-half year long visual arts program significantly improves scores on a visuospatial memory task. All results combined, this study supports a far transfer effect from music education to academic achievement mediated by executive sub-functions.
ERIC Educational Resources Information Center
Andrews, Lucy S.; Watson, Derrick G.; Humphreys, Glyn W.; Braithwaite, Jason J.
2011-01-01
Evidence for inhibitory processes in visual search comes from studies using preview conditions, where responses to new targets are delayed if they carry a featural attribute belonging to the old distractor items that are currently being ignored--the negative carry-over effect (Braithwaite, Humphreys, & Hodsoll, 2003). We examined whether…
ERIC Educational Resources Information Center
Altmann, Gerry T. M.; Kamide, Yuki
2009-01-01
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either "The woman will put the glass…
ERIC Educational Resources Information Center
Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.
2004-01-01
The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…
ERIC Educational Resources Information Center
May, Madeth; George, Sebastien; Prevot, Patrick
2011-01-01
Purpose: This paper presents a part of our research work that places an emphasis on Tracking Data Analysis and Visualization (TrAVis) tools, a web-based system, designed to enhance online tutoring and learning activities, supported by computer-mediated communication (CMC) tools. TrAVis is particularly dedicated to assist both tutors and students…
Learning from Instructional Animations: How Does Prior Knowledge Mediate the Effect of Visual Cues?
ERIC Educational Resources Information Center
Arslan-Ari, I.
2018-01-01
The purpose of this study was to investigate the effects of cueing and prior knowledge on learning and mental effort of students studying an animation with narration. This study employed a 2 (no cueing vs. visual cueing) × 2 (low vs. high prior knowledge) between-subjects factorial design. The results revealed a significant interaction effect…
Interactions between attention, context and learning in primary visual cortex.
Gilbert, C; Ito, M; Kapadia, M; Westheimer, G
2000-01-01
Attention in early visual processing engages the higher-order, context-dependent properties of neurons. Even at the earliest stages of visual cortical processing, neurons play a role in intermediate-level vision - contour integration and surface segmentation. The contextual influences mediating this process may be derived from long-range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning-dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.
Dynamic visualizations as tools for supporting cosmological literacy
NASA Astrophysics Data System (ADS)
Buck, Zoe Elizabeth
My dissertation research is designed to improve access to STEM content through the development of cosmology visualizations that support all learners as they engage in cosmological sense-making. To better understand how to design visualizations that work toward breaking cycles of power and access in the sciences, I orient my work to the following "meta-question": How might educators use visualizations to support diverse ways of knowing and learning in order to expand access to cosmology, and to science? In this dissertation, I address this meta-question from a pragmatic epistemological perspective, through a sociocultural lens, following three lines of inquiry: experimental methods (Creswell, 2003) with a focus on basic visualization design, activity analysis (Wells, 1996; Ash, 2001; Rahm, 2012) with a focus on culturally and linguistically diverse learners, and case study (Creswell, 2000) with a focus on expansive learning at a planetarium (Engestrom, 2001; Ash, 2014). My research questions are as follows, each of which corresponds to a self-contained course of inquiry with its own design, data, analysis and results: 1) Can mediational cues like color affect the way learners interpret the content in a cosmology visualization? 2) How do cosmology visualizations support cosmological sense-making for diverse students? 3) What are the shared objects of dynamic networks of activity around visualization production and use in a large, urban planetarium, and how do they affect learning? The result is a mixed-methods design (Sweetman, Badiee & Creswell, 2010) where both qualitative and quantitative data are used when appropriate to address my research goals. In the introduction I begin by establishing a theoretical framework for understanding visualizations within cultural-historical activity theory (CHAT) and situating the chapters that follow within that framework. I also introduce the concept of cosmological literacy, which I define as the set of conceptual, semiotic and cognitive resources required to understand the scientific Universe on a cosmological scale. In the first chapter I use quantitative methods to investigate how 122 postsecondary learners relied on mediational cues like color to interpret dark matter in a cosmology visualization. My results show that color can have a profound effect on the way that audiences interpret a dynamic cosmology visualization, suggesting a closer look at learning activity. Thus in the second chapter I look at how small groups of community college students use the visualizations to make sense of cosmology. I present evidence that when we look past linguistic fluency, visualizations can scaffold cosmological sense-making, which I define as engaging in object-oriented learning activity mediated by concepts and practices associated with cosmological literacy. In the third chapter I present a case study of an urban planetarium trying to define its goals at a time of transition, during and after the development of a visualization-based planetarium show. My analysis reveals several historical contradictions that appear to impel a shift toward affective goals within the institution and to drive the implementation of visualizations, particularly in the context of immersive planetarium shows. I problematize this result by repositioning the shift toward affective goals in the context of equity and diversity. Finally, in my conclusion, I present broad recommendations for visualization design and implementation based on my findings.
Age Differences in the Differentiation of Trait Impressions From Faces
Ng, Stacey Y.; Zebrowitz, Leslie A.; Franklin, Robert G.
2016-01-01
Objectives. We investigated whether evidence that older adults (OA) show less differentiation of visual stimuli than younger adults (YA) extends to trait impressions from faces and effects of face age. We also examined whether age differences in mood, vision, or cognition mediated differentiation differences. Finally, we investigated whether age differences in trait differentiation mediated differences in impression positivity. Method. We used a differentiation index adapted from previous work on stereotyping to assess OA and YA likelihood of assigning different faces to different levels on trait scales. We computed scores for ratings of older and younger faces’ competence, health, hostility, and untrustworthiness. Results. OA showed less differentiated trait ratings than YA. Measures of mood, vision, and cognition did not mediate these rater age differences. Hostility was differentiated more for younger than older faces, while health was differentiated more for older faces, but only by OA. Age differences in differentiation mediated age differences in impression positivity. Discussion. Less differentiation of trait impressions from faces in OA is consistent with previous evidence for less differentiation in face and emotion recognition. Results indicated that age-related dedifferentiation does not reflect narrow changes in visual function. They also provide a novel explanation for OA positivity effects. PMID:25194140
NASA Tech Briefs, October 2012
NASA Technical Reports Server (NTRS)
2012-01-01
Topics discussed include: Detection of Chemical Precursors of Explosives; Detecting Methane From Leaking Pipelines and as Greenhouse Gas in the Atmosphere; Onboard Sensor Data Qualification in Human-Rated Launch Vehicles; Rugged, Portable, Real-Time Optical Gaseous Analyzer for Hydrogen Fluoride; A Probabilistic Mass Estimation Algorithm for a Novel 7-Channel Capacitive Sample Verification Sensor; Low-Power Architecture for an Optical Life Gas Analyzer; Online Cable Tester and Rerouter; A Three-Frequency Feed for Millimeter-Wave Radiometry; Capacitance Probe Resonator for Multichannel Electrometer; Inverted Three-Junction Tandem Thermophotovoltaic Modules; Fabrication of Single Crystal MgO Capsules; Inflatable Hangar for Assembly of Large Structures in Space; Mars Aqueous Processing System; Hybrid Filter Membrane; Design for the Structure and the Mechanics of Moballs; Pressure Dome for High-Pressure Electrolyzer; Cascading Tesla Oscillating Flow Diode for Stirling Engine Gas Bearings; Compact, Low-Force, Low-Noise Linear Actuator; Ultra-Compact Motor Controller; Extreme Ionizing-Radiation-Resistant Bacterium; Wideband Single-Crystal Transducer for Bone Characterization; Fluorescence-Activated Cell Sorting of Live Versus Dead Bacterial Cells and Spores; Nonhazardous Urine Pretreatment Method; Laser-Ranging Transponders for Science Investigations of the Moon and Mars; Ka-Band Waveguide Three-Way Serial Combiner for MMIC Amplifiers; Structural Health Monitoring with Fiber Bragg Grating and Piezo Arrays; Low-Gain Circularly Polarized Antenna with Torus-Shaped Pattern; Stereo and IMU- Assisted Visual Odometry for Small Robots; Global Swath and Gridded Data Tiling; GOES-R: Satellite Insight; Aquarius iPhone Application; Monitoring of International Space Station Telemetry Using Shewhart Control Charts; Theory of a Traveling Wave Feed for a Planar Slot Array Antenna; Time Manager Software for a Flight Processor; Simulation of Oxygen Disintegration and Mixing With Hydrogen or Helium at Supercritical Pressure; A Superfluid Pulse Tube Refrigerator Without Moving Parts for Sub-Kelvin Cooling; Sapphire Viewports for a Venus Probe; The Mobile Chamber; Electric Propulsion Induced Secondary Mass Spectroscopy; and Radiation-Tolerant DC-DC Converters.
NASA Tech Briefs, December 2007
NASA Technical Reports Server (NTRS)
2007-01-01
Topics include: Ka-Band TWT High-Efficiency Power Combiner for High-Rate Data Transmission; Reusable, Extensible High-Level Data-Distribution Concept; Processing Satellite Imagery To Detect Waste Tire Piles; Monitoring by Use of Clusters of Sensor-Data Vectors; Circuit and Method for Communication Over DC Power Line; Switched Band-Pass Filters for Adaptive Transceivers; Noncoherent DTTLs for Symbol Synchronization; High-Voltage Power Supply With Fast Rise and Fall Times; Waveguide Calibrator for Multi-Element Probe Calibration; Four-Way Ka-Band Power Combiner; Loss-of-Control-Inhibitor Systems for Aircraft; Improved Underwater Excitation-Emission Matrix Fluorometer; Metrology Camera System Using Two-Color Interferometry; Design and Fabrication of High-Efficiency CMOS/CCD Imagers; Foam Core Shielding for Spacecraft; CHEM-Based Self-Deploying Planetary Storage Tanks; Sequestration of Single-Walled Carbon Nanotubes in a Polymer; PPC750 Performance Monitor; Application-Program-Installer Builder; Using Visual Odometry to Estimate Position and Attitude; Design and Data Management System; Simple, Script-Based Science Processing Archive; Automated Rocket Propulsion Test Management; Online Remote Sensing Interface; Fusing Image Data for Calculating Position of an Object; Implementation of a Point Algorithm for Real-Time Convex Optimization; Handling Input and Output for COAMPS; Modeling and Grid Generation of Iced Airfoils; Automated Identification of Nucleotide Sequences; Balloon Design Software; Rocket Science 101 Interactive Educational Program; Creep Forming of Carbon-Reinforced Ceramic-Matrix Composites; Dog-Bone Horns for Piezoelectric Ultrasonic/Sonic Actuators; Benchtop Detection of Proteins; Recombinant Collagenlike Proteins; Remote Sensing of Parasitic Nematodes in Plants; Direct Coupling From WGM Resonator Disks to Photodetectors; Using Digital Radiography To Image Liquid Nitrogen in Voids; Multiple-Parameter, Low-False-Alarm Fire-Detection Systems; Mosaic-Detector-Based Fluorescence Spectral Imager; Plasmoid Thruster for High Specific-Impulse Propulsion; Analysis Method for Quantifying Vehicle Design Goals; Improved Tracking of Targets by Cameras on a Mars Rover; Sample Caching Subsystem; Multistage Passive Cooler for Spaceborne Instruments; GVIPS Models and Software; and Stowable Energy-Absorbing Rocker-Bogie Suspensions.
Laser- and Multi-Spectral Monitoring of Natural Objects from UAVs
NASA Astrophysics Data System (ADS)
Reiterer, Alexander; Frey, Simon; Koch, Barbara; Stemmler, Simon; Weinacker, Holger; Hoffmann, Annemarie; Weiler, Markus; Hergarten, Stefan
2016-04-01
The paper describes the research, development and evaluation of a lightweight sensor system for UAVs. The system is composed of three main components: (1) a laser scanning module, (2) a multi-spectral camera system, and (3) a processing/storage unit. All three components are newly developed. Besides measurement precision and frequency, achieving low weight was one of the main challenges. The current system has a total weight of about 2.5 kg and is designed as a self-contained unit (incl. storage and battery units). The main features of the system are: laser-based multi-echo 3D measurement at a wavelength of 905 nm (fully eye-safe), measurement range up to 200 m, measurement frequency of 40 kHz, scanning frequency of 16 Hz, and relative distance accuracy of 10 mm. The system is equipped with both GNSS and an IMU. Alternatively, a multi-visual-odometry system has been integrated to estimate the trajectory of the UAV from image features (based on this system a calculation of 3D coordinates without GNSS is possible). The integrated multi-spectral camera system is based on conventional CMOS image chips equipped with a special set of band-pass interference filters with a full width at half maximum (FWHM) of 50 nm. Good results for calculating the normalized difference vegetation index (NDVI) and the wide dynamic range vegetation index (WDRVI) have been achieved using the band-pass interference filter set with an FWHM of 50 nm and exposure times between 5,000 μs and 7,000 μs. The system is currently used for monitoring of natural objects and surfaces, such as forests, as well as for geo-risk analysis (landslides). By measuring 3D geometric and multi-spectral information, a reliable monitoring and interpretation of the data set is possible. The paper gives an overview of the development steps, the system, the evaluation and first results.
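As a concrete illustration of the two vegetation indices mentioned above, the sketch below computes NDVI and WDRVI from co-registered near-infrared and red reflectance images. The array names and the weighting coefficient alpha are illustrative assumptions (alpha is commonly chosen around 0.1-0.2 for WDRVI); band extraction and radiometric calibration for the camera described here are assumed to have been done already.

```python
# Vegetation indices from co-registered reflectance images (illustrative sketch).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    denom = np.clip(nir + red, 1e-6, None)          # avoid division by zero
    return (nir - red) / denom

def wdrvi(nir: np.ndarray, red: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Wide dynamic range vegetation index: (a*NIR - Red) / (a*NIR + Red)."""
    denom = np.clip(alpha * nir + red, 1e-6, None)
    return (alpha * nir - red) / denom
```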
Madden, David J.
2007-01-01
Older adults are often slower and less accurate than are younger adults in performing visual-search tasks, suggesting an age-related decline in attentional functioning. Age-related decline in attention, however, is not entirely pervasive. Visual search that is based on the observer’s expectations (i.e., top-down attention) is relatively preserved as a function of adult age. Neuroimaging research suggests that age-related decline occurs in the structure and function of brain regions mediating the visual sensory input, whereas activation of regions in the frontal and parietal lobes is often greater for older adults than for younger adults. This increased activation may represent an age-related increase in the role of top-down attention during visual tasks. To obtain a more complete account of age-related decline and preservation of visual attention, current research is beginning to explore the relation of neuroimaging measures of brain structure and function to behavioral measures of visual attention. PMID:18080001
Predicting Visual Consciousness Electrophysiologically from Intermittent Binocular Rivalry
O’Shea, Robert P.; Kornmeier, Jürgen; Roeber, Urte
2013-01-01
Purpose We sought brain activity that predicts visual consciousness. Methods We used electroencephalography (EEG) to measure brain activity to a 1000-ms display of sine-wave gratings, oriented vertically in one eye and horizontally in the other. This display yields binocular rivalry: irregular alternations in visual consciousness between the images viewed by the eyes. We replaced both gratings with 200 ms of darkness, the gap, before showing a second display of the same rival gratings for another 1000 ms. We followed this by a 1000-ms mask then a 2000-ms inter-trial interval (ITI). Eleven participants pressed keys after the second display in numerous trials to say whether the orientation of the visible grating changed from before to after the gap or not. Each participant also responded to numerous non-rivalry trials in which the gratings had identical orientations for the two eyes and for which the orientation of both either changed physically after the gap or did not. Results We found that greater activity from lateral occipital-parietal-temporal areas about 180 ms after initial onset of rival stimuli predicted a change in visual consciousness more than 1000 ms later, on re-presentation of the rival stimuli. We also found that less activity from parietal, central, and frontal electrodes about 400 ms after initial onset of rival stimuli predicted a change in visual consciousness about 800 ms later, on re-presentation of the rival stimuli. There was no such predictive activity when the change in visual consciousness occurred because the stimuli changed physically. Conclusion We found early EEG activity that predicted later visual consciousness. Predictive activity 180 ms after onset of the first display may reflect adaption of the neurons mediating visual consciousness in our displays. Predictive activity 400 ms after onset of the first display may reflect a less-reliable brain state mediating visual consciousness. PMID:24124536
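To make the analysis idea above more concrete, the sketch below shows one simple way to test whether activity in an early time window (e.g., around 180 ms after stimulus onset) predicts a later perceptual outcome, using window-averaged amplitudes and a cross-validated logistic regression. This is a generic decoding stand-in, not the authors' ERP analysis; the data shapes and names (epochs, times, labels) are hypothetical.

```python
# Sketch: relate early window-averaged EEG amplitude to a later perceptual switch.
# epochs: (n_trials, n_channels, n_samples), times: seconds, labels: 0/1 per trial.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def window_features(epochs: np.ndarray, times: np.ndarray,
                    t_min: float, t_max: float) -> np.ndarray:
    mask = (times >= t_min) & (times <= t_max)
    return epochs[:, :, mask].mean(axis=2)            # (n_trials, n_channels)

def decode_switch(epochs, times, labels, t_min=0.15, t_max=0.21) -> float:
    X = window_features(epochs, times, t_min, t_max)  # window around 180 ms
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=5).mean()  # chance level ~ 0.5
```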
ERIC Educational Resources Information Center
Friedman, Arielle
2016-01-01
The study examines two years of an educational program for children aged three to four, based on the use of digital cameras. It assesses the program's effects on the children and adults involved in the project, and explores how they help the youngsters acquire visual literacy. Operating under the assumption that formal curricula usually…
ERIC Educational Resources Information Center
Ernst, Hardy; McGahan, William T.; Harrison, John
2015-01-01
This paper reports on attempts to incorporate creative visual literacy, by way of student owned technology, and sharing of student-generated multimedia amongst peers to enhance learning in a first year human physiology course. In 2013, students were set the task of producing an animated video, which outlined the pathogenesis of a chosen disease.…
Webb, Christina E.; Turney, Indira C.; Dennis, Nancy A.
2017-01-01
The current study used a novel scene paradigm to investigate the role of encoding schemas on memory. Specifically, the study examined the influence of a strong encoding schema on retrieval of both schematic and non-schematic information, as well as false memories for information associated with the schema. Additionally, the separate roles of recollection and familiarity in both veridical and false memory retrieval were examined. The study identified several novel results. First, while many common neural regions mediated both schematic and non-schematic retrieval success, schematic recollection exhibited greater activation in visual cortex and hippocampus, regions commonly shown to mediate detailed retrieval. More effortful cognitive control regions in the prefrontal and parietal cortices, on the other hand, supported non-schematic recollection, while lateral temporal cortices supported familiarity-based retrieval of non-schematic items. Second, both true and false recollection, as well as familiarity, were mediated by activity in left middle temporal gyrus, a region associated with semantic processing and retrieval of schematic gist. Moreover, activity in this region was greater for both false recollection and false familiarity, suggesting a greater reliance on lateral temporal cortices for retrieval of illusory memories, irrespective of memory strength. Consistent with previous false memory studies, visual cortex showed increased activity for true compared to false recollection, suggesting that visual cortices are critical for distinguishing between previously viewed targets and related lures at retrieval. Additionally, the absence of common visual activity between true and false retrieval suggests that, unlike previous studies utilizing visual stimuli, when false memories are predicated on schematic gist and not perceptual overlap, there is little reliance on visual processes during false memory retrieval. Finally, the medial temporal lobe exhibited an interesting dissociation, showing greater activity for true compared to false recollection, as well as for false compared to true familiarity. These results provided an indication as to how different types of items are retrieved when studied within a highly schematic context. Results both replicate and extend previous true and false memory findings, supporting the Fuzzy Trace Theory. PMID:27697593
Ronchi, Roberta; Bello-Ruiz, Javier; Lukowska, Marta; Herbelin, Bruno; Cabrilo, Ivan; Schaller, Karl; Blanke, Olaf
2015-04-01
Recent evidence suggests that multisensory integration of bodily signals involving exteroceptive and interoceptive information modulates bodily aspects of self-consciousness such as self-identification and self-location. In the so-called Full Body Illusion subjects watch a virtual body being stroked while they perceive tactile stimulation on their own body inducing illusory self-identification with the virtual body and a change in self-location towards the virtual body. In a related illusion, it has recently been shown that similar changes in self-identification and self-location can be observed when an interoceptive signal is used in association with visual stimulation of the virtual body (i.e., participants observe a virtual body illuminated in synchrony with their heartbeat). Although brain imaging and neuropsychological evidence suggest that the insular cortex is a core region for interoceptive processing (such as cardiac perception and awareness) as well as for self-consciousness, it is currently not known whether the insula mediates cardio-visual modulation of self-consciousness. Here we tested the involvement of insular cortex in heartbeat awareness and cardio-visual manipulation of bodily self-consciousness in a patient before and after resection of a selective right neoplastic insular lesion. Cardio-visual stimulation induced an abnormally enhanced state of bodily self-consciousness; in addition, cardio-visual manipulation was associated with an experienced loss of the spatial unity of the self (illusory bi-location and duplication of his body), not observed in healthy subjects. Heartbeat awareness was found to decrease after insular resection. Based on these data we propose that the insula mediates interoceptive awareness as well as cardio-visual effects on bodily self-consciousness and that insular processing of interoceptive signals is an important mechanism for the experienced unity of the self. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour
Liu, Bao-hua; Huberman, Andrew D.; Scanziani, Massimo
2017-01-01
The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections1. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood1–4. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement mediated by the brainstem accessory optic system3,5,6, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision5. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life7–11. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei10–13, cortical lesions have suggested that the visual cortex might also be involved9,14,15. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment11,16–18, is diminished by silencing visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area that has been principally studied for its sensory processing function19, to plastically adapt the execution of innate motor behaviours. PMID:27732573
The effect of sildenafil citrate (Viagra) on visual sensitivity.
Stockman, Andrew; Sharpe, Lindsay T; Tufail, Adnan; Kell, Philip D; Ripamonti, Caterina; Jeffery, Glen
2007-06-08
The erectile dysfunction medicine sildenafil citrate (Viagra) inhibits phosphodiesterase type 6 (PDE6), an essential enzyme involved in the activation and modulation of the phototransduction cascade. Although Viagra might thus be expected to impair visual performance, reports of deficits following its ingestion have so far been largely inconclusive or anecdotal. Here, we adopt tests sensitive to the slowing of the visual response likely to result from the inhibition of PDE6. We measured temporal acuity (critical fusion frequency) and modulation sensitivity in four subjects before and after the ingestion of a 100-mg dose of Viagra under conditions chosen to isolate the responses of either their short-wavelength-sensitive (S-) cone photoreceptors or their long- and middle-wavelength-sensitive (L- and M-) cones. When vision was mediated by S-cones, all subjects exhibited some statistically significant losses in sensitivity, which varied from mild to moderate. The two individuals who showed the largest S-cone sensitivity losses also showed comparable losses when their vision was mediated by the L- and M-cones. Some of the losses appear to increase with frequency, which is broadly consistent with Viagra interfering with the ability of PDE6 to shorten the time over which the visual system integrates signals as the light level increases. However, others appear to represent a roughly frequency-independent attenuation of the visual signal, which might also be consistent with Viagra lengthening the integration time (because it has the effect of increasing the effectiveness of steady background lights), but such changes are also open to other interpretations. Even for the more affected observers, however, Viagra is unlikely to impair common visual tasks, except under conditions of reduced visibility when objects are already near visual threshold.
Härer, Andreas; Torres-Dowdall, Julián; Meyer, Axel
2017-10-01
Colonization of novel habitats is typically challenging to organisms. In the initial stage after colonization, approximation to fitness optima in the new environment can occur by selection acting on standing genetic variation, modification of developmental patterns or phenotypic plasticity. Midas cichlids have recently colonized crater Lake Apoyo from great Lake Nicaragua. The photic environment of crater Lake Apoyo is shifted towards shorter wavelengths compared to great Lake Nicaragua and Midas cichlids from both lakes differ in visual sensitivity. We investigated the contribution of ontogeny and phenotypic plasticity in shaping the visual system of Midas cichlids after colonizing this novel photic environment. To this end, we measured cone opsin expression both during development and after experimental exposure to different light treatments. Midas cichlids from both lakes undergo ontogenetic changes in cone opsin expression, but visual sensitivity is consistently shifted towards shorter wavelengths in crater lake fish, which leads to a paedomorphic retention of their visual phenotype. This shift might be mediated by lower levels of thyroid hormone in crater lake Midas cichlids (measured indirectly as dio2 and dio3 gene expression). Exposing fish to different light treatments revealed that cone opsin expression is phenotypically plastic in both species during early development, with short and long wavelength light slowing or accelerating ontogenetic changes, respectively. Notably, this plastic response was maintained into adulthood only in the derived crater lake Midas cichlids. We conclude that the rapid evolution of Midas cichlids' visual system after colonizing crater Lake Apoyo was mediated by a shift in visual sensitivity during ontogeny and was further aided by phenotypic plasticity during development. © 2017 John Wiley & Sons Ltd.
Age mediation of frontoparietal activation during visual feature search.
Madden, David J; Parks, Emily L; Davis, Simon W; Diaz, Michele T; Potter, Guy G; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto
2014-11-15
Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19-29 years of age) and 21 older adults (60-87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. Copyright © 2014 Elsevier Inc. All rights reserved.
Edwards, Jerri D.; Ruva, Christine L.; O’Brien, Jennifer L.; Haley, Christine B.; Lister, Jennifer J.
2013-01-01
The purpose of these analyses was to examine mediators of the transfer of cognitive speed of processing training to improved everyday functional performance (Edwards, Wadley, Vance, Roenker, & Ball, 2005). Cognitive speed of processing and visual attention (as measured by the Useful Field of View Test; UFOV) were examined as mediators of training transfer. Secondary data analyses were conducted from the Staying Keen in Later Life (SKILL) study, a randomized cohort study including 126 community dwelling adults 63 to 87 years of age. In the SKILL study, participants were randomized to an active control group or cognitive speed of processing training (SOPT), a non-verbal, computerized intervention involving perceptual practice of visual tasks. Prior analyses found significant effects of training as measured by the UFOV and Timed Instrumental Activities of Daily Living (TIADL) Tests. Results from the present analyses indicate that speed of processing for a divided attention task significantly mediated the effect of SOPT on everyday performance (e.g., TIADL) in a multiple mediation model accounting for 91% of the variance. These findings suggest that everyday functional improvements found from SOPT are directly attributable to improved UFOV performance, speed of processing for divided attention in particular. Targeting divided attention in cognitive interventions may be important to positively affect everyday functioning among older adults. PMID:23066808
Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina
Venkataramani, Sowmya
2016-01-01
Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. SIGNIFICANCE STATEMENT A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing. PMID:26985041
Thomas, Alyssa R; Lacadie, Cheryl; Vohr, Betty; Ment, Laura R; Scheinost, Dustin
2017-01-01
Adolescents born preterm (PT) with no evidence of neonatal brain injury are at risk of deficits in visual memory and fine motor skills that diminish academic performance. The association between these deficits and white matter microstructure is relatively unexplored. We studied 190 PTs with no brain injury and 92 term controls at age 16 years. The Rey-Osterrieth Complex Figure Test (ROCF), the Beery visual-motor integration (VMI), and the Grooved Pegboard Test (GPT) were collected for all participants, while a subset (40 PTs and 40 terms) underwent diffusion-weighted magnetic resonance imaging. PTs performed more poorly than terms on ROCF, VMI, and GPT (all P < 0.01). Mediation analysis showed fine motor skill (GPT score) significantly mediates group difference in ROCF and VMI (all P < 0.001). PTs showed a negative correlation (P < 0.05, corrected) between fractional anisotropy (FA) in the bilateral middle cerebellar peduncles and GPT score, with higher FA correlating to lower (faster task completion) GPT scores, and between FA in the right superior cerebellar peduncle and ROCF scores. PTs also had a positive correlation (P < 0.05, corrected) between VMI and left middle cerebellar peduncle FA. Novel strategies to target fine motor skills and the cerebellum may help PTs reach their full academic potential. © The Author 2017. Published by Oxford University Press.
Effects of mediated social touch on affective experiences and trust.
Erk, Stefanie M; Toet, Alexander; Van Erp, Jan B F
2015-01-01
This study investigated whether communication via mediated hand pressure during a remotely shared experience (watching an amusing video) can (1) enhance recovery from sadness, (2) enhance the affective quality of the experience, and (3) increase trust towards the communication partner. Thereto participants first watched a sad movie clip to elicit sadness, followed by a funny one to stimulate recovery from sadness. While watching the funny clip they signaled a hypothetical fellow participant every time they felt amused. In the experimental condition the participants responded by pressing a hand-held two-way mediated touch device (a Frebble), which also provided haptic feedback via simulated hand squeezes. In the control condition they responded by pressing a button and they received abstract visual feedback. Objective (heart rate, galvanic skin conductance, number and duration of joystick or Frebble presses) and subjective (questionnaires) data were collected to assess the emotional reactions of the participants. The subjective measurements confirmed that the sad movie successfully induced sadness while the funny movie indeed evoked more positive feelings. Although their ranking agreed with the subjective measurements, the physiological measurements confirmed this conclusion only for the funny movie. The results show that recovery from movie induced sadness, the affective experience of the amusing movie, and trust towards the communication partner did not differ between both experimental conditions. Hence, feedback via mediated hand touching did not enhance either of these factors compared to visual feedback. Further analysis of the data showed that participants scoring low on Extraversion (i.e., persons that are more introvert) or low on Touch Receptivity (i.e., persons who do not like to be touched by others) felt better understood by their communication partner when receiving mediated touch feedback instead of visual feedback, while the opposite was found for participants scoring high on these factors. The implications of these results for further research are discussed, and some suggestions for follow-up experiments are presented.
Nicklas Samils; Malin Elfstrand; Daniel L. Lindner Czederpiltz; Jan Fahleson; Ake Olson; Christina Dixelius; Jan Stenlid
2006-01-01
Heterobasidion annosum causes root and butt-rot in trees and is the most serious forest pathogen in the northern hemisphere. We developed a rapid and simple Agrobacterium-mediated method of gene delivery into H. annosum to be used in functional studies of candidate genes and for visualization of mycelial interactions. Heterobasidion annosum TC 32-1 was cocultivated at...
Contrast adaptation in cat visual cortex is not mediated by GABA.
DeBruyn, E J; Bonds, A B
1986-09-24
The possible involvement of gamma-aminobutyric acid (GABA) in contrast adaptation in single cells in area 17 of the cat was investigated. Iontophoretic application of N-methyl bicuculline increased cell responses, but had no effect on the magnitude of adaptation. These results suggest that contrast adaptation is the result of inhibition through a parallel pathway, but that GABA does not mediate this process.
Mertz, J R; Wallman, J
2000-04-01
Research over the past two decades has shown that the growth of young eyes is guided by vision. If near- or far-sightedness is artificially imposed by spectacle lenses, eyes of primates and chicks compensate by changing their rate of elongation, thereby growing back to the pre-lens optical condition. Little is known about what chemical signals might mediate between visual effects on the retina and alterations of eye growth. We present five findings that point to choroidal retinoic acid possibly being such a mediator. First, the chick choroid can convert retinol into all-trans-retinoic acid at the rate of 11 ± 3 pmol mg protein⁻¹ hr⁻¹, compared to 1.3 ± 0.3 for retina/RPE and no conversion for sclera. Second, those visual conditions that cause increased rates of ocular elongation (diffusers or negative lens wear) produce a sharp decrease in all-trans-retinoic acid synthesis to levels barely detectable with our assay. In contrast, visual conditions which result in decreased rates of ocular elongation (recovery from diffusers or positive lens wear) produce a four- to five-fold increase in the formation of all-trans-retinoic acid. Third, the choroidal retinoic acid is found bound to a 28-32 kD protein. Fourth, a large fraction of the choroidal retinoic acid synthesized in culture is found in a nucleus-enriched fraction of sclera. Finally, application of retinoic acid to cultured sclera at physiological concentrations produced an inhibition of proteoglycan production (as assessed by measuring sulfate incorporation) with an EC50 of 8 × 10⁻⁷ M. These results show that the synthesis of choroidal retinoic acid is modulated by those visual manipulations that influence ocular elongation and that this retinoic acid may reach the sclera in concentrations adequate to modulate scleral proteoglycan formation.
Effect of visual media use on school performance: a prospective study.
Sharif, Iman; Wills, Thomas A; Sargent, James D
2010-01-01
To identify mechanisms for the impact of visual media use on adolescents' school performance. We conducted a 24-month, four-wave longitudinal telephone study of a national sample of 6,486 youth aged 10 to 14 years. Exposure measures: latent construct for screen exposure time (weekday time spent viewing television/playing videogames, presence of television in bedroom) and variables for movie content (proportion of PG-13 and R movies viewed). Outcome measure: self- and parent reports of grades in school. Effects of media exposures on change in school performance between baseline and 24 months were assessed using structural equation modeling. Information about hypothesized mediators (substance use, sensation seeking, and school problem behavior) was obtained at baseline and at the 16-month follow-up. Adjusted for baseline school performance, baseline levels of mediators, and a range of covariates, both screen exposure time and media content had adverse effects on change in school performance. Screen exposure had an indirect effect on poor school performance through increased sensation seeking. Viewing more PG-13 and R-rated movies had indirect effects on poor school performance mediated through increases in substance use and sensation seeking. R-rated viewing also had an indirect effect on poor school performance through increased school behavior problems. The effect sizes of exposure time and content on the intermediate variables and ultimately on school performance were similar to those for previously recognized determinants of these mediators, including household income, parenting style, and adolescents' self-control. These aspects of visual media use adversely affect school performance by increasing sensation seeking, substance use, and school problem behavior. Copyright 2010 Society for Adolescent Medicine. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Holmes, Scott A.; Heath, Matthew
2013-01-01
An issue of continued debate in the visuomotor control literature surrounds whether a 2D object serves as a representative proxy for a 3D object in understanding the nature of the visual information supporting grasping control. In an effort to reconcile this issue, we examined the extent to which aperture profiles for grasping 2D and 3D objects…
ERIC Educational Resources Information Center
Keri, Szabolcs; Szamosi, Andras; Benedek, Gyorgy; Kelemen, Oguz
2012-01-01
Paired associates learning is impaired in both schizophrenia and amnestic mild cognitive impairment (aMCI), which may reflect hippocampal pathology. In addition, schizophrenia is characterized by the dysfunction of the retino-geniculo-striatal magnocellular (M) visual pathway. The purpose of this study was to investigate the interaction between…
Direct Visualization of Wide Fusion-Fission Pores and Their Highly Varied Dynamics.
Eyring, Katherine W; Tsien, Richard W
2018-05-03
In this issue of Cell, Shin et al. report the first live-cell imaging of a fusion pore. Directly visualized pores in neuroendocrine cells can be much larger than expected yet not require vesicular full-collapse. These fusion-fission pores have diverse fates arising from opposing dynamin-driven pore constriction and F-actin-mediated pore expansion. Copyright © 2018. Published by Elsevier Inc.
Cao, Zengguo; Wang, Hualei; Wang, Lina; Li, Ling; Jin, Hongli; Xu, Changping; Feng, Na; Wang, Jianzhong; Li, Qian; Zhao, Yongkun; Wang, Tiecheng; Gao, Yuwei; Lu, Yiyu; Yang, Songtao; Xia, Xianzhu
2016-01-01
West Nile virus (WNV) causes a severe zoonosis, which can lead to a large number of casualties and considerable economic losses. A rapid and accurate identification method for WNV for use in field laboratories is urgently needed. Here, a method utilizing reverse transcription loop-mediated isothermal amplification combined with a vertical flow visualization strip (RT-LAMP-VF) was developed to detect the envelope (E) gene of WNV. The RT-LAMP-VF assay could detect 10^2 copies/μl of a WNV RNA standard using a 40 min amplification reaction followed by a 2 min incubation of the amplification product on the visualization strip, and no cross-reaction with other closely related members of the Flavivirus genus was observed. The assay was further evaluated using cells and mouse brain tissues infected with a recombinant rabies virus expressing the E protein of WNV. The assay produced sensitivities of 10^1.5 TCID50/ml and 10^1.33 TCID50/ml for detection of the recombinant virus in the cells and brain tissues, respectively. Overall, the RT-LAMP-VF assay developed in this study is rapid, simple and effective, and it is therefore suitable for clinical application in the field.
Charpentier, Corie L; Cohen, Jonathan H
2015-11-01
Several predator avoidance strategies in zooplankton rely on the use of light to control vertical position in the water column. Although light is the primary cue for such photobehavior, predator chemical cues or kairomones increase swimming responses to light. We currently lack a mechanistic understanding for how zooplankton integrate visual and chemical cues to mediate phenotypic plasticity in defensive photobehavior. In marine systems, kairomones are thought to be amino sugar degradation products of fish body mucus. Here, we demonstrate that increasing concentrations of fish kairomones heightened sensitivity of light-mediated swimming behavior for two larval crab species (Rhithropanopeus harrisii and Hemigrapsus sanguineus). Consistent with these behavioral results, we report increased visual sensitivity at the retinal level in larval crab eyes directly following acute (1-3 h) kairomone exposure, as evidenced electrophysiologically from V-log I curves and morphologically from wider, shorter rhabdoms. The observed increases in visual sensitivity do not correspond with a decline in temporal resolution, because latency in electrophysiological responses actually increased after kairomone exposure. Collectively, these data suggest that phenotypic plasticity in larval crab photobehavior is achieved, at least in part, through rapid changes in photoreceptor structure and function. © 2015. Published by The Company of Biologists Ltd.
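The V-log I analysis mentioned above is commonly summarized by fitting an intensity-response function, where a shift in the half-saturation constant indicates a change in visual sensitivity. The sketch below fits a Naka-Rushton function with scipy as a generic illustration, assuming intensity and response are 1-D arrays from one recording; it is not necessarily the authors' fitting procedure.

```python
# Sketch: fit a Naka-Rushton intensity-response function,
# V = Vmax * I^n / (I^n + K^n), to V-log I data;
# a lower half-saturation constant K implies higher sensitivity.
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(intensity, v_max, k, n):
    return v_max * intensity**n / (intensity**n + k**n)

def fit_v_log_i(intensity: np.ndarray, response: np.ndarray):
    p0 = [response.max(), np.median(intensity), 1.0]   # rough starting guesses
    params, _ = curve_fit(naka_rushton, intensity, response, p0=p0, maxfev=10000)
    v_max, k, n = params
    return v_max, k, n
```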
Bastos, Andre M; Briggs, Farran; Alitto, Henry J; Mangun, George R; Usrey, W Martin
2014-05-28
Oscillatory synchronization of neuronal activity has been proposed as a mechanism to modulate effective connectivity between interacting neuronal populations. In the visual system, oscillations in the gamma-frequency range (30-100 Hz) are thought to subserve corticocortical communication. To test whether a similar mechanism might influence subcortical-cortical communication, we recorded local field potential activity from retinotopically aligned regions in the lateral geniculate nucleus (LGN) and primary visual cortex (V1) of alert macaque monkeys viewing stimuli known to produce strong cortical gamma-band oscillations. As predicted, we found robust gamma-band power in V1. In contrast, visual stimulation did not evoke gamma-band activity in the LGN. Interestingly, an analysis of oscillatory phase synchronization of LGN and V1 activity identified synchronization in the alpha (8-14 Hz) and beta (15-30 Hz) frequency bands. Further analysis of directed connectivity revealed that alpha-band interactions mediated corticogeniculate feedback processing, whereas beta-band interactions mediated geniculocortical feedforward processing. These results demonstrate that although the LGN and V1 display functional interactions in the lower frequency bands, gamma-band activity in the alert monkey is largely an emergent property of cortex. Copyright © 2014 the authors 0270-6474/14/347639-06$15.00/0.
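As a simplified companion to the spectral analyses described above, the sketch below computes band-limited LFP power and an undirected LGN-V1 coherence estimate with scipy.signal. The study itself used phase synchronization and directed-connectivity measures; coherence here is a simpler stand-in, and the array names and sampling rate are assumptions.

```python
# Sketch: band-limited power and LGN-V1 coherence from 1-D LFP traces sampled at fs (Hz).
import numpy as np
from scipy.signal import welch, coherence

def band_power(lfp: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    freqs, pxx = welch(lfp, fs=fs, nperseg=int(fs))      # ~1 Hz frequency resolution
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.trapz(pxx[band], freqs[band]))       # integrate PSD over the band

def band_coherence(lgn: np.ndarray, v1: np.ndarray, fs: float,
                   f_lo: float, f_hi: float) -> float:
    freqs, cxy = coherence(lgn, v1, fs=fs, nperseg=int(fs))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(cxy[band].mean())

# Bands used in the study: alpha (8-14 Hz), beta (15-30 Hz), gamma (30-100 Hz).
```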
Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon (e.g.) its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβposterior Kenyon cells.
Cognitive Psychology and Mathematical Thinking.
ERIC Educational Resources Information Center
Greer, Brian
1981-01-01
This review illustrates aspects of cognitive psychology relevant to the understanding of how people think mathematically. Developments in memory research, artificial intelligence, visually mediated processes, and problem-solving research are discussed. (MP)
Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation
Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina
2017-01-01
Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced “online” effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched with the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations. PMID:29326578
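As a rough illustration of the Hebbian plasticity invoked in the model, the sketch below applies a generic bounded Hebbian weight update to a toy retina-to-SC weight matrix; the rule, sizes, and learning rate are assumptions for illustration and do not reproduce the published model equations.

```python
# Hedged sketch of a generic bounded Hebbian update, illustrating the kind of
# plasticity rule such a training simulation might use; not the authors' exact rule.
import numpy as np

def hebbian_step(W, pre, post, lr=0.01, w_max=1.0):
    """W: (n_post, n_pre) weights; pre/post: activity vectors in [0, 1]."""
    dW = lr * np.outer(post, pre)          # potentiate co-active pairs
    return np.clip(W + dW, 0.0, w_max)     # keep weights bounded

rng = np.random.default_rng(0)
W = rng.uniform(0, 0.1, size=(4, 8))       # toy retina -> SC weight matrix
for _ in range(100):                        # repeated audiovisual "training" trials
    pre = rng.uniform(0, 1, 8)
    post = rng.uniform(0, 1, 4)
    W = hebbian_step(W, pre, post)
print(W.round(2))
```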
Wang, Chao; Rajagovindan, Rajasimhan; Han, Sahng-Min; Ding, Mingzhou
2016-01-01
Alpha oscillations (8–12 Hz) are thought to inversely correlate with cortical excitability. Goal-oriented modulation of alpha has been studied extensively. In visual spatial attention, alpha over the region of visual cortex corresponding to the attended location decreases, signifying increased excitability to facilitate the processing of impending stimuli. In contrast, in retention of verbal working memory, alpha over visual cortex increases, signifying decreased excitability to gate out stimulus input to protect the information held online from sensory interference. According to the prevailing model, this goal-oriented biasing of sensory cortex is effected by top-down control signals from frontal and parietal cortices. The present study tests and substantiates this hypothesis by (a) identifying the signals that mediate the top-down biasing influence, (b) examining whether the cortical areas issuing these signals are task-specific or task-independent, and (c) establishing the possible mechanism of the biasing action. High-density human EEG data were recorded in two experimental paradigms: a trial-by-trial cued visual spatial attention task and a modified Sternberg working memory task. Applying Granger causality to both sensor-level and source-level data we report the following findings. In covert visual spatial attention, the regions exerting top-down control over visual activity are lateralized to the right hemisphere, with the dipoles located at the right frontal eye field (FEF) and the right inferior frontal gyrus (IFG) being the main sources of top-down influences. During retention of verbal working memory, the regions exerting top-down control over visual activity are lateralized to the left hemisphere, with the dipoles located at the left middle frontal gyrus (MFG) being the main source of top-down influences. In both experiments, top-down influences are mediated by alpha oscillations, and the biasing effect is likely achieved via an inhibition-disinhibition mechanism. PMID:26834601
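Granger causality of the sort used above to assess directed top-down influences can be illustrated with a standard time-domain test. The sketch below applies statsmodels to simulated signals where one series drives the other at a lag; it does not reproduce the spectral, source-level analysis of the study.

```python
# Hedged sketch: a time-domain Granger test on toy data (statsmodels).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):                       # y is driven by x two samples earlier
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 2] + 0.1 * rng.standard_normal()

data = np.column_stack([y, x])              # tests whether column 2 (x) Granger-causes column 1 (y)
results = grangercausalitytests(data, maxlag=4)
f_stat, p_value, _, _ = results[4][0]["ssr_ftest"]
print(f"lag-4 F = {f_stat:.1f}, p = {p_value:.3g}")
```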
O'Modhrain, Sile; Giudice, Nicholas A; Gardner, John A; Legge, Gordon E
2015-01-01
This paper discusses issues of importance to designers of media for visually impaired users. The paper considers the influence of human factors on the effectiveness of presentation as well as the strengths and weaknesses of tactile, vibrotactile, haptic, and multimodal methods of rendering maps, graphs, and models. The authors, all of whom are visually impaired researchers in this domain, present findings from their own work and work of many others who have contributed to the current understanding of how to prepare and render images for both hard-copy and technology-mediated presentation of Braille and tangible graphics.
Phototaxis and the origin of visual eyes
Randel, Nadine
2016-01-01
Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725
GABA predicts visual intelligence.
Cook, Emily; Hammett, Stephen T; Larsson, Jonas
2016-10-06
Early psychological researchers proposed a link between intelligence and low-level perceptual performance. It was recently suggested that this link is driven by individual variations in the ability to suppress irrelevant information, evidenced by the observation of strong correlations between perceptual surround suppression and cognitive performance. However, the neural mechanisms underlying such a link remain unclear. A candidate mechanism is neural inhibition by gamma-aminobutyric acid (GABA), but direct experimental support for GABA-mediated inhibition underlying suppression is inconsistent. Here we report evidence consistent with a global suppressive mechanism involving GABA underlying the link between sensory performance and intelligence. We measured visual cortical GABA concentration, visuo-spatial intelligence and visual surround suppression in a group of healthy adults. Levels of GABA were strongly predictive of both intelligence and surround suppression, with higher levels of intelligence associated with higher levels of GABA and stronger surround suppression. These results indicate that GABA-mediated neural inhibition may be a key factor determining cognitive performance and suggest a physiological mechanism linking surround suppression and intelligence. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Xu, Changping; Wang, Hualei; Jin, Hongli; Feng, Na; Zheng, Xuexing; Cao, Zengguo; Li, Ling; Wang, Jianzhong; Yan, Feihu; Wang, Lina; Chi, Hang; Gai, Weiwei; Wang, Chong; Zhao, Yongkun; Feng, Yan; Wang, Tiecheng; Gao, Yuwei; Lu, Yiyu; Yang, Songtao; Xia, Xianzhu
2016-05-01
Ebola virus (species Zaire ebolavirus) (EBOV) is highly virulent in humans. The largest recorded outbreak of Ebola hemorrhagic fever in West Africa to date was caused by EBOV. Therefore, it is necessary to develop a detection method for this virus that can be easily distributed and implemented. In the current study, we developed a visual assay that can detect EBOV-associated nucleic acids. This assay combines reverse transcription loop-mediated isothermal amplification and nucleic acid strip detection (RT-LAMP-NAD). Nucleic acid amplification can be achieved in a one-step process at a constant temperature (58 °C, 35 min), and the amplified products can be visualized within 2-5 min using a nucleic acid strip detection device. The assay is capable of detecting 30 copies of artificial EBOV glycoprotein (GP) RNA and RNA encoding EBOV GP from 10^2 TCID50 recombinant viral particles per ml with high specificity. Overall, the RT-LAMP-NAD method is simple and has high sensitivity and specificity; therefore, it is especially suitable for the rapid detection of EBOV in African regions.
Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.
2012-01-01
The interaction between episodic retrieval and visual attention is relatively unexplored. Given that systems mediating attention and episodic memory appear to be segregated, and perhaps even in competition, it is unclear how visual attention is recruited during episodic retrieval. We investigated the recruitment of visual attention during the suppression of gist-based false recognition, the tendency to falsely recognize items that are similar to previously encountered items. Recruitment of visual attention was associated with activity in the dorsal attention network. The inferior parietal lobule, often implicated in episodic retrieval, tracked veridical retrieval of perceptual detail and showed reduced activity during the engagement of visual attention, consistent with a competitive relationship with the dorsal attention network. These findings suggest that the contribution of the parietal cortex to interactions between visual attention and episodic retrieval entails distinct systems that contribute to different components of the task while also suppressing each other. PMID:22998879
Defining the cortical visual systems: "what", "where", and "how"
NASA Technical Reports Server (NTRS)
Creem, S. H.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
2001-01-01
The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally as two distinct pathways in the brain, and functionally for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral "what" stream to process object information and a dorsal "where" stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a "how" system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both "where" and "how" systems that are functionally and structurally organized within the posterior parietal lobe.
Color-selective attention need not be mediated by spatial attention.
Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A
2009-06-08
It is well-established that attention can select stimuli for preferential processing on the basis of non-spatial features such as color, orientation, or direction of motion. Evidence is mixed, however, as to whether feature-selective attention acts by increasing the signal strength of to-be-attended features irrespective of their spatial locations or whether it acts by guiding the spotlight of spatial attention to locations containing the relevant feature. To address this question, we designed a task in which feature-selective attention could not be mediated by spatial selection. Participants observed a display of intermingled dots of two colors, which rapidly and unpredictably changed positions, with the task of detecting brief intervals of reduced luminance of 20% of the dots of one or the other color. Both behavioral indices and electrophysiological measures of steady-state visual evoked potentials showed selectively enhanced processing of the attended-color items. The results demonstrate that feature-selective attention produces a sensory gain enhancement at early levels of the visual cortex that occurs without mediation by spatial attention.
Bock, Otmar; Bury, Nils
2018-03-01
Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed midway between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62, and correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
A new neural framework for visuospatial processing.
Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Mishkin, Mortimer
2011-04-01
The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a 'What' pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception ('Where'), more recent accounts suggest it primarily serves non-conscious visually guided action ('How'). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively.
Tao, Zhi-Yong; Zhou, Hua-Yun; Xia, Hui; Xu, Sui; Zhu, Han-Wu; Culleton, Richard L; Han, Eun-Taek; Lu, Feng; Fang, Qiang; Gu, Ya-Ping; Liu, Yao-Bao; Zhu, Guo-Ding; Wang, Wei-Ming; Li, Ju-Lin; Cao, Jun; Gao, Qi
2011-06-21
Loop-mediated isothermal amplification (LAMP) is a high performance method for detecting DNA and holds promise for use in the molecular detection of infectious pathogens, including Plasmodium spp. However, in most malaria-endemic areas, which are often resource-limited, current LAMP methods are not feasible for diagnosis due to difficulties in accurately interpreting results with problems of sensitive visualization of amplified products, and the risk of contamination resulting from the high quantity of amplified DNA produced. In this study, we establish a novel visualized LAMP method in a closed-tube system, and validate it for the diagnosis of malaria under simulated field conditions. A visualized LAMP method was established by the addition of a microcrystalline wax-dye capsule containing the highly sensitive DNA fluorescence dye SYBR Green I to a normal LAMP reaction prior to the initiation of the reaction. A total of 89 blood samples were collected on filter paper and processed using a simple boiling method for DNA extraction, and then tested by the visualized LAMP method for Plasmodium vivax infection. The wax capsule remained intact during isothermal amplification, and released the DNA dye to the reaction mixture only when the temperature was raised to the melting point following amplification. Soon after cooling down, the solidified wax sealed the reaction mix at the bottom of the tube, thus minimizing the risk of aerosol contamination. Compared to microscopy, the sensitivity and specificity of LAMP were 98.3% (95% confidence interval (CI): 91.1-99.7%) and 100% (95% CI: 88.3-100%), and were in close agreement with a nested polymerase chain reaction method. This novel, cheap and quick visualized LAMP method is feasible for malaria diagnosis in resource-limited field settings.
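Sensitivity and specificity with 95% confidence intervals of the kind reported above can be reproduced in form with a Wilson score interval. The sketch below uses hypothetical 2x2 counts, chosen only to fall in the reported range; they are not the study's actual data.

```python
# Hedged sketch: sensitivity/specificity with Wilson 95% CIs.
# The 2x2 counts below are hypothetical, not taken from the paper.
import math

def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

tp, fn, tn, fp = 58, 1, 30, 0               # hypothetical counts vs. microscopy
sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity {sens:.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {spec:.1%}, 95% CI {wilson_ci(tn, tn + fp)}")
```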
Baars, B J
1999-07-01
A common confound between consciousness and attention makes it difficult to think clearly about recent advances in the understanding of the visual brain. Visual consciousness involves phenomenal experience of the visual world, but visual attention is more plausibly treated as a function that selects and maintains the selection of potential conscious contents, often unconsciously. In the same sense, eye movements select conscious visual events, which are not the same as conscious visual experience. According to common sense, visual experience is consciousness, and selective processes are labeled as attention. The distinction is reflected in very different behavioral measures and in very different brain anatomy and physiology. Visual consciousness tends to be associated with the "what" stream of visual feature neurons in the ventral temporal lobe. In contrast, attentional selection and maintenance are mediated by other brain regions, ranging from superior colliculi to thalamus, prefrontal cortex, and anterior cingulate. The author applied the common-sense distinction between attention and consciousness to the theoretical positions of M. I. Posner (1992, 1994) and D. LaBerge (1997, 1998) to show how it helps to clarify the evidence. He concluded that clarity of thought is served by calling a thing by its proper name.
Intragroup Emotions: Physiological Linkage and Social Presence.
Järvelä, Simo; Kätsyri, Jari; Ravaja, Niklas; Chanel, Guillaume; Henttonen, Pentti
2016-01-01
We investigated how technologically mediating two different components of emotion (communicative expression and physiological state) to group members affects physiological linkage and self-reported feelings in a small group during video viewing. In different conditions the availability of second screen text chat (communicative expression) and visualization of group level physiological heart rates and their dyadic linkage (physiology) was varied. Within this four-person group two participants formed a physically co-located dyad and the other two were individually situated in two separate rooms. We found that text chat always increased heart rate synchrony but HR visualization only with non-co-located dyads. We also found that physiological linkage was strongly connected to self-reported social presence. The results encourage further exploration of the possibilities of sharing group members' physiological components of emotion by technological means to enhance mediated communication and strengthen social presence.
Look At That! Video Chat and Joint Visual Attention Development Among Babies and Toddlers.
McClure, Elisabeth R; Chentsova-Dutton, Yulia E; Holochwost, Steven J; Parrott, W G; Barr, Rachel
2018-01-01
Although many relatives use video chat to keep in touch with toddlers, key features of adult-toddler interaction like joint visual attention (JVA) may be compromised in this context. In this study, 25 families with a child between 6 and 24 months were observed using video chat at home with geographically separated grandparents. We define two types of screen-mediated JVA (across- and within-screen) and report age-related increases in the babies' across-screen JVA initiations, and that family JVA usage was positively related to babies' overall attention during video calls. Babies today are immersed in a digital world where formative relationships are often mediated by a screen. Implications for both infant social development and developmental research are discussed. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Reality check: the role of realism in stress reduction using media technology.
de Kort, Y A W; Ijsselsteijn, W A
2006-04-01
There is a growing interest in the use of virtual and other mediated environments for therapeutic purposes. However, in the domain of restorative environments, virtual reality (VR) technology has hardly been used. Here the tendency has been to use mediated real environments, striving for maximum visual realism. This use of photographic material is mainly based on research on aesthetic judgments that has demonstrated the validity of this type of simulation as a representation of real environments. Thus, restoration therapy is developing under the untested assumption that photorealistic images have the optimal level of realism, while in therapeutic applications 'experiential realism' seems to be the key rather than visual realism. The present paper discusses this contrast and briefly describes data from three studies aimed at exploring the importance and meaning of realism in the context of restorative environments.
Convective exosome-tracing microfluidics for analysis of cell-non-autonomous neurogenesis.
Oh, Hyun Jeong; Shin, Yoojin; Chung, Seok; Hwang, Do Won; Lee, Dong Soo
2017-01-01
Exosomes delivering neurogenic microRNA (miRNA) can induce an efficient differentiation process during neurogenesis. A microfluidic system capable of visualizing exosomal behavior such as secretion, migration, and uptake of individual exosomes can serve as a robust technique for understanding exosome-mediated changes in cellular behavior. Here, we developed an exosome-tracing microfluidic system to visualize exosomal transport carrying neurogenic miRNA from leading to neighboring cells, and found a new mode of exosome-mediated cell-non-autonomous neurogenesis. miR-193a facilitated neurogenesis in F11 cells by blocking proliferation-related target genes. In addition, time-lapse live-cell imaging using microfluidics visualized the convective transport of exosomes from differentiated to undifferentiated cells. Individual exosomes containing miR-193a from differentiated donor cells were taken up by undifferentiated cells, leading them to neurogenesis. Induction of anti-miR-193a was sufficient to block neurogenesis in F11 cells. Inhibition of exosomal production by manumycin-A and treatment with anti-miR-193a in the differentiated donor cells failed to induce neurogenesis in undifferentiated recipient cells. These findings indicate that exosomes of neural progenitors, and the neurogenic miRNA within them, propagate cell-non-autonomous differentiation to neighboring progenitors, delineating the role of exosomes in mediating neurogenesis across a population of homologous neural progenitor cells. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mathematics ability and related skills in preschoolers born very preterm.
Hasler, Holly M; Akshoomoff, Natacha
2017-12-12
Children born very preterm (VPT) are at risk for academic, behavioral, and/or emotional problems. Mathematics is a particular weakness and better understanding of the relationship between preterm birth and early mathematics ability is needed, particularly as early as possible to aid in early intervention. Preschoolers born VPT (n = 58) and those born full term (FT; n = 29) were administered a large battery of measures within 6 months of beginning kindergarten. A multiple-mediation model was utilized to characterize the difference in skills underlying mathematics ability between groups. Children born VPT performed significantly worse than FT-born children on a measure of mathematics ability as well as full-scale IQ, verbal skills, visual-motor integration, phonological awareness, phonological working memory, motor skills, and executive functioning. Mathematics was significantly correlated with verbal skills, visual-motor integration, phonological processing, and motor skills across both groups. When entered into the mediation model, verbal skills, visual-motor integration, and phonological awareness were significant mediators of the group differences. This analysis provides insights into the pre-academic skills that are weak in preschoolers born VPT and their relationship to mathematics. It is important to identify children who will have difficulties as early as possible, particularly for VPT children who are at higher risk for academic difficulties. Therefore, this model may be used in evaluating VPT children for emerging difficulties as well as an indicator that if other weaknesses are found, an assessment of mathematics should be conducted.
Algorithms and Sensors for Small Robot Path Following
NASA Technical Reports Server (NTRS)
Hogg, Robert W.; Rankin, Arturo L.; Roumeliotis, Stergios I.; McHenry, Michael C.; Helmick, Daniel M.; Bergh, Charles F.; Matthies, Larry
2002-01-01
Tracked mobile robots in the 20 kg size class are under development for applications in urban reconnaissance. For efficient deployment, it is desirable for teams of robots to be able to automatically execute path following behaviors, with one or more followers tracking the path taken by a leader. The key challenges to enabling such a capability are (1) to develop sensor packages for such small robots that can accurately determine the path of the leader and (2) to develop path following algorithms for the subsequent robots. To date, we have integrated gyros, accelerometers, compass/inclinometers, odometry, and differential GPS into an effective sensing package. This paper describes the sensor package, sensor processing algorithm, and path tracking algorithm we have developed for the leader/follower problem in small robots and shows the result of performance characterization of the system. We also document pragmatic lessons learned about design, construction, and electromagnetic interference issues particular to the performance of state sensors on small robots.
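A minimal illustration of how odometry and gyro data can be combined into a pose estimate for leader/follower path tracking is sketched below; the propagation step and the toy motion log are generic assumptions, not the paper's actual sensor-processing algorithm.

```python
# Hedged sketch: planar dead reckoning from wheel odometry and a gyro, the kind of
# state estimate a follower robot could use to retrace a leader's logged path.
import math

def propagate(pose, d_dist, d_heading):
    """pose = (x, y, theta); d_dist from odometry [m]; d_heading from gyro [rad]."""
    x, y, theta = pose
    theta_mid = theta + 0.5 * d_heading          # integrate along the mid heading
    x += d_dist * math.cos(theta_mid)
    y += d_dist * math.sin(theta_mid)
    return (x, y, theta + d_heading)

pose = (0.0, 0.0, 0.0)
for d_dist, d_heading in [(0.10, 0.00), (0.10, 0.05), (0.10, 0.05)]:  # toy log
    pose = propagate(pose, d_dist, d_heading)
print(pose)
```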
Terrain Model Registration for Single Cycle Instrument Placement
NASA Technical Reports Server (NTRS)
Deans, Matthew; Kunz, Clay; Sargent, Randy; Pedersen, Liam
2003-01-01
This paper presents an efficient and robust method for registration of terrain models created using stereo vision on a planetary rover. Our approach projects two surface models into a virtual depth map, rendering the models as they would be seen from a single range sensor. Correspondence is established based on which points project to the same location in the virtual range sensor. A robust norm of the deviations in observed depth is used as the objective function, and the algorithm searches for the rigid transformation which minimizes the norm. An initial coarse search is done using rover pose information from odometry and orientation sensing. A fine search is done using Levenberg-Marquardt. Our method enables a planetary rover to keep track of designated science targets as it moves, and to hand off targets from one set of stereo cameras to another. These capabilities are essential for the rover to autonomously approach a science target and place an instrument in contact in a single command cycle.
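The core idea of minimizing a robust norm of residuals over a rigid transformation, seeded by an odometry-based initial guess, can be sketched with an off-the-shelf robust least-squares solver. The example below aligns two toy 2-D point sets with a Huber loss; scipy's trust-region solver stands in for the Levenberg-Marquardt refinement, and the paper's virtual-depth-map projection step is not reproduced.

```python
# Hedged sketch: robust rigid alignment of two toy point sets with a Huber loss.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
model_a = rng.uniform(-5, 5, size=(200, 2))                    # toy terrain points
theta_true, t_true = 0.05, np.array([0.30, -0.20])
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
model_b = model_a @ R.T + t_true + rng.normal(0, 0.02, model_a.shape)

def residuals(params):
    th, tx, ty = params
    Rp = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return ((model_a @ Rp.T + [tx, ty]) - model_b).ravel()

x0 = [0.0, 0.0, 0.0]                                           # e.g. seeded from odometry
fit = least_squares(residuals, x0, loss="huber", f_scale=0.1)  # robust norm of residuals
print("estimated (theta, tx, ty):", fit.x.round(3))
```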
Schmoll, Conrad; Khan, Ashraf; Aspinall, Peter; Goudie, Colin; Koay, Peter; Tendo, Christelle; Cameron, James; Roe, Jenny; Deary, Ian; Dhillon, Bal
2014-01-01
Melanopsin-expressing photosensitive retinal ganglion cells form a blue-light-sensitive non-visual system mediating diverse physiological effects including circadian entrainment and cognitive alertness. Reduced blue wavelength retinal illumination through cataract formation is thought to blunt these responses while cataract surgery and intraocular lens (IOL) implantation have been shown to have beneficial effects on sleep and cognition. We aimed to use the reaction time (RT) task and the Epworth Sleepiness Score (ESS) as a validated objective platform to compare non-visual benefits of UV- and blue-blocking IOLs. Patients were prospectively randomised to receive either a UV- or blue-blocking IOL, performing an RT test and ESS questionnaire before and after surgery. Optical blurring at the second test controlled for visual improvement. Non-operative age-matched controls were recruited for comparison. 80 participants completed the study. Those undergoing first-eye phacoemulsification demonstrated significant improvements in RT over control (p=0.001) and second-eye surgery patients (p=0.03). Moreover, reduced daytime sleepiness was measured by ESS for the first-eye surgery group (p=0.008) but not for the second-eye group (p=0.09). Choice of UV- or blue-blocking IOL made no significant difference to magnitude of cognitive improvement (p=0.272). Phacoemulsification, particularly first-eye surgery, has a strong positive effect on cognition and daytime alertness, regardless of IOL type.
Swenor, Bonnielin K; Bandeen-Roche, Karen; Muñoz, Beatriz; West, Sheila K
2014-08-01
To determine whether performance speeds mediate the association between visual impairment and self-reported mobility disability over an 8-year period. Longitudinal analysis. Salisbury, Maryland. Salisbury Eye Evaluation Study participants aged 65 and older (N=2,520). Visual impairment was defined as best-corrected visual acuity worse than 20/40 in the better-seeing eye or visual field less than 20°. Self-reported mobility disability on three tasks was assessed: walking up stairs, walking down stairs, and walking 150 feet. Performance speed on three similar tasks was measured: walking up steps (steps/s), walking down steps (steps/s), and walking 4 m (m/s). For each year of observation, the odds of reporting mobility disability was significantly greater for participants who were visually impaired (VI) than for those who were not (NVI) (odds ratio (OR) difficulty walking up steps=1.58, 95% confidence interval (CI)=1.32-1.89; OR difficulty walking down steps=1.90, 95% CI=1.59-2.28; OR difficulty walking 150 feet=2.11, 95% CI=1.77-2.51). Once performance speed on a similar mobility task was included in the models, VI participants were no longer more likely to report mobility disability than those who were NVI (OR difficulty walking up steps=0.84, 95% CI=0.65-1.11; OR difficulty walking down steps=0.96, 95% CI=0.74-1.24; OR difficulty walking 150 feet=1.22, 95% CI=0.98-1.50). Slower performance speed in VI individuals largely accounted for the difference in the odds of reporting mobility disability, suggesting that VI older adults walk slower and are therefore more likely to report mobility disability than those who are NVI. Improving mobility performance in older adults with visual impairment may minimize the perception of mobility disability. © 2014, Copyright the Authors Journal compilation © 2014, The American Geriatrics Society.
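The mediation logic above, that the odds ratio for visual impairment should attenuate once performance speed enters the model, can be illustrated with two logistic regressions. The sketch below uses simulated data, not the Salisbury Eye Evaluation data, and the effect sizes are arbitrary assumptions.

```python
# Hedged sketch: OR attenuation when a mediator (walking speed) is added to the model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
vi = rng.binomial(1, 0.2, n)                       # visual impairment indicator
speed = 1.0 - 0.3 * vi + rng.normal(0, 0.2, n)     # VI slows walking speed (m/s)
logit_p = -1.0 + 0.2 * vi - 2.0 * speed            # disability depends mainly on speed
disab = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

m1 = sm.Logit(disab, sm.add_constant(np.column_stack([vi]))).fit(disp=0)
m2 = sm.Logit(disab, sm.add_constant(np.column_stack([vi, speed]))).fit(disp=0)
print(f"OR for VI, speed not in model: {np.exp(m1.params[1]):.2f}")
print(f"OR for VI, adjusted for speed: {np.exp(m2.params[1]):.2f}")
```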
de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal
2018-06-01
Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression than the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results can be well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also be recognized on visual information alone.
Perceived visual speed constrained by image segmentation
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
Functional correlates of musical and visual ability in frontotemporal dementia.
Miller, B L; Boone, K; Cummings, J L; Read, S L; Mishkin, F
2000-05-01
The emergence of new skills in the setting of dementia suggests that loss of function in one brain area can release new functions elsewhere. We aimed to characterise 12 patients with frontotemporal dementia (FTD) who acquired, or sustained, new musical or visual abilities despite progression of their dementia. Twelve patients with FTD who acquired or maintained musical or artistic ability were compared with 46 patients with FTD in whom new or sustained ability was absent. The group with musical or visual ability performed better on visual, but worse on verbal tasks than did the other patients with FTD. Nine had asymmetrical left anterior dysfunction. Nine showed the temporal lobe variant of FTD. Loss of function in the left anterior temporal lobe may lead to facilitation of artistic or musical skills. Patients with the left-sided temporal lobe variant of FTD offer an unexpected window into the neurological mediation of visual and musical talents.
Theta coupling between V4 and prefrontal cortex predicts visual short-term memory performance.
Liebe, Stefanie; Hoerzer, Gregor M; Logothetis, Nikos K; Rainer, Gregor
2012-01-29
Short-term memory requires communication between multiple brain regions that collectively mediate the encoding and maintenance of sensory information. It has been suggested that oscillatory synchronization underlies intercortical communication. Yet, whether and how distant cortical areas cooperate during visual memory remains elusive. We examined neural interactions between visual area V4 and the lateral prefrontal cortex using simultaneous local field potential (LFP) recordings and single-unit activity (SUA) in monkeys performing a visual short-term memory task. During the memory period, we observed enhanced between-area phase synchronization in theta frequencies (3-9 Hz) of LFPs together with elevated phase locking of SUA to theta oscillations across regions. In addition, we found that the strength of intercortical locking was predictive of the animals' behavioral performance. This suggests that theta-band synchronization coordinates action potential communication between V4 and prefrontal cortex that may contribute to the maintenance of visual short-term memories.
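Between-area phase synchronization in a frequency band is often quantified with a phase-locking value computed from band-pass filtered, Hilbert-transformed signals. The sketch below illustrates that generic estimator on toy data; it is not the specific measure or data of the study.

```python
# Hedged sketch: phase-locking value (PLV) between two theta-filtered channels.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(3, 9)):
    b, a = butter(3, [band[0], band[1]], btype="band", fs=fs)
    px = np.angle(hilbert(filtfilt(b, a, x)))
    py = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

fs = 250.0
t = np.arange(0, 8, 1 / fs)
common = np.sin(2 * np.pi * 6 * t)                     # shared 6 Hz component
lfp_v4 = common + 0.5 * np.random.randn(t.size)
lfp_pfc = np.roll(common, 20) + 0.5 * np.random.randn(t.size)
print("theta PLV:", round(plv(lfp_v4, lfp_pfc, fs), 3))
```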
Visual search performance among persons with schizophrenia as a function of target eccentricity.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2010-03-01
The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric and their performance was more similar to healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. Copyright 2010 APA, all rights reserved
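Analyses of this kind often summarize search efficiency as a slope of response time against display size, estimated separately by condition or eccentricity bin. The sketch below fits such slopes to simulated response times; the numbers are purely illustrative.

```python
# Hedged sketch: search slopes (ms per item) from simulated response times.
import numpy as np

rng = np.random.default_rng(4)
set_sizes = np.repeat([4, 8, 12, 16], 30)
rt_near = 600 + 25 * set_sizes + rng.normal(0, 60, set_sizes.size)   # ms, toy data
rt_far = 650 + 45 * set_sizes + rng.normal(0, 60, set_sizes.size)

slope_near = np.polyfit(set_sizes, rt_near, 1)[0]
slope_far = np.polyfit(set_sizes, rt_far, 1)[0]
print(f"search slope, central targets: {slope_near:.1f} ms/item")
print(f"search slope, eccentric targets: {slope_far:.1f} ms/item")
```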
Aminergic neuromodulation of associative visual learning in harnessed honey bees.
Mancini, Nino; Giurfa, Martin; Sandoz, Jean-Christophe; Avarguès-Weber, Aurore
2018-05-21
The honey bee Apis mellifera is a major insect model for studying visual cognition. Free-flying honey bees learn to associate different visual cues with a sucrose reward and may deploy sophisticated cognitive strategies to this end. Yet, the neural bases of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but training them to respond appetitively to visual stimuli paired with sucrose reward is difficult. Here we succeeded in coupling visual conditioning in harnessed bees with pharmacological analyses on the role of octopamine (OA), dopamine (DA) and serotonin (5-HT) in visual learning. We also studied if and how these biogenic amines modulate sucrose responsiveness and phototaxis behaviour as intact reward and visual perception are essential prerequisites for appetitive visual learning. Our results suggest that both octopaminergic and dopaminergic signaling mediate either the appetitive sucrose signaling or the association between color and sucrose reward in the bee brain. Enhancing and inhibiting serotonergic signaling both compromised learning performances, probably via an impairment of visual perception. We thus provide a first analysis of the role of aminergic signaling in visual learning and retention in the honey bee and discuss further research trends necessary to understand the neural bases of visual cognition in this insect. Copyright © 2018 Elsevier Inc. All rights reserved.
Visual imagery and functional connectivity in blindness: a single-case study
Boucard, Christine C.; Rauschecker, Josef P.; Neufang, Susanne; Berthele, Achim; Doll, Anselm; Manoliu, Andrej; Riedl, Valentin; Sorg, Christian; Wohlschläger, Afra; Mühlau, Mark
2016-01-01
We present a case report on visual brain plasticity after total blindness acquired in adulthood. SH lost her sight when she was 27. Despite having been totally blind for 43 years, she reported relying strongly on her vivid visual imagery. Three-Tesla magnetic resonance imaging (MRI) of SH and age-matched controls was performed. The MRI sequence included anatomical MRI, resting-state functional MRI, and task-related functional MRI where SH was instructed to imagine colours, faces, and motion. Compared to controls, voxel-based analysis revealed white matter loss along SH's visual pathway as well as grey matter atrophy in the calcarine sulci. Yet we demonstrated activation in visual areas, including V1, using functional MRI. Of the four identified visual resting-state networks, none showed alterations in spatial extent; hence, SH's preserved visual imagery seems to be mediated by intrinsic brain networks of normal extent. Time courses of two of these networks showed increased correlation with that of the inferior posterior default mode network, which may reflect adaptive changes supporting SH's strong internal visual representations. Overall, our findings demonstrate that conscious visual experience is possible even after years of absence of extrinsic input. PMID:25690326
Hongwarittorrn, Irin; Chaichanawongsaroj, Nuntaree; Laiwattanapaisal, Wanida
2017-12-01
A distance-based paper analytical device (dPAD) for loop-mediated isothermal amplification (LAMP) detection was proposed. This approach relied on visual detection of the length of colour developed on the dPAD, providing semi-quantitative determination of the initial amount of genomic DNA. In this communication, E. coli DNA was chosen as the template DNA for the LAMP reaction. In accordance with this principle, polyethylenimine (PEI), a strong cationic polymer, was immobilized in the hydrophilic channel of the paper device. Hydroxynaphthol blue (HNB), a colourimetric indicator for monitoring the change of magnesium ion concentration in the LAMP reaction, was used to react with the immobilized PEI. The positive charges of PEI react with the negative charges of free HNB in the LAMP reaction, producing a blue colour deposit on the paper device. Consequently, a visually apparent distance developed within 5 min, and its length correlated with the amount of DNA in the sample. The distance-based PAD for the visual detection of the LAMP reaction could quantify the initial concentration of genomic DNA as low as 4.14 × 10^3 copies µL^-1. This distance-based visual semi-quantitative platform is a suitable choice of LAMP detection method, particularly in resource-limited settings, because of the advantages of low cost, simple fabrication and operation, disposability and portable detection of the dPAD device. Copyright © 2017 Elsevier B.V. All rights reserved.
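Semi-quantitative distance-based readouts are usually interpreted through a calibration of developed distance against log template copies. The sketch below fits and inverts such a line; the calibration values are hypothetical placeholders, not measurements from the paper.

```python
# Hedged sketch: calibration line of colour distance vs. log10 copies, then inversion.
import numpy as np

log_copies = np.log10([4.14e3, 4.14e4, 4.14e5, 4.14e6])   # standards (copies/µL)
distance_mm = np.array([5.0, 9.5, 14.0, 18.5])            # hypothetical readings

slope, intercept = np.polyfit(log_copies, distance_mm, 1)
unknown_mm = 12.0                                          # reading for an unknown sample
est_log = (unknown_mm - intercept) / slope
print(f"estimated template ~= 10^{est_log:.2f} copies/µL")
```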
Visual-Vestibular Conflict Detection Depends on Fixation.
Garzorz, Isabelle T; MacNeilage, Paul R
2017-09-25
Visual and vestibular signals are the primary sources of sensory information for self-motion. Conflict among these signals can be seriously debilitating, resulting in vertigo [1], inappropriate postural responses [2], and motion, simulator, or cyber sickness [3-8]. Despite this significance, the mechanisms mediating conflict detection are poorly understood. Here we model conflict detection simply as crossmodal discrimination with benchmark performance limited by variabilities of the signals being compared. In a series of psychophysical experiments conducted in a virtual reality motion simulator, we measure these variabilities and assess conflict detection relative to this benchmark. We also examine the impact of eye movements on visual-vestibular conflict detection. In one condition, observers fixate a point that is stationary in the simulated visual environment by rotating the eyes opposite head rotation, thereby nulling retinal image motion. In another condition, eye movement is artificially minimized via fixation of a head-fixed fixation point, thereby maximizing retinal image motion. Visual-vestibular integration performance is also measured, similar to previous studies [9-12]. We observe that there is a tradeoff between integration and conflict detection that is mediated by eye movements. Minimizing eye movements by fixating a head-fixed target leads to optimal integration but highly impaired conflict detection. Minimizing retinal motion by fixating a scene-fixed target improves conflict detection at the cost of impaired integration performance. The common tendency to fixate scene-fixed targets during self-motion [13] may indicate that conflict detection is typically a higher priority than the increase in precision of self-motion estimation that is obtained through integration. Copyright © 2017 Elsevier Ltd. All rights reserved.
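The benchmark logic above, with independent Gaussian noise on the visual and vestibular estimates, gives simple closed-form predictions for both optimal integration and crossmodal (conflict) discrimination. The sketch below evaluates those formulas for illustrative single-cue thresholds, which are assumptions rather than the measured variabilities.

```python
# Hedged sketch: predicted integration and conflict-detection benchmarks from
# single-cue variabilities, assuming independent Gaussian noise.
import math

sigma_vis, sigma_vest = 2.0, 3.0            # illustrative single-cue thresholds (deg)
sigma_integrated = math.sqrt((sigma_vis**2 * sigma_vest**2) /
                             (sigma_vis**2 + sigma_vest**2))   # optimal fusion
sigma_conflict = math.sqrt(sigma_vis**2 + sigma_vest**2)        # crossmodal comparison
print(f"predicted integrated threshold: {sigma_integrated:.2f}")
print(f"predicted conflict-detection benchmark: {sigma_conflict:.2f}")
```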
Mars Science Laboratory Mission and Science Investigation
NASA Astrophysics Data System (ADS)
Grotzinger, John P.; Crisp, Joy; Vasavada, Ashwin R.; Anderson, Robert C.; Baker, Charles J.; Barry, Robert; Blake, David F.; Conrad, Pamela; Edgett, Kenneth S.; Ferdowski, Bobak; Gellert, Ralf; Gilbert, John B.; Golombek, Matt; Gómez-Elvira, Javier; Hassler, Donald M.; Jandura, Louise; Litvak, Maxim; Mahaffy, Paul; Maki, Justin; Meyer, Michael; Malin, Michael C.; Mitrofanov, Igor; Simmonds, John J.; Vaniman, David; Welch, Richard V.; Wiens, Roger C.
2012-09-01
Scheduled to land in August of 2012, the Mars Science Laboratory (MSL) Mission was initiated to explore the habitability of Mars. This includes both modern environments as well as ancient environments recorded by the stratigraphic rock record preserved at the Gale crater landing site. The Curiosity rover has a designed lifetime of at least one Mars year (~23 months), and drive capability of at least 20 km. Curiosity's science payload was specifically assembled to assess habitability and includes a gas chromatograph-mass spectrometer and gas analyzer that will search for organic carbon in rocks, regolith fines, and the atmosphere (SAM instrument); an x-ray diffractometer that will determine mineralogical diversity (CheMin instrument); focusable cameras that can image landscapes and rock/regolith textures in natural color (MAHLI, MARDI, and Mastcam instruments); an alpha-particle x-ray spectrometer for in situ determination of rock and soil chemistry (APXS instrument); a laser-induced breakdown spectrometer to remotely sense the chemical composition of rocks and minerals (ChemCam instrument); an active neutron spectrometer designed to search for water in rocks/regolith (DAN instrument); a weather station to measure modern-day environmental variables (REMS instrument); and a sensor designed for continuous monitoring of background solar and cosmic radiation (RAD instrument). The various payload elements will work together to detect and study potential sampling targets with remote and in situ measurements; to acquire samples of rock, soil, and atmosphere and analyze them in onboard analytical instruments; and to observe the environment around the rover. The 155-km diameter Gale crater was chosen as Curiosity's field site based on several attributes: an interior mountain of ancient flat-lying strata extending almost 5 km above the elevation of the landing site; the lower few hundred meters of the mountain show a progression with relative age from clay-bearing to sulfate-bearing strata, separated by an unconformity from overlying likely anhydrous strata; the landing ellipse is characterized by a mixture of alluvial fan and high thermal inertia/high albedo stratified deposits; and a number of stratigraphically/geomorphically distinct fluvial features. Samples of the crater wall and rim rock, and more recent to currently active surface materials also may be studied. Gale has a well-defined regional context and strong evidence for a progression through multiple potentially habitable environments. These environments are represented by a stratigraphic record of extraordinary extent, and ensure preservation of a rich record of the environmental history of early Mars. The interior mountain of Gale Crater has been informally designated as Mount Sharp, in honor of the pioneering planetary scientist Robert Sharp. The major subsystems of the MSL Project consist of a single rover (with science payload), a Multi-Mission Radioisotope Thermoelectric Generator, an Earth-Mars cruise stage, an entry, descent, and landing system, a launch vehicle, and the mission operations and ground data systems. The primary communication path for downlink is relay through the Mars Reconnaissance Orbiter. The primary path for uplink to the rover is Direct-from-Earth. The secondary paths for downlink are Direct-to-Earth and relay through the Mars Odyssey orbiter.
Curiosity is a scaled version of the 6-wheel drive, 4-wheel steering, rocker bogie system from the Mars Exploration Rovers (MER) Spirit and Opportunity and the Mars Pathfinder Sojourner. Like Spirit and Opportunity, Curiosity offers three primary modes of navigation: blind-drive, visual odometry, and visual odometry with hazard avoidance. Terrain maps created from HiRISE (High Resolution Imaging Science Experiment) and other remote sensing data were used to conduct simulated driving with Curiosity in these various modes, and allowed selection of the Gale crater landing site, which requires climbing the base of a mountain to achieve the primary science goals. The Sample Acquisition, Processing, and Handling (SA/SPaH) subsystem is responsible for the acquisition of rock and soil samples from the Martian surface and the processing of these samples into fine particles that are then distributed to the analytical science instruments. The SA/SPaH subsystem is also responsible for the placement of the two contact instruments (APXS, MAHLI) on rock and soil targets. SA/SPaH consists of a robotic arm and turret-mounted devices on the end of the arm, which include a drill, brush, soil scoop, sample processing device, and the mechanical and electrical interfaces to the two contact science instruments. SA/SPaH also includes drill bit boxes, the organic check material, and an observation tray, which are all mounted on the front of the rover, and inlet cover mechanisms that are placed over the SAM and CheMin solid sample inlet tubes on the rover top deck.
Gholami, Tahereh; Pahlavian, Ahmad Heidari; Akbarzadeh, Mahdi; Motamedzade, Majid; Moghaddam, Rashid Heidari
2016-01-01
This study examined the hypothesis that burnout syndrome mediates the effect of psychosocial risk factors on the intensity of musculoskeletal disorders (MSDs) among hospital nurses. The sample was composed of 415 nurses from various wards across five hospitals of Iran's Hamedan University of Medical Sciences. Data were collected through three questionnaires: the Job Content Questionnaire, the Maslach Burnout Inventory, and a visual analogue scale. Results of structural equation modeling with a mediating effect showed that psychosocial risk factors were significantly related to changes in burnout, which in turn affected the intensity of MSDs.
Intragroup Emotions: Physiological Linkage and Social Presence
Järvelä, Simo; Kätsyri, Jari; Ravaja, Niklas; Chanel, Guillaume; Henttonen, Pentti
2016-01-01
We investigated how technologically mediating two different components of emotion (communicative expression and physiological state) to group members affects physiological linkage and self-reported feelings in a small group during video viewing. In different conditions the availability of second-screen text chat (communicative expression) and visualization of group-level physiological heart rates and their dyadic linkage (physiology) was varied. Within this four-person group, two participants formed a physically co-located dyad and the other two were individually situated in two separate rooms. We found that text chat always increased heart rate synchrony, whereas HR visualization did so only for non-co-located dyads. We also found that physiological linkage was strongly connected to self-reported social presence. The results encourage further exploration of the possibilities of sharing group members' physiological components of emotion by technological means to enhance mediated communication and strengthen social presence. PMID:26903913
Ferber, Susanne; Emrich, Stephen M
2007-03-01
Segregation and feature binding are essential to the perception and awareness of objects in a visual scene. When a fragmented line-drawing of an object moves relative to a background of randomly oriented lines, the previously hidden object is segregated from the background and consequently enters awareness. Interestingly, in such shape-from-motion displays, the percept of the object persists briefly when the motion stops, suggesting that the segregated and bound representation of the object is maintained in awareness. Here, we tested whether this persistence effect is mediated by capacity-limited working-memory processes, or by the amount of object-related information available. The experiments demonstrate that persistence is affected mainly by the proportion of object information available and is independent of working-memory limits. We suggest that this persistence effect can be seen as evidence for an intermediate, form-based memory store mediating between sensory and working memory.
Davies, Patrick T; Coe, Jesse L; Hentges, Rochelle F; Sturge-Apple, Melissa L; van der Kloet, Erika
2018-03-01
This study examined the transactional interplay among children's negative family representations, visual processing of negative emotions, and externalizing symptoms in a sample of 243 preschool children (M age = 4.60 years). Children participated in three annual measurement occasions. Cross-lagged autoregressive models were conducted with multimethod, multi-informant data to identify mediational pathways. Consistent with schema-based top-down models, negative family representations were associated with attention to negative faces in an eye-tracking task and their externalizing symptoms. Children's negative representations of family relationships specifically predicted decreases in their attention to negative emotions, which, in turn, was associated with subsequent increases in their externalizing symptoms. Follow-up analyses indicated that the mediational role of diminished attention to negative emotions was particularly pronounced for angry faces. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Weighing the evidence for a dorsal processing bias under continuous flash suppression.
Ludwig, Karin; Hesselmann, Guido
2015-09-01
With the introduction of continuous flash suppression (CFS) as a method to render stimuli invisible and study unconscious visual processing, a novel hypothesis has gained popularity. It states that processes typically ascribed to the dorsal visual stream can escape CFS and remain functional, while ventral stream processes are suppressed when stimuli are invisible under CFS. This notion of a CFS-specific "dorsal processing bias" has been argued to be in line with core characteristics of the influential dual-stream hypothesis of visual processing, which proposes a dissociation between dorsally mediated vision-for-action and ventrally mediated vision-for-perception. Here, we provide an overview of neuroimaging and behavioral studies that either examine this dorsal processing bias or base their conclusions on it. We show that both evidence for preserved ventral processing and evidence for absent dorsal processing can be found in studies using CFS. To reconcile the diverging results, differences in the paradigms and their effects are worthy of future research. We conclude that, given the current level of information, a dorsal processing bias under CFS cannot be universally assumed. Copyright © 2014 Elsevier Inc. All rights reserved.
Effect of organizational strategy on visual memory in patients with schizophrenia.
Kim, Myung-Sun; Namgoong, Yoon; Youn, Tak
2008-08-01
The aim of the present study was to examine how copy organization mediated immediate recall among patients with schizophrenia using the Rey-Osterrieth Complex Figure Test (ROCF). The Boston Qualitative Scoring System (BQSS) was applied for qualitative and quantitative analyses of ROCF performances. Subjects included 20 patients with schizophrenia and 20 age- and gender-matched healthy controls. During the copy condition, the schizophrenia group and the control group differed in fragmentation; during the immediate recall condition, the two groups differed in configural presence and planning; and during the delayed recall condition, they differed in several qualitative measurements, including configural presence, cluster presence/placement, detail presence/placement, fragmentation, planning, and neatness. The two groups also differed in several quantitative measurements, including immediate presence and accuracy, immediate retention, delayed retention, and organization. Although organizational strategies used during the copy condition mediated the difference between the two groups during the immediate recall condition, group also had a significant direct effect on immediate recall. Schizophrenia patients are deficient in visual memory, and a piecemeal approach to the figure and organizational deficit seem to be related to the visual memory deficit. But schizophrenia patients also appeared to have some memory problems, including retention and/or retrieval deficits.
Edwards, Jerri D; Ruva, Christine L; O'Brien, Jennifer L; Haley, Christine B; Lister, Jennifer J
2013-06-01
The purpose of these analyses was to examine mediators of the transfer of cognitive speed of processing training to improved everyday functional performance (J. D. Edwards, V. G. Wadley, D. E. Vance, D. L. Roenker, & K. K. Ball, 2005, The impact of speed of processing training on cognitive and everyday performance. Aging & Mental Health, 9, 262-271). Cognitive speed of processing and visual attention (as measured by the Useful Field of View Test; UFOV) were examined as mediators of training transfer. Secondary data analyses were conducted from the Staying Keen in Later Life (SKILL) study, a randomized cohort study including 126 community-dwelling adults 63 to 87 years of age. In the SKILL study, participants were randomized to an active control group or cognitive speed of processing training (SOPT), a nonverbal, computerized intervention involving perceptual practice of visual tasks. Prior analyses found significant effects of training as measured by the UFOV and Timed Instrumental Activities of Daily Living (TIADL) Tests. Results from the present analyses indicate that speed of processing for a divided attention task significantly mediated the effect of SOPT on everyday performance (e.g., TIADL) in a multiple mediation model accounting for 91% of the variance. These findings suggest that everyday functional improvements found from SOPT are directly attributable to improved UFOV performance, speed of processing for divided attention in particular. Targeting divided attention in cognitive interventions may be important to positively affect everyday functioning among older adults. PsycINFO Database Record (c) 2013 APA, all rights reserved.
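The transfer analysis rests on a product-of-coefficients mediation logic: the training effect on the mediator (path a) multiplied by the mediator's effect on the outcome (path b) gives the indirect effect. The minimal sketch below runs that calculation on synthetic data; the effect sizes, variable names, and single-mediator simplification are illustrative assumptions, not the SKILL data or the authors' full multiple mediation model.

```python
# Minimal single-mediator (product-of-coefficients) sketch on synthetic data;
# illustrative only, not the SKILL dataset or the published model.
import numpy as np

rng = np.random.default_rng(0)
n = 126                                      # sample size borrowed from the abstract
training = rng.integers(0, 2, n)             # 0 = active control, 1 = SOPT
ufov = -0.8 * training + rng.normal(size=n)  # mediator: UFOV divided-attention speed (lower = faster)
tiadl = 0.7 * ufov + 0.1 * training + rng.normal(size=n)  # outcome: everyday task time

def ols_slopes(y, predictors):
    """Return the non-intercept coefficients of y ~ predictors via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes(ufov, [training])[0]               # path a: training -> mediator
b, c_prime = ols_slopes(tiadl, [ufov, training])  # path b and direct effect c'
c = ols_slopes(tiadl, [training])[0]              # total effect c

print(f"indirect effect a*b = {a*b:.3f}, direct c' = {c_prime:.3f}, total c = {c:.3f}")
```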
Fu, Guanglei; Sanjay, Sharma T; Zhou, Wan; Brekken, Rolf A; Kirken, Robert A; Li, XiuJun
2018-05-01
The exploration of new physical and chemical properties of materials and their innovative application in different fields are of great importance to advance analytical chemistry, material science, and other important fields. Herein, we, for the first time, discovered the photothermal effect of an iron oxide nanoparticles (NPs)-mediated TMB (3,3',5,5'-tetramethylbenzidine)-H₂O₂ colorimetric system, and applied it toward the development of a new NP-mediated photothermal immunoassay platform for visual quantitative biomolecule detection using a thermometer as the signal reader. Using a sandwich-type proof-of-concept immunoassay, we found that the charge transfer complex of the iron oxide NPs-mediated one-electron oxidation product of TMB (oxidized TMB) exhibited not only color changes, but also a strong near-infrared (NIR) laser-driven photothermal effect. Hence, oxidized TMB was explored as a new sensitive photothermal probe to convert the immunoassay signal into heat through the near-infrared laser-driven photothermal effect, enabling simple photothermal immunoassay using a thermometer. Based on the new iron oxide NPs-mediated TMB-H₂O₂ photothermal immunoassay platform, prostate-specific antigen (PSA) as a model biomarker can be detected at a concentration as low as 1.0 ng·mL⁻¹ in normal human serum. The discovered photothermal effect of the colorimetric system and the developed new photothermal immunoassay platform open up a new horizon for affordable detection of disease biomarkers and have great potential for other important material and biomedical applications of interest.
Crawford, H J; Allen, S N
1983-12-01
To investigate the hypothesis that hypnosis has an enhancing effect on imagery processing, as mediated by hypnotic responsiveness and cognitive strategies, four experiments compared performance of low and high, or low, medium, and high, hypnotically responsive subjects in waking and hypnosis conditions on a successive visual memory discrimination task that required detecting differences between successively presented picture pairs in which one member of the pair was slightly altered. Consistently, hypnotically responsive individuals showed enhanced performance during hypnosis, whereas nonresponsive ones did not. Hypnotic responsiveness correlated .52 (p less than .001) with enhanced performance during hypnosis, but it was uncorrelated with waking performance (Experiment 3). Reaction time was not affected by hypnosis, although high hypnotizables were faster than lows in their responses (Experiments 1 and 2). Subjects reported enhanced imagery vividness on the self-report Vividness of Visual Imagery Questionnaire during hypnosis. The differential effect between lows and highs was in the anticipated direction but not significant (Experiments 1 and 2). As anticipated, hypnosis had no significant effect on a discrimination task that required determining whether there were differences between pairs of simultaneously presented pictures. Two cognitive strategies that appeared to mediate visual memory performance were reported: (a) detail strategy, which involved the memorization and rehearsal of individual details for memory, and (b) holistic strategy, which involved looking at and remembering the whole picture with accompanying imagery. Both lows and highs reported similar predominantly detail-oriented strategies during waking; only highs shifted to a significantly more holistic strategy during hypnosis. These findings suggest that high hypnotizables have a greater capacity for cognitive flexibility (Batting, 1979) than do lows. Results are discussed in terms of several theoretical approaches: Paivio's (1971) dual-coding theory and Craik and Tulving's (1975) depth of processing theory. Additional discussion is given to the question of whether hypnosis involves a shift in cerebral dominance, as reflected by the cognitive strategy changes and enhanced imagery processing.
Towards Understanding the Role of Colour Information in Scene Perception using Night Vision Device
2009-06-01
possessing a visual system much simplified from that of living birds, reptiles, and teleost (bony) fish, which are generally tetrachromatic (Bowmaker...Levkowitz and Herman (1992) speculated that the results might be limited to "blob" detection. A possible mediating factor may have been the size and...sharpness of the "blobs" used in their task. Mullen (1985) showed that the visual system is much more sensitive to the high spatial...
Lee, Jinwoo; Tong, Tiegang; Takemori, Hiroshi; Jefcoate, Colin
2015-06-15
In mouse steroidogenic cells the activation of cholesterol metabolism is mediated by steroidogenic acute regulatory protein (StAR). Here, we visualized a coordinated regulation of StAR transcription, splicing and post-transcriptional processing, which are synchronized by salt inducible kinase (SIK1) and CREB-regulated transcription coactivator (CRTC2). To detect primary RNA (pRNA), spliced primary RNA (Sp-RNA) and mRNA in single cells, we generated probe sets by using fluorescence in situ hybridization (FISH). These methods allowed us to address the nature of StAR gene expression and to visualize protein-nucleic acid interactions through direct detection. We show that SIK1 represses StAR expression in Y1 adrenal and MA10 testis cells through inhibition of processing mediated by CRTC2. Digital image analysis matches qPCR analyses of the total cell culture. Evidence is presented for spatially separate accumulation of StAR pRNA and Sp-RNA at the gene loci in the nucleus. These findings establish that cAMP, SIK and CRTC mediate StAR expression through activation of individual StAR gene loci. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Dscam2 mediates axonal tiling in the Drosophila visual system
Millard, S. Sean; Flanagan, John J.; Pappu, Kartik S.; Wu, Wei; Zipursky, S. Lawrence
2009-01-01
Sensory processing centres in both the vertebrate and the invertebrate brain are often organized into reiterated columns, thus facilitating an internal topographic representation of the external world. Cells within each column are arranged in a stereotyped fashion and form precise patterns of synaptic connections within discrete layers. These connections are largely confined to a single column, thereby preserving the spatial information from the periphery. Other neurons integrate this information by connecting to multiple columns. Restricting axons to columns is conceptually similar to tiling. Axons and dendrites of neighbouring neurons of the same class use tiling to form complete, yet non-overlapping, receptive fields [1-3]. It is thought that, at the molecular level, cell-surface proteins mediate tiling through contact-dependent repulsive interactions [1,2,4,5], but proteins serving this function have not yet been identified. Here we show that the immunoglobulin superfamily member Dscam2 restricts the connections formed by L1 lamina neurons to columns in the Drosophila visual system. Our data support a model in which Dscam2 homophilic interactions mediate repulsion between neurites of L1 cells in neighbouring columns. We propose that Dscam2 is a tiling receptor for L1 neurons. PMID:17554308
Ingram, James N; Howard, Ian S; Flanagan, J Randall; Wolpert, Daniel M
2011-09-01
Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
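The contrast drawn here is between a single-rate learner and a dual-rate (fast plus slow) learner, both updating internal states from trial-by-trial errors in a state-space form x_{n+1} = A·x_n + B·e_n. The sketch below simulates both cases; the retention (A) and learning-rate (B) values are assumed for illustration and are not the authors' fitted parameters.

```python
# Illustrative single-rate vs dual-rate state-space adaptation models;
# parameter values are assumed, not fitted to any dataset.
import numpy as np

def simulate(trials, perturbation, A, B):
    """Simulate x_{n+1} = A*x_n + B*e_n for one or more internal states.

    A, B hold one entry per state; the motor output is the summed state and
    the error e_n is the perturbation minus that output on each trial.
    """
    A, B = np.atleast_1d(A), np.atleast_1d(B)
    x = np.zeros_like(A, dtype=float)
    outputs = []
    for n in range(trials):
        output = x.sum()
        error = perturbation[n] - output
        x = A * x + B * error
        outputs.append(output)
    return np.array(outputs)

trials = 60
perturbation = np.ones(trials)  # constant force-field-like perturbation

single = simulate(trials, perturbation, A=[0.99], B=[0.20])
dual = simulate(trials, perturbation, A=[0.92, 0.996], B=[0.25, 0.02])  # fast + slow process

print("single-rate output after 10 trials:", round(float(single[10]), 3))
print("dual-rate output after 10 trials:  ", round(float(dual[10]), 3))
```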
Complex inhibitory microcircuitry regulates retinal signaling near visual threshold
Grimes, William N.; Zhang, Jun; Tian, Hua; Graydon, Cole W.; Hoon, Mrinalini; Rieke, Fred
2015-01-01
Neuronal microcircuits, small, localized signaling motifs involving two or more neurons, underlie signal processing and computation in the brain. Compartmentalized signaling within a neuron may enable it to participate in multiple, independent microcircuits. Each A17 amacrine cell in the mammalian retina contains within its dendrites hundreds of synaptic feedback microcircuits that operate independently to modulate feedforward signaling in the inner retina. Each of these microcircuits comprises a small (<1 μm) synaptic varicosity that typically receives one excitatory synapse from a presynaptic rod bipolar cell (RBC) and returns two reciprocal inhibitory synapses back onto the same RBC terminal. Feedback inhibition from the A17 sculpts the feedforward signal from the RBC to the AII, a critical component of the circuitry mediating night vision. Here, we show that the two inhibitory synapses from the A17 to the RBC express kinetically distinct populations of GABA receptors: rapidly activating GABA(A) receptors are enriched at one synapse while more slowly activating GABA(C) receptors are enriched at the other. Anatomical and electrophysiological data suggest that macromolecular complexes of voltage-gated (CaV) channels and Ca²⁺-activated K⁺ channels help to regulate GABA release from A17 varicosities and limit GABA(C) receptor activation under certain conditions. Finally, we find that selective elimination of A17-mediated feedback inhibition reduces the signal-to-noise ratio of responses to dim flashes recorded in the feedforward pathway (i.e., the AII amacrine cell). We conclude that A17-mediated feedback inhibition improves the signal-to-noise ratio of RBC-AII transmission near visual threshold, thereby improving visual sensitivity at night. PMID:25972578
van Boxtel, M P; ten Tusscher, M P; Metsemakers, J F; Willems, B; Jolles, J
2001-10-01
It is unknown to what extent the performance on the Stroop color-word test is affected by reduced visual function in older individuals. We tested the impact of common deficiencies in visual function (reduced distant and close acuity, reduced contrast sensitivity, and color weakness) on Stroop performance among 821 normal individuals aged 53 and older. After adjustment for age, sex, and educational level, low contrast sensitivity was associated with more time needed on card 1 (word naming), red/green color weakness with slower card 2 performance (color naming), and reduced distant acuity with slower performance on card 3 (interference). Half of the age-related variance in speed performance was shared with visual function. The actual impact of reduced visual function may be underestimated in this study when some of this age-related variance in Stroop performance is mediated by visual function decrements. It is suggested that reduced visual function has differential effects on Stroop performance, which need to be accounted for when the Stroop test is used both in research and in clinical settings. Stroop performance measured from older individuals with unknown visual status should be interpreted with caution.
NASA Astrophysics Data System (ADS)
Buck, Z.
2013-04-01
As we turn more and more to high-end computing to understand the Universe at cosmological scales, visualizations of simulations will take on a vital role as perceptual and cognitive tools. In collaboration with the Adler Planetarium and University of California High-Performance AstroComputing Center (UC-HiPACC), I am interested in better understanding the use of visualizations to mediate astronomy learning across formal and informal settings. The aspect of my research that I present here uses quantitative methods to investigate how learners are relying on color to interpret dark matter in a cosmology visualization. The concept of dark matter is vital to our current understanding of the Universe, and yet we do not know how to effectively present dark matter visually to support learning. I employ an alternative treatment post-test only experimental design, in which members of an equivalent sample are randomly assigned to one of three treatment groups, followed by treatment and a post-test. Results indicate significant correlation (p < .05) between the color of dark matter in the visualization and survey responses, implying that aesthetic variations like color can have a profound effect on audience interpretation of a cosmology visualization.
Sato, Makoto; Yasugi, Tetsuo; Minami, Yoshiaki; Miura, Takashi; Nagayama, Masaharu
2016-01-01
Notch-mediated lateral inhibition regulates binary cell fate choice, resulting in salt and pepper patterns during various developmental processes. However, how Notch signaling behaves in combination with other signaling systems remains elusive. The wave of differentiation in the Drosophila visual center or “proneural wave” accompanies Notch activity that is propagated without the formation of a salt and pepper pattern, implying that Notch does not form a feedback loop of lateral inhibition during this process. However, mathematical modeling and genetic analysis clearly showed that Notch-mediated lateral inhibition is implemented within the proneural wave. Because partial reduction in EGF signaling causes the formation of the salt and pepper pattern, it is most likely that EGF diffusion cancels salt and pepper pattern formation in silico and in vivo. Moreover, the combination of Notch-mediated lateral inhibition and EGF-mediated reaction diffusion enables a function of Notch signaling that regulates propagation of the wave of differentiation. PMID:27535937
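A toy version of the modeled interaction conveys the key point: nearest-neighbour (Notch-like) inhibition alone tends to drive neighbouring cells toward alternating, salt-and-pepper fates, while a diffusible (EGF-like) activator couples neighbours and can smooth that pattern out. The one-dimensional, Euler-integrated sketch below is illustrative only; the equations, parameter values, and whether a pattern actually emerges under them are assumptions, not the authors' published model.

```python
# Toy 1-D sketch: Notch-like lateral inhibition (variable N) combined with a
# diffusible EGF-like activator (variable E). Illustrative assumptions only;
# not the published model or its parameters.
import numpy as np

cells, steps, dt = 60, 4000, 0.01
rng = np.random.default_rng(1)
N = rng.uniform(0.9, 1.1, cells)   # Notch/Delta-like fate variable, one value per cell
E = np.zeros(cells)                # diffusible EGF-like activator
D = 0.5                            # diffusion coefficient; D = 0 removes the coupling term

def neighbour_mean(x):
    # mean of left and right neighbours on a ring of cells
    return 0.5 * (np.roll(x, 1) + np.roll(x, -1))

for _ in range(steps):
    inhibition = neighbour_mean(N)                # lateral inhibition from adjacent cells
    lap = np.roll(E, 1) - 2 * E + np.roll(E, -1)  # discrete Laplacian for diffusion
    dN = 1.0 / (1.0 + inhibition**2) - N + E      # production limited by neighbours, boosted by E
    dE = D * lap + 0.5 * N - E                    # E produced by N, diffusing, and decaying
    N += dt * dN
    E += dt * dE

# A rough proxy for pattern amplitude: high std suggests alternating fates,
# low std suggests the diffusible activator has smoothed the field.
print("pattern amplitude (std of N):", float(N.std()))
```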
Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming
2013-12-01
Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age=46.36±6.74) and 15 normal controls (mean age=40.87±9.33) participated in this study. The visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expression of Emotion were administered. Patients with schizophrenia had worse visual search performance than normal controls in both feature search and conjunction search, as well as worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness. However, this phenomenon was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social function and interpersonal relationships.
The role of ecological factors in shaping bat cone opsin evolution.
Gutierrez, Eduardo de A; Schott, Ryan K; Preston, Matthew W; Loureiro, Lívia O; Lim, Burton K; Chang, Belinda S W
2018-04-11
Bats represent one of the largest and most striking nocturnal mammalian radiations, exhibiting many visual system specializations for performance in light-limited environments. Despite representing the greatest ecological diversity and species richness in Chiroptera, Neotropical lineages have been undersampled in molecular studies, limiting the potential for identifying signatures of selection on visual genes associated with differences in bat ecology. Here, we investigated how diverse ecological pressures mediate long-term shifts in selection upon long-wavelength (Lws) and short-wavelength (Sws1) opsins, photosensitive cone pigments that form the basis of colour vision in most mammals, including bats. We used codon-based likelihood clade models to test whether ecological variables associated with reliance on visual information (e.g. echolocation ability and diet) or exposure to varying light environments (e.g. roosting behaviour and foraging habitat) mediated shifts in evolutionary rates in bat cone opsin genes. Using additional cone opsin sequences from newly sequenced eye transcriptomes of six Neotropical bat species, we found significant evidence for different ecological pressures influencing the evolution of the cone opsins. While Lws is evolving under significantly lower constraint in highly specialized high-duty cycle echolocating lineages, which have enhanced sonar ability to detect and track targets, variation in Sws1 constraint was significantly associated with foraging habitat, exhibiting elevated rates of evolution in species that forage among vegetation. This suggests that increased reliance on echolocation as well as the spectral environment experienced by foraging bats may differentially influence the evolution of different cone opsins. Our study demonstrates that different ecological variables may underlie contrasting evolutionary patterns in bat visual opsins, and highlights the suitability of clade models for testing ecological hypotheses of visual evolution. © 2018 The Author(s).
Lagas, Alice K.; Black, Joanna M.; Byblow, Winston D.; Fleming, Melanie K.; Goodman, Lucy K.; Kydd, Robert R.; Russell, Bruce R.; Stinear, Cathy M.; Thompson, Benjamin
2016-01-01
The selective serotonin reuptake inhibitor fluoxetine significantly enhances adult visual cortex plasticity in the rat. This effect is related to decreased gamma-aminobutyric acid (GABA)-mediated inhibition and identifies fluoxetine as a potential agent for enhancing plasticity in the adult human brain. We tested the hypothesis that fluoxetine would enhance visual perceptual learning of a motion direction discrimination (MDD) task in humans. We also investigated (1) the effect of fluoxetine on visual and motor cortex excitability and (2) the impact of increased GABA-mediated inhibition following a single dose of triazolam on post-training MDD task performance. Within a double-blind, placebo-controlled design, 20 healthy adult participants completed a 19-day course of fluoxetine (n = 10, 20 mg per day) or placebo (n = 10). Participants were trained on the MDD task over the final 5 days of fluoxetine administration. Accuracy for the trained MDD stimulus and an untrained MDD stimulus configuration was assessed before and after training, after triazolam and 1 week after triazolam. Motor and visual cortex excitability were measured using transcranial magnetic stimulation. Fluoxetine did not enhance the magnitude or rate of perceptual learning and full transfer of learning to the untrained stimulus was observed for both groups. After training was complete, triazolam had no effect on trained task performance but significantly impaired untrained task performance. No consistent effects of fluoxetine on cortical excitability were observed. The results do not support the hypothesis that fluoxetine can enhance learning in humans. However, the specific effect of triazolam on MDD task performance for the untrained stimulus suggests that learning and learning transfer rely on dissociable neural mechanisms. PMID:27807412
Hoshi, Eiji
2013-01-01
Action is often executed according to information provided by a visual signal. As this type of behavior integrates two distinct neural representations, perception and action, it has been thought that identification of the neural mechanisms underlying this process will yield deeper insights into the principles underpinning goal-directed behavior. Based on a framework derived from conditional visuomotor association, prior studies have identified neural mechanisms in the dorsal premotor cortex (PMd), dorsolateral prefrontal cortex (dlPFC), ventrolateral prefrontal cortex (vlPFC), and basal ganglia (BG). However, applications resting solely on this conceptualization encounter problems related to generalization and flexibility, essential processes in executive function, because the association mode involves a direct one-to-one mapping of each visual signal onto a particular action. To overcome this problem, we extend this conceptualization and postulate a more general framework, conditional visuo-goal association. According to this new framework, the visual signal identifies an abstract behavioral goal, and an action is subsequently selected and executed to meet this goal. Neuronal activity recorded from the four key areas of the brains of monkeys performing a task involving conditional visuo-goal association revealed three major mechanisms underlying this process. First, visual-object signals are represented primarily in the vlPFC and BG. Second, all four areas are involved in initially determining the goals based on the visual signals, with the PMd and dlPFC playing major roles in maintaining the salience of the goals. Third, the cortical areas play major roles in specifying action, whereas the role of the BG in this process is restrictive. These new lines of evidence reveal that the four areas involved in conditional visuomotor association contribute to goal-directed behavior mediated by conditional visuo-goal association in an area-dependent manner. PMID:24155692
Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana
2015-09-01
A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities. (c) 2015 APA, all rights reserved.
Goodale, M A; Murison, R C
1975-05-02
The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.
NASA Astrophysics Data System (ADS)
Garcia-Belmonte, Germà
2017-06-01
Spatial visualization is a well-established topic of education research that has helped improve science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments, and the image should be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking and scientific instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific and culturally favored time view. It is suggested that difficulties in interpreting time-varying signals are linked with the existing dual perception of conflicting time metaphors. The activation of specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I identify two ways of understanding time that are used in the different trajectories through which students pass. Interestingly, specific displaying instruments belonging to different cultural traditions incorporate contrasting time views. One of them sees time in terms of a dynamic metaphor consisting of a static observer looking at passing events. This is a general and widespread practice common in the contemporary mass culture, which lies behind the process of making sense of moving images usually visualized by means of movie shots. In contrast, scientific culture favored another way of time conceptualization (static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, as represented on the Cartesian plane, into displaying instruments. Both types of cultures, scientific and mass, are considered highly technological in the sense that complex instruments, apparatus or machines participate in their visual practices.
Iconic Factors and Language Word Order
ERIC Educational Resources Information Center
Moeser, Shannon Dawn
1975-01-01
College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)
Comparison of Multidimensional Decoding of Affect from Audio, Video and Audiovideo Recordings
ERIC Educational Resources Information Center
Berman, Harry J.; And Others
1976-01-01
Some encoders showed variations in feelings principally through visually mediated stimuli, others through the tone of the voice. These results are discussed in the context of quantitative versus qualitative differences among the communication channels. (Author/DEP)
Visual habit formation in monkeys with neurotoxic lesions of the ventrocaudal neostriatum
Fernandez-Ruiz, Juan; Wang, Jin; Aigner, Thomas G.; Mishkin, Mortimer
2001-01-01
Visual habit formation in monkeys, assessed by concurrent visual discrimination learning with 24-h intertrial intervals (ITI), was found earlier to be impaired by removal of the inferior temporal visual area (TE) but not by removal of either the medial temporal lobe or inferior prefrontal convexity, two of TE's major projection targets. To assess the role in this form of learning of another pair of structures to which TE projects, namely the rostral portion of the tail of the caudate nucleus and the overlying ventrocaudal putamen, we injected a neurotoxin into this neostriatal region of several monkeys and tested them on the 24-h ITI task as well as on a test of visual recognition memory. Compared with unoperated monkeys, the experimental animals were unaffected on the recognition test but showed an impairment on the 24-h ITI task that was highly correlated with the extent of their neostriatal damage. The findings suggest that TE and its projection areas in the ventrocaudal neostriatum form part of a circuit that selectively mediates visual habit formation. PMID:11274442
Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.
Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist
2013-01-01
This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars but this spontaneous preference for faces was significantly enhanced in presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.
Decoding visual object categories in early somatosensory cortex.
Smith, Fraser W; Goodale, Melvyn A
2015-04-01
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.
Vision, touch and object manipulation in Senegal parrots Poicephalus senegalus
Demery, Zoe P.; Chappell, Jackie; Martin, Graham R.
2011-01-01
Parrots are exceptional among birds for their high levels of exploratory behaviour and manipulatory abilities. It has been argued that foraging method is the prime determinant of a bird's visual field configuration. However, here we argue that the topography of visual fields in parrots is related to their playful dexterity, unique anatomy and particularly the tactile information that is gained through their bill tip organ during object manipulation. We measured the visual fields of Senegal parrots Poicephalus senegalus using the ophthalmoscopic reflex technique and also report some preliminary observations on the bill tip organ in this species. We found that the visual fields of Senegal parrots are unlike those described hitherto in any other bird species, with both a relatively broad frontal binocular field and a near comprehensive field of view around the head. The behavioural implications are discussed and we consider how extractive foraging and object exploration, mediated in part by tactile cues from the bill, has led to the absence of visual coverage of the region below the bill in favour of more comprehensive visual coverage above the head. PMID:21525059
Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon
2014-11-01
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS induced BOLD signal increases in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in the face-responsive fusiform area (FFA) when faces were attended. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.
Recognition memory is modulated by visual similarity.
Yago, Elena; Ishai, Alumit
2006-06-01
We used event-related fMRI to test whether recognition memory depends on visual similarity between familiar prototypes and novel exemplars. Subjects memorized portraits, landscapes, and abstract compositions by six painters with a unique style, and later performed a memory recognition task. The prototypes were presented with new exemplars that were either visually similar or dissimilar. Behaviorally, novel, dissimilar items were detected faster and more accurately. We found activation in a distributed cortical network that included face- and object-selective regions in the visual cortex, where familiar prototypes evoked stronger responses than new exemplars; attention-related regions in parietal cortex, where responses elicited by new exemplars were reduced with decreased similarity to the prototypes; and the hippocampus and memory-related regions in parietal and prefrontal cortices, where stronger responses were evoked by the dissimilar exemplars. Our findings suggest that recognition memory is mediated by classification of novel exemplars as a match or a mismatch, based on their visual similarity to familiar prototypes.
Doesburg, Sam M.; Moiseev, Alexander; Herdman, Anthony T.; Ribary, Urs; Grunau, Ruth E.
2013-01-01
Children born very preterm (≤32 weeks gestational age) without major intellectual or neurological impairments often express selective deficits in visual-perceptual abilities. The alterations in neurophysiological development underlying these problems, however, remain poorly understood. Recent research has indicated that spontaneous alpha oscillations are slowed in children born very preterm, and that atypical alpha-mediated functional network connectivity may underlie selective developmental difficulties in visual-perceptual ability in this group. The present study provides the first source-resolved analysis of slowing of spontaneous alpha oscillations in very preterm children, indicating alterations in a distributed set of brain regions concentrated in areas of posterior parietal and inferior temporal regions associated with visual perception, as well as prefrontal cortical regions and thalamus. We also uniquely demonstrate that slowing of alpha oscillations is associated with selective difficulties in visual-perceptual ability in very preterm children. These results indicate that region-specific slowing of alpha oscillations contribute to selective developmental difficulties prevalent in this population. PMID:24298250
Implicit recognition based on lateralized perceptual fluency.
Vargas, Iliana M; Voss, Joel L; Paller, Ken A
2012-02-06
In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.
Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark
2017-05-01
There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended-region facilitates greater perceptual enhancement than a wider region. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens has used measures of spatial acuity ideally suited to parvocellular processing. This, therefore, raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended-region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held up when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular versus parvocellular mediated visual processing.
Chen, Xia; Fu, Junhong; Cheng, Wenbo; Song, Desheng; Qu, Xiaolei; Yang, Zhuo; Zhao, Kanxing
2017-01-01
Visual deprivation during the critical period induces long-lasting changes in cortical circuitry by adaptively modifying neurotransmission and synaptic connectivity at synapses. Spike timing-dependent plasticity (STDP) is considered a strong candidate for experience-dependent changes. However, the visual deprivation forms that affect timing-dependent long-term potentiation (tLTP) and long-term depression (tLTD) remain unclear. Here, we demonstrated the temporal window changes of tLTP and tLTD, elicited by coincident pre- and postsynaptic firing, following different modes of 6-day visual deprivation. Markedly broader temporal windows were found for robust tLTP and tLTD in the V1M of the deprived visual cortex in mice after 6-day monocular deprivation (MD) and dark exposure (DE). The underlying mechanism for the changes seen with visual deprivation in juvenile mice using 6 days of dark exposure or monocular lid suture involves an increased fraction of NR2B-containing NMDARs and the consequent prolongation of NMDAR-mediated response duration. Moreover, a decrease in NR2A protein expression at the synapse is attributable to the reduction of the NR2A/2B ratio in the deprived cortex. PMID:28520739
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
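To first approximation, the coordinate frames discussed here differ only by the current gaze position: a spatiotopic (gaze-independent) location is the retinotopic location plus where the eyes are pointing. The small sketch below makes that relation concrete; the function names and the purely additive, two-dimensional treatment are simplifying assumptions, not a model from the review.

```python
# Minimal sketch of the first-order relation between coordinate frames:
# a stimulus at retinal position r, seen while gaze is at g, lies at the
# spatiotopic (screen/world) position r + g. Names are illustrative only.
from typing import Tuple

Vec = Tuple[float, float]  # (horizontal, vertical) position in degrees

def retinotopic_to_spatiotopic(retinal: Vec, gaze: Vec) -> Vec:
    """Convert retina-centred coordinates to gaze-independent screen coordinates."""
    return (retinal[0] + gaze[0], retinal[1] + gaze[1])

def spatiotopic_to_retinotopic(spatial: Vec, gaze: Vec) -> Vec:
    """Convert screen coordinates back to retina-centred coordinates."""
    return (spatial[0] - gaze[0], spatial[1] - gaze[1])

# A stimulus fixed at screen position (5, 0) deg changes retinal position across
# a saccade, but its spatiotopic coordinates stay constant -- the invariance that
# remapping would need to preserve.
for gaze in [(0.0, 0.0), (10.0, 0.0)]:
    retinal = spatiotopic_to_retinotopic((5.0, 0.0), gaze)
    print("gaze:", gaze, "retinal:", retinal,
          "recovered spatiotopic:", retinotopic_to_spatiotopic(retinal, gaze))
```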
Heppe, Eline C M; Kef, Sabina; Schuengel, Carlo
2015-11-05
Social participation is challenging for people with visual impairments. As a result, on average, social networks are smaller, romantic relationships formed later, educational achievements lower, and career prospects limited. Adolescents on their way towards achieving these goals may benefit from the knowledge and experience of adults who have overcome similar difficulties. Therefore, a mentoring intervention, called Mentor Support, will be set up and studied in which adolescents with visual impairments are matched with successfully social participating adults with and without visual impairments. The main objective of this study is to evaluate the effectiveness of Mentor Support. Secondary aims are to distinguish the importance of the disability-specific experience of mentors, predictors of success, and mediating factors. The effect of Mentor Support will be tested in a randomized clinical trial, using pre-test one week before starting, post-test after 12 months, and follow-up after 18 months. Participants will be referred to one of the experimental groups or the control group, and this randomization will be stratified according to country region. Three groups are included in the trial: 40 participants will receive Mentor Support by mentors with a visual impairment in combination with care-as-usual, 40 participants will receive Mentor Support by mentors without visual impairments in combination with care-as-usual, and 40 participants will receive care-as-usual only. Mentor Support consists of 12 face-to-face meetings of the mentee with a mentor with an overall time period of one year. On a weekly basis, dyads have contact via email, the Internet, or telephone. The primary outcome measure is improved social participation within three domains (work/school, leisure activities, and social relationships). Mediator variables are psychosocial functioning and self-determination. Predictors such as demographics and personality are also investigated in order to distinguish the pathways to successful social participation. Intention-to-treat and completer analyses will be conducted. The primary outcomes of this trial regard increased social participation. The study may yield insights to further improve effects of support programs to adolescents with visual impairments. Netherlands Trial Register NTR4768 (registered 4 September 2014).
Kibby, Michelle Y.; Dyer, Sarah M.; Vadnais, Sarah A.; Jagger, Audreyana C.; Casher, Gabriel A.; Stacy, Maria
2015-01-01
Whether visual processing deficits are common in reading disorders (RD), and related to reading ability in general, has been debated for decades. The type of visual processing affected also is debated, although visual discrimination and short-term memory (STM) may be more commonly related to reading ability. Reading disorders are frequently comorbid with ADHD, and children with ADHD often have subclinical reading problems. Hence, children with ADHD were used as a comparison group in this study. ADHD and RD may be dissociated in terms of visual processing. Whereas RD may be associated with deficits in visual discrimination and STM for order, ADHD is associated with deficits in visual-spatial processing. Thus, we hypothesized that children with RD would perform worse than controls and children with ADHD only on a measure of visual discrimination and a measure of visual STM that requires memory for order. We expected all groups would perform comparably on the measure of visual STM that does not require sequential processing. We found children with RD or ADHD were commensurate to controls on measures of visual discrimination and visual STM that do not require sequential processing. In contrast, both RD groups (RD, RD/ADHD) performed worse than controls on the measure of visual STM that requires memory for order, and children with comorbid RD/ADHD performed worse than those with ADHD. In addition, of the three visual measures, only sequential visual STM predicted reading ability. Hence, our findings suggest there is a deficit in visual sequential STM that is specific to RD and is related to basic reading ability. The source of this deficit is worthy of further research, but it may include both reduced memory for order and poorer verbal mediation. PMID:26579020
Kibby, Michelle Y; Dyer, Sarah M; Vadnais, Sarah A; Jagger, Audreyana C; Casher, Gabriel A; Stacy, Maria
2015-01-01
Whether visual processing deficits are common in reading disorders (RD), and related to reading ability in general, has been debated for decades. The type of visual processing affected also is debated, although visual discrimination and short-term memory (STM) may be more commonly related to reading ability. Reading disorders are frequently comorbid with ADHD, and children with ADHD often have subclinical reading problems. Hence, children with ADHD were used as a comparison group in this study. ADHD and RD may be dissociated in terms of visual processing. Whereas RD may be associated with deficits in visual discrimination and STM for order, ADHD is associated with deficits in visual-spatial processing. Thus, we hypothesized that children with RD would perform worse than controls and children with ADHD only on a measure of visual discrimination and a measure of visual STM that requires memory for order. We expected all groups would perform comparably on the measure of visual STM that does not require sequential processing. We found children with RD or ADHD were commensurate to controls on measures of visual discrimination and visual STM that do not require sequential processing. In contrast, both RD groups (RD, RD/ADHD) performed worse than controls on the measure of visual STM that requires memory for order, and children with comorbid RD/ADHD performed worse than those with ADHD. In addition, of the three visual measures, only sequential visual STM predicted reading ability. Hence, our findings suggest there is a deficit in visual sequential STM that is specific to RD and is related to basic reading ability. The source of this deficit is worthy of further research, but it may include both reduced memory for order and poorer verbal mediation.
Spiegel, Daniel P.; Hansen, Bruce C.; Byblow, Winston D.; Thompson, Benjamin
2012-01-01
Transcranial direct current stimulation (tDCS) is a safe, non-invasive technique for transiently modulating the balance of excitation and inhibition within the human brain. It has been reported that anodal tDCS can reduce both GABA mediated inhibition and GABA concentration within the human motor cortex. As GABA mediated inhibition is thought to be a key modulator of plasticity within the adult brain, these findings have broad implications for the future use of tDCS. It is important, therefore, to establish whether tDCS can exert similar effects within non-motor brain areas. The aim of this study was to assess whether anodal tDCS could reduce inhibitory interactions within the human visual cortex. Psychophysical measures of surround suppression were used as an index of inhibition within V1. Overlay suppression, which is thought to originate within the lateral geniculate nucleus (LGN), was also measured as a control. Anodal stimulation of the occipital poles significantly reduced psychophysical surround suppression, but had no effect on overlay suppression. This effect was specific to anodal stimulation as cathodal stimulation had no effect on either measure. These psychophysical results provide the first evidence for tDCS-induced reductions of intracortical inhibition within the human visual cortex. PMID:22563485
A relational structure of voluntary visual-attention abilities
Skogsberg, KatieAnn; Grabowecky, Marcia; Wilt, Joshua; Revelle, William; Iordanescu, Lucica; Suzuki, Satoru
2015-01-01
Many studies have examined attention mechanisms involved in specific behavioral tasks (e.g., search, tracking, distractor inhibition). However, relatively little is known about the relationships among those attention mechanisms. Is there a fundamental attention faculty that makes a person superior or inferior at most types of attention tasks, or do relatively independent processes mediate different attention skills? We focused on individual differences in voluntary visual-attention abilities using a battery of eleven representative tasks. An application of parallel analysis, hierarchical-cluster analysis, and multidimensional scaling to the inter-task correlation matrix revealed four functional clusters, representing spatiotemporal attention, global attention, transient attention, and sustained attention, organized along two dimensions, one contrasting spatiotemporal and global attention and the other contrasting transient and sustained attention. Comparison with the neuroscience literature suggests that the spatiotemporal-global dimension corresponds to the dorsal frontoparietal circuit and the transient-sustained dimension corresponds to the ventral frontoparietal circuit, with distinct sub-regions mediating the separate clusters within each dimension. We also obtained highly specific patterns of gender difference, and of deficits for college students with elevated ADHD traits. These group differences suggest that different mechanisms of voluntary visual attention can be selectively strengthened or weakened based on genetic, experiential, and/or pathological factors. PMID:25867505
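The analysis pipeline summarized above (hierarchical clustering and multidimensional scaling applied to an inter-task correlation matrix) can be sketched as follows. The data here are random placeholders, and the choice of four clusters simply mirrors the number reported, so this is an illustrative sketch rather than the authors' code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 11))          # placeholder: 200 observers x 11 attention tasks
R = np.corrcoef(scores, rowvar=False)        # 11 x 11 inter-task correlation matrix

D = 1.0 - R                                  # turn correlations into dissimilarities
np.fill_diagonal(D, 0.0)

# Average-linkage hierarchical clustering on the condensed distance matrix.
Z = linkage(squareform(D, checks=False), method="average")
clusters = fcluster(Z, t=4, criterion="maxclust")   # ask for four functional clusters

# 2-D multidimensional scaling of the same dissimilarities.
embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
for task, (c, xy) in enumerate(zip(clusters, embedding.round(2))):
    print(f"task {task:2d}: cluster {c}, MDS coords {xy}")
```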
Chen, Xiaoyun; Wang, Xiaofu; Jin, Nuo; Zhou, Yu; Huang, Sainan; Miao, Qingmei; Zhu, Qing; Xu, Junfeng
2012-11-07
Genetically modified (GM) rice KMD1, TT51-1, and KF6 are three of the most well known transgenic Bt rice lines in China. A rapid and sensitive molecular assay for risk assessment of GM rice is needed. Polymerase chain reaction (PCR), currently the most common method for detecting genetically modified organisms, requires temperature cycling and relatively complex procedures. Here we developed a visual and rapid loop-mediated isothermal amplification (LAMP) method to amplify three GM rice event-specific junction sequences. Target DNA was amplified and visualized by two indicators (SYBR green or hydroxy naphthol blue [HNB]) within 60 min at an isothermal temperature of 63 °C. Different kinds of plants were selected to ensure the specificity of detection and the results of the non-target samples were negative, indicating that the primer sets for the three GM rice varieties had good levels of specificity. The sensitivity of LAMP, with detection limits at low concentration levels (0.01%−0.005% GM), was 10- to 100-fold greater than that of conventional PCR. Additionally, the LAMP assay coupled with an indicator (SYBR green or HNB) facilitated analysis. These findings revealed that the rapid detection method was suitable as a simple field-based test to determine the status of GM crops.
Furl, N; van Rijsbergen, N J; Treves, A; Dolan, R J
2007-08-01
Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face adaptation aftereffects. Adaptation to a given facial category biases categorization towards non-adapted facial categories in response to presentation of ambiguous morphs. We explored a hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated by activity in relatively advanced stages within the occipitotemporal visual processing stream. Replicating these studies, we find that adaptation to a facial expression heightens perception of non-adapted expressions. Using comparable behavioral methods, we also show that adaptation to a specific identity heightens perception of a second identity in morph faces. We show both expression and identity effects to be associated with heightened anterior medial temporal lobe activity, specifically when perceiving the non-adapted category. These regions, incorporating bilateral anterior ventral rhinal cortices, perirhinal cortex and left anterior hippocampus are regions previously implicated in high-level visual perception. These categorization effects were not evident in fusiform or occipital gyri, although activity in these regions was reduced to repeated faces. The findings suggest that adaptation-induced perception is mediated by activity in regions downstream to those showing reductions due to stimulus repetition.
Wang, Cong; Li, Rong; Quan, Sheng; Shen, Ping; Zhang, Dabing; Shi, Jianxin; Yang, Litao
2015-06-01
Isothermal DNA/RNA amplification techniques are the primary methodology for developing on-spot rapid nucleic acid amplification assays, and the loop-mediated isothermal amplification (LAMP) technique has been developed and applied in the detection of foodborne pathogens, plant/animal viruses, and genetically modified (GM) food/feed contents. In this study, one set of LAMP assays targeting eight frequently used universal elements, marker genes, and exogenous target genes, namely the CaMV35S promoter, FMV35S promoter, NOS, bar, cry1Ac, CP4 epsps, pat, and NptII, was developed for visual screening of GM contents in plant-derived food samples with high efficiency and accuracy. For these eight LAMP assays, specificity was evaluated by testing commercial GM plant events, and the limits of detection were determined as 10 haploid genome equivalents (HGE) for the FMV35S promoter, cry1Ac, and pat assays and five HGE for the CaMV35S promoter, bar, NOS terminator, CP4 epsps, and NptII assays. The screening applicability of these LAMP assays was further validated successfully using practical canola, soybean, and maize samples. The results suggested that the established visual LAMP assays are applicable and cost-effective for GM screening in plant-derived food samples.
Visual detection of multiple genetically modified organisms in a capillary array.
Shao, Ning; Chen, Jianwei; Hu, Jiaying; Li, Rong; Zhang, Dabing; Guo, Shujuan; Hui, Junhou; Liu, Peng; Yang, Litao; Tao, Sheng-Ce
2017-01-31
There is an urgent need for rapid, low-cost multiplex methodologies for the monitoring of genetically modified organisms (GMOs). Here, we report a Capillary Array-based Loop-mediated isothermal amplification for Multiplex visual detection of nucleic acids (CALM) platform for the simple and rapid monitoring of GMOs. In CALM, loop-mediated isothermal amplification (LAMP) primer sets are pre-fixed to the inner surface of capillaries. The surface of the capillary array is hydrophobic while the capillaries are hydrophilic, enabling the simultaneous loading and separation of the LAMP reaction mixtures into each capillary by capillary forces. LAMP reactions in the capillaries are then performed in parallel, and the results are visually detected by illumination with a hand-held UV device. Using CALM, we successfully detected seven frequently used transgenic genes/elements and five plant endogenous reference genes with high specificity and sensitivity. Moreover, we found that measurements of real-world blind samples by CALM are consistent with results obtained by independent real-time PCRs. Thus, with an ability to detect multiple nucleic acids in a single easy-to-operate test, we believe that CALM will become a widely applied technology in GMO monitoring.
Recognition Decisions From Visual Working Memory Are Mediated by Continuous Latent Strengths.
Ricker, Timothy J; Thiele, Jonathan E; Swagman, April R; Rouder, Jeffrey N
2017-08-01
Making recognition decisions often requires us to reference the contents of working memory, the information available for ongoing cognitive processing. As such, understanding how recognition decisions are made when based on the contents of working memory is of critical importance. In this work we examine whether recognition decisions based on the contents of visual working memory follow a continuous decision process of graded information about the correct choice or a discrete decision process reflecting only knowing and guessing. We find a clear pattern in favor of a continuous latent strength model of visual working memory-based decision making, supporting the notion that visual recognition decision processes are impacted by the degree of matching between the contents of working memory and the choices given. Relation to relevant findings and the implications for human information processing more generally are discussed. Copyright © 2016 Cognitive Science Society, Inc.
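The contrast at stake above, between a continuous latent-strength account and a discrete know-or-guess account, is often summarized by the shape of the predicted ROC. The short numpy sketch below generates both predictions under assumed parameter values (the memory strength, the probability of knowing, and the guessing rates are all invented for illustration and are not the paper's fitted values).

```python
import numpy as np
from scipy.stats import norm

criteria = np.linspace(-2, 2, 9)        # hypothetical confidence criteria
d_prime = 1.5                           # assumed memory strength (continuous model)
hit_c = norm.sf(criteria - d_prime)     # P(hit) under an equal-variance signal-detection model
fa_c = norm.sf(criteria)                # P(false alarm)

p_know = 0.6                            # assumed probability the item is "in" memory
guess = np.linspace(0.05, 0.95, 9)      # hypothetical guessing rates
hit_d = p_know + (1 - p_know) * guess   # discrete know-or-guess model: linear ROC
fa_d = guess

for f, h in zip(fa_c.round(2), hit_c.round(2)):
    print("continuous:", f, h)          # traces a curved ROC
for f, h in zip(fa_d.round(2), hit_d.round(2)):
    print("discrete:  ", f, h)          # traces a straight ROC line
```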
Strabismus and the Oculomotor System: Insights from Macaque Models
Das, Vallabh E.
2017-01-01
Disrupting binocular vision in infancy leads to strabismus and oftentimes to a variety of associated visual sensory deficits and oculomotor abnormalities. Investigation of this disorder has been aided by the development of various animal models, each of which has advantages and disadvantages. In comparison to studies of binocular visual responses in cortical structures, investigations of neural oculomotor structures that mediate the misalignment and abnormalities of eye movements have been more recent, and these studies have shown that different brain areas are intimately involved in driving several aspects of the strabismic condition, including horizontal misalignment, dissociated deviations, A and V patterns of strabismus, disconjugate eye movements, nystagmus, and fixation switch. The responses of cells in visual and oculomotor areas that potentially drive the sensory deficits and also eye alignment and eye movement abnormalities follow a general theme of disrupted calibration, lower sensitivity, and poorer specificity compared with the normally developed visual oculomotor system. PMID:28532347
Response-dependent dynamics of cell-specific inhibition in cortical networks in vivo
El-Boustani, Sami; Sur, Mriganka
2014-01-01
In the visual cortex, inhibitory neurons alter the computations performed by target cells via combination of two fundamental operations, division and subtraction. The origins of these operations have been variously ascribed to differences in neuron classes, synapse location or receptor conductances. Here, by utilizing specific visual stimuli and single optogenetic probe pulses, we show that the function of parvalbumin-expressing and somatostatin-expressing neurons in mice in vivo is governed by the overlap of response timing between these neurons and their targets. In particular, somatostatin-expressing neurons respond at longer latencies to small visual stimuli compared with their target neurons and provide subtractive inhibition. With large visual stimuli, however, they respond at short latencies coincident with their target cells and switch to provide divisive inhibition. These results indicate that inhibition mediated by these neurons is a dynamic property of cortical circuits rather than an immutable property of neuronal classes. PMID:25504329
The onset of visual experience gates auditory cortex critical periods
Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.
2016-01-01
Sensory systems influence one another during development and deprivation can lead to cross-modal plasticity. As auditory function begins before vision, we investigate the effect of manipulating visual experience during auditory cortex critical periods (CPs) by assessing the influence of early, normal and delayed eyelid opening on hearing loss-induced changes to membrane and inhibitory synaptic properties. Early eyelid opening closes the auditory cortex CPs precociously and dark rearing prevents this effect. In contrast, delayed eyelid opening extends the auditory cortex CPs by several additional days. The CP for recovery from hearing loss is also closed prematurely by early eyelid opening and extended by delayed eyelid opening. Furthermore, when coupled with transient hearing loss that animals normally fully recover from, very early visual experience leads to inhibitory deficits that persist into adulthood. Finally, we demonstrate a functional projection from the visual to auditory cortex that could mediate these effects. PMID:26786281
The eye as metronome of the body.
Lubkin, Virginia; Beizai, Pouneh; Sadun, Alfredo A
2002-01-01
Vision is much more than just resolving small objects. In fact, the eye sends visual information to the brain that is not consciously perceived. One such pathway entails visual information to the hypothalamus. The retinohypothalamic tract (RHT) mediates light entrainment of circadian rhythms. Retinofugal fibers project to several nuclei of the hypothalamus. These and further projections to the pineal via the sympathetic system provide the anatomical substrate for the neuro-endocrine control of diurnal and longer rhythms. Without the influence of light and dark, many rhythms desynchronize and exhibit free-running periods of approximately 24.2-24.9 hours in humans. This review will demonstrate the mechanism by which the RHT synchronizes circadian rhythms and the importance of preserving light perception in those persons with impending visual loss.
Chromosomal rearrangement segregating with adrenoleukodystrophy: associated changes in color vision.
Alpern, M; Sack, G H; Krantz, D H; Jenness, J; Zhang, H; Moser, H W
1993-01-01
A patient from a large kindred with adrenoleukodystrophy showed profound disturbance of color ordering, color matching, increment thresholds, and luminosity. Except for color matching, his performance was similar to blue-cone "monochromacy," an X chromosome-linked recessive retinal dystrophy in which color vision is dichromatic, mediated by the visual pigments of rods and short-wave-sensitive cones. Color matching, however, indicated that an abnormal rudimentary visual pigment was also present. This may reflect the presence of a recombinant visual pigment protein or altered regulation of residual pigment genes, due to DNA changes--deletion of the long-wave pigment gene and reorganized sequences 5' to the pigment gene cluster--that segregate with the metabolic defect in this kindred. PMID:8415729
Dissecting contributions of prefrontal cortex and fusiform face area to face working memory.
Druzgal, T Jason; D'Esposito, Mark
2003-08-15
Interactions between prefrontal cortex (PFC) and stimulus-specific visual cortical association areas are hypothesized to mediate visual working memory in behaving monkeys. To clarify the roles for homologous regions in humans, event-related fMRI was used to assess neural activity in PFC and fusiform face area (FFA) of subjects performing a delay-recognition task for faces. In both PFC and FFA, activity increased parametrically with memory load during encoding and maintenance of face stimuli, despite quantitative differences in the magnitude of activation. Moreover, timing differences in PFC and FFA activation during memory encoding and retrieval implied a context dependence in the flow of neural information. These results support existing neurophysiological models of visual working memory developed in the nonhuman primate.
Power spectrum model of visual masking: simulations and empirical data.
Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M
2013-06-01
In the study of the spatial characteristics of the visual channels, the power spectrum model of visual masking is one of the most widely used. When the task is to detect a signal masked by visual noise, this classical model assumes that the signal and the noise are previously processed by a bank of linear channels and that the power of the signal at threshold is proportional to the power of the noise passing through the visual channel that mediates detection. The model also assumes that this visual channel will have the highest ratio of signal power to noise power at its output. According to this, there are masking conditions where the highest signal-to-noise ratio (SNR) occurs in a channel centered on a spatial frequency different from the spatial frequency of the signal (off-frequency looking). Under these conditions the channel mediating detection could vary with the type of noise used in the masking experiment and this could affect the estimation of the shape and the bandwidth of the visual channels. It is generally believed that notched noise, white noise and double bandpass noise prevent off-frequency looking, and high-pass, low-pass and bandpass noises can promote it independently of the channel's shape. In this study, by means of a procedure that finds the channel that maximizes the SNR at its output, we performed numerical simulations using the power spectrum model to study the characteristics of masking caused by six types of one-dimensional noise (white, high-pass, low-pass, bandpass, notched, and double bandpass) for two types of channel shape (symmetric and asymmetric). Our simulations confirm that (1) high-pass, low-pass, and bandpass noises do not prevent off-frequency looking, (2) white noise satisfactorily prevents off-frequency looking independently of the shape and bandwidth of the visual channel, and, interestingly, we proved for the first time that (3) notched and double bandpass noises prevent off-frequency looking only when the noise cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for the six masking noises used in the simulations and in the experiments.
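A minimal numerical sketch of the channel-selection rule described above follows, assuming a bank of log-Gaussian channels, a narrow-band signal at 4 c/deg, and a high-pass masking noise; these spectra, bandwidths, and units are illustrative assumptions, not the parameters fitted in the study. With these choices the maximum-SNR channel ends up centred below the signal frequency, i.e. off-frequency looking.

```python
import numpy as np

f = np.linspace(0.25, 32, 2048)                      # spatial frequency axis (c/deg)
signal_power = np.exp(-0.5 * ((np.log2(f) - np.log2(4.0)) / 0.1) ** 2)  # narrow-band signal at 4 c/deg
noise_power = np.where(f >= 5.0, 1.0, 0.0)           # high-pass masking noise above 5 c/deg

def log_gaussian_channel(f, centre, octave_bw=1.0):
    """Symmetric log-Gaussian gain profile with the given half-amplitude bandwidth (octaves)."""
    sigma = octave_bw / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((np.log2(f) - np.log2(centre)) / sigma) ** 2)

centres = 2.0 ** np.linspace(-1, 5, 61)              # candidate channel centres, 0.5 to 32 c/deg
snr = []
for c in centres:
    gain2 = log_gaussian_channel(f, c) ** 2          # power gain of the channel
    s_out = np.trapz(signal_power * gain2, f)        # signal power through the channel
    n_out = np.trapz(noise_power * gain2, f) + 1e-9  # noise power (small floor avoids 0-division)
    snr.append(s_out / n_out)

best = centres[int(np.argmax(snr))]
print(f"channel mediating detection is centred near {best:.2f} c/deg")
```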
Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico
2012-07-24
The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to identify the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations for both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping on the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including the cingulate cortex and the right inferior and middle frontal gyri, which are involved in the go-signal and in decision control. Results in healthy subjects suggest that abstract visual feedback is appropriate during motor training. The task highlights the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.
Anatomy and physiology of the afferent visual system.
Prasad, Sashank; Galetta, Steven L
2011-01-01
The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.
Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A
2013-06-01
Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.
Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition
Craddock, Matt; Lawson, Rebecca
2009-01-01
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685
The role of prestimulus activity in visual extinction
Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl
2013-01-01
Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398
The role of prestimulus activity in visual extinction.
Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl
2013-07-01
Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.
Stringham, James M; Garcia, Paul V; Smith, Peter A; McLin, Leon N; Foutch, Brian K
2011-09-22
One theory of macular pigment's (MP) presence in the fovea is to improve visual performance in glare. This study sought to determine the effect of MP level on three aspects of visual performance in glare: photostress recovery, disability glare, and visual discomfort. Twenty-six subjects participated in the study. Spatial profiles of MP optical density were assessed with heterochromatic flicker photometry. Glare was delivered via high-bright-white LEDs. For the disability glare and photostress recovery portions of the experiment, the visual task consisted of correct identification of a 1° Gabor patch's orientation. Visual discomfort during the glare presentation was assessed with a visual discomfort rating scale. Pupil diameter was monitored with an infrared (IR) camera. MP level correlated significantly with all the outcome measures. Higher MP optical densities (MPODs) resulted in faster photostress recovery times (average P < 0.003), lower disability glare contrast thresholds (average P < 0.004), and lower visual discomfort (P = 0.002). Smaller pupil diameter during glare presentation significantly correlated with higher visual discomfort ratings (P = 0.037). MP correlates with three aspects of visual performance in glare. Unlike previous studies of MP and glare, the present study used free-viewing conditions, in which effects of iris pigmentation and pupil size could be accounted for. The effects described, therefore, can be extended more confidently to real-world, practical visual performance benefits. Greater iris constriction resulted (paradoxically) in greater visual discomfort. This finding may be attributable to the neurobiologic mechanism that mediates the pain elicited by light.
Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David
2014-01-22
Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.
Descending pathways controlling visually guided updating of reaching in cats.
Pettersson, L-G; Perfiliev, S
2002-10-01
This study uses a previously described paradigm (Pettersson et al., 1997) to investigate the ability of cats to change the direction of ongoing reaching when the target is shifted sideways; the effect on the switching latency of spinal cord lesions was investigated. Large ventral lesions transecting the ventral funicle and the ventral half of the lateral funicle gave a 20-30 ms latency prolongation of switching in the medial (right) direction, but less prolongation of switching directed laterally (left), and in one cat the latencies of switching directed laterally were unchanged. It may be inferred that the command for switching in the lateral direction can be mediated by the dorsally located cortico- and rubrospinal tracts whereas the command for short-latency switching in the medial direction is mediated by ventral pathways. A restricted ventral lesion transecting the tectospinal pathway did not change the switching latency. Comparison of different ventral lesions revealed prolongation of the latency if the lesion included a region extending dorsally along the ventral horn and from there ventrally as a vertical strip, so it may be postulated that the command for fast switching, directed medially, is mediated by a reticulospinal pathway within this location. A hypothesis is forwarded suggesting that the visual control is exerted via ponto-cerebellar pathways.
Bourqui, Romain; Benchimol, William; Gaspin, Christine; Sirand-Pugnet, Pascal; Uricaru, Raluca; Dutour, Isabelle
2015-01-01
The revolution in high-throughput sequencing technologies has enabled the acquisition of gigabytes of RNA sequences in many different conditions and has highlighted an unexpected number of small RNAs (sRNAs) in bacteria. Ongoing exploitation of these data enables numerous applications for investigating bacterial transacting sRNA-mediated regulation networks. Focusing on sRNAs that regulate mRNA translation in trans, recent works have noted several sRNA-based regulatory pathways that are essential for key cellular processes. Although the number of known bacterial sRNAs is increasing, the experimental validation of their interactions with mRNA targets remains challenging and involves expensive and time-consuming experimental strategies. Hence, bioinformatics is crucial for selecting and prioritizing candidates before designing any experimental work. However, current software for target prediction produces a prohibitive number of candidates because of the lack of biological knowledge regarding the rules governing sRNA–mRNA interactions. Therefore, there is a real need to develop new approaches to help biologists focus on the most promising predicted sRNA–mRNA interactions. In this perspective, this review aims at presenting the advantages of mixing bioinformatics and visualization approaches for analyzing predicted sRNA-mediated regulatory bacterial networks. PMID:25477348
Thébault, Patricia; Bourqui, Romain; Benchimol, William; Gaspin, Christine; Sirand-Pugnet, Pascal; Uricaru, Raluca; Dutour, Isabelle
2015-09-01
The revolution in high-throughput sequencing technologies has enabled the acquisition of gigabytes of RNA sequences in many different conditions and has highlighted an unexpected number of small RNAs (sRNAs) in bacteria. Ongoing exploitation of these data enables numerous applications for investigating bacterial transacting sRNA-mediated regulation networks. Focusing on sRNAs that regulate mRNA translation in trans, recent works have noted several sRNA-based regulatory pathways that are essential for key cellular processes. Although the number of known bacterial sRNAs is increasing, the experimental validation of their interactions with mRNA targets remains challenging and involves expensive and time-consuming experimental strategies. Hence, bioinformatics is crucial for selecting and prioritizing candidates before designing any experimental work. However, current software for target prediction produces a prohibitive number of candidates because of the lack of biological knowledge regarding the rules governing sRNA-mRNA interactions. Therefore, there is a real need to develop new approaches to help biologists focus on the most promising predicted sRNA-mRNA interactions. In this perspective, this review aims at presenting the advantages of mixing bioinformatics and visualization approaches for analyzing predicted sRNA-mediated regulatory bacterial networks. © The Author 2014. Published by Oxford University Press.
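To illustrate the kind of candidate prioritization and network building that precede visualization in such analyses, here is a small hedged sketch using networkx. The interaction list, the score threshold, and the ranking rule are hypothetical and are not part of the tools reviewed above.

```python
import networkx as nx

# (sRNA, mRNA target, prediction score) -- illustrative values only
predictions = [
    ("sRNA_A", "mRNA_1", 0.92),
    ("sRNA_A", "mRNA_2", 0.41),
    ("sRNA_B", "mRNA_2", 0.87),
    ("sRNA_B", "mRNA_3", 0.15),
    ("sRNA_C", "mRNA_1", 0.78),
]

SCORE_CUTOFF = 0.7          # assumed prioritization threshold

G = nx.Graph()
for srna, target, score in predictions:
    if score >= SCORE_CUTOFF:
        G.add_node(srna, kind="sRNA")
        G.add_node(target, kind="mRNA")
        G.add_edge(srna, target, score=score)

# Rank nodes by how many prioritized interactions they take part in,
# the kind of summary a visualization layout could then display.
ranking = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
print(ranking)
```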
Nzelu, Chukwunonso O; Gomez, Eduardo A; Cáceres, Abraham G; Sakurai, Tatsuya; Martini-Robles, Luiggi; Uezato, Hiroshi; Mimori, Tatsuyuki; Katakura, Ken; Hashiguchi, Yoshihisa; Kato, Hirotomo
2014-04-01
Entomological monitoring of Leishmania infection in leishmaniasis endemic areas offers epidemiologic advantages for predicting the risk and expansion of the disease, as well as evaluation of the effectiveness of control programs. In this study, we developed a highly sensitive loop-mediated isothermal amplification (LAMP) method for the mass screening of sand flies for Leishmania infection based on the 18S rRNA gene. The LAMP technique could detect 0.01 parasites, which was more sensitive than classical PCR. The method was robust and could amplify the target DNA within 1 h from a crude sand fly template without DNA purification. Amplicon detection could be accomplished by the newly developed colorimetric malachite green (MG)-mediated naked-eye visualization. Pre-addition of MG to the LAMP reaction solution did not inhibit amplification efficiency. The field applicability of the colorimetric MG-based LAMP assay was demonstrated with 397 field-caught samples from the endemic areas of Ecuador, and eight positive sand flies were detected. The robustness, superior sensitivity, and ability to produce more easily distinguished visual reaction products than existing LAMP fluorescence and turbidity assays indicated the potential usefulness of this new method for field surveillance and epidemiological studies of leishmaniasis in developing countries. Copyright © 2013 Elsevier B.V. All rights reserved.
Williams, David M; Bowler, Dermot M; Jarrold, Christopher
2012-02-01
Evidence regarding the use of inner speech by individuals with autism spectrum disorder (ASD) is equivocal. To clarify this issue, the current study employed multiple techniques and tasks used across several previous studies. In Experiment 1, participants with and without ASD showed highly similar patterns and levels of serial recall for visually presented stimuli. Both groups were significantly affected by the phonological similarity of items to be recalled, indicating that visual material was spontaneously recoded into a verbal form. Confirming that short-term memory is typically verbally mediated among the majority of people with ASD, recall performance in both groups declined substantially when inner speech use was prevented by the imposition of articulatory suppression during the presentation of stimuli. In Experiment 2, planning performance on a Tower of London task was substantially impaired by articulatory suppression among comparison participants, but not among participants with ASD. This suggests that planning is not verbally mediated in ASD. Importantly, the extent to which articulatory suppression affected planning among participants with ASD was uniquely associated with the degree of their observed and self-reported communication impairments. This confirms a link between interpersonal communication with others and intrapersonal communication with self as a means of higher-order problem solving.
Mappes, Martina; Homberg, Uwe
2007-01-01
Many insects can detect the polarization pattern of the blue sky and rely on polarization vision for sky compass orientation. In laboratory experiments, tethered flying locusts perform periodic changes in flight behavior under a slowly rotating polarizer even if one eye is painted black. Anatomical tracing studies and intracellular recordings have suggested that the polarization vision pathway in the locust brain involves the anterior optic tract and tubercle, the lateral accessory lobe, and the central complex of the brain. To investigate whether visual pathways through the anterior optic tract mediate polarotaxis in the desert locust, we transected the tract on one side and tested polarotaxis (1) with both eyes unoccluded and (2) with the eye of the intact hemisphere painted black. In the second group of animals, but not in the first group, polarotaxis was abolished. Sham operations did not impair polarotaxis. The experiments show that the anterior optic tract is an indispensable part of visual pathways mediating polarotaxis in the desert locust.
Hillairet de Boisferon, Anne; Tift, Amy H; Minar, Nicholas J; Lewkowicz, David J
2017-05-01
Previous studies have found that infants shift their attention from the eyes to the mouth of a talker when they enter the canonical babbling phase after 6 months of age. Here, we investigated whether this increased attentional focus on the mouth is mediated by audio-visual synchrony and linguistic experience. To do so, we tracked eye gaze in 4-, 6-, 8-, 10-, and 12-month-old infants while they were exposed either to desynchronized native or desynchronized non-native audiovisual fluent speech. Results indicated that, regardless of language, desynchronization disrupted the usual pattern of relative attention to the eyes and mouth found in response to synchronized speech at 10 months but not at any other age. These findings show that audio-visual synchrony mediates selective attention to a talker's mouth just prior to the emergence of initial language expertise and that it declines in importance once infants become native-language experts. © 2016 John Wiley & Sons Ltd.
Hülper, Petra; Dullin, Christian; Kugler, Wilfried; Lakomek, Max; Erdlenbruch, Bernhard
2011-04-01
The aim of the present study was to gain insight into the penetration, biodistribution, and fate of globulins in the brain after 2-O-hexyldiglycerol-induced blood-brain barrier opening. The spatial distribution of fluorescence probes was investigated after blood-brain barrier opening with intracarotid 2-O-hexyldiglycerol injection. Fluorescence intensity was visualized by microscopy (mice and rats) and by in vivo time-domain optical imaging. There was an increased 2-O-hexyldiglycerol-mediated transfer of fluorescence-labeled globulins into the ipsilateral hemisphere. Sequential in vivo measurements revealed that the increase in protein concentration lasted at least 96 h after administration. Ex vivo detection of tissue fluorescence confirmed the results obtained in vivo. Globulins enter the healthy brain in conjunction with 2-O-hexyldiglycerol. Sequential in vivo near-infrared fluorescence measurements enable the visualization of the spatial distribution of antibodies in the brain of living small animals.
Amon, Wolfgang; White, Robert E; Farrell, Paul J
2006-05-01
Epstein-Barr virus (EBV) establishes a latent persistence from which it can be reactivated to undergo lytic replication. Late lytic-cycle gene expression is linked to lytic DNA replication, as it is sensitive to the same inhibitors that block lytic replication, and it has recently been shown that the viral origin of lytic replication (ori lyt) is required in cis for late-gene expression. During the lytic cycle, the viral genome forms replication compartments, which are usually adjacent to promyelocytic leukaemia protein (PML) nuclear bodies. A tetracycline repressor DNA-binding domain-enhanced green fluorescent protein fusion was used to visualize replicating plasmids carrying a tetracycline operator sequence array. ori lyt mediated the production of plasmid replication compartments that were associated with PML nuclear bodies. Plasmids carrying ori lyt and EBV itself were visualized in the same cells and replicated in similar regions of the nucleus, further supporting the validity of the plasmids for studying late-gene regulation.
Cortical systems mediating visual attention to both objects and spatial locations
Shomstein, Sarah; Behrmann, Marlene
2006-01-01
Natural visual scenes consist of many objects occupying a variety of spatial locations. Given that the plethora of information cannot be processed simultaneously, the multiplicity of inputs compete for representation. Using event-related functional MRI, we show that attention, the mechanism by which a subset of the input is selected, is mediated by the posterior parietal cortex (PPC). Of particular interest is that PPC activity is differentially sensitive to the object-based properties of the input, with enhanced activation for those locations bound by an attended object. Of great interest too is the ensuing modulation of activation in early cortical regions, reflected as differences in the temporal profile of the blood oxygenation level-dependent (BOLD) response for within-object versus between-object locations. These findings indicate that object-based selection results from an object-sensitive reorienting signal issued by the PPC. The dynamic circuit between the PPC and earlier sensory regions then enables observers to attend preferentially to objects of interest in complex scenes. PMID:16840559
Organization of area hV5/MT+ in subjects with homonymous visual field defects.
Papanikolaou, Amalia; Keliris, Georgios A; Papageorgiou, T Dorina; Schiefer, Ulrich; Logothetis, Nikos K; Smirnakis, Stelios M
2018-04-06
Damage to the primary visual cortex (V1) leads to a visual field loss (scotoma) in the retinotopically corresponding part of the visual field. Nonetheless, a small amount of residual visual sensitivity persists within the blind field. This residual capacity has been linked to activity observed in the middle temporal area complex (V5/MT+). However, it remains unknown whether the organization of hV5/MT+ changes following early visual cortical lesions. We studied the organization of area hV5/MT+ of five patients with dense homonymous defects in a quadrant of the visual field as a result of partial V1+ or optic radiation lesions. To do so, we developed a new method, which models the boundaries of population receptive fields directly from the BOLD signal of each voxel in the visual cortex. We found responses in hV5/MT+ arising inside the scotoma for all patients and identified two possible sources of activation: 1) responses might originate from partially lesioned parts of area V1 corresponding to the scotoma, and 2) responses can also originate independent of area V1 input suggesting the existence of functional V1-bypassing pathways. Apparently, visually driven activity observed in hV5/MT+ is not sufficient to mediate conscious vision. More surprisingly, visually driven activity in corresponding regions of V1 and early extrastriate areas including hV5/MT+ did not guarantee visual perception in the group of patients with post-geniculate lesions that we examined. This suggests that the fine coordination of visual activity patterns across visual areas may be an important determinant of whether visual perception persists following visual cortical lesions. Copyright © 2018 Elsevier Inc. All rights reserved.
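For orientation, the sketch below shows a conventional two-dimensional Gaussian pRF grid search on synthetic data. Note that the study above develops a different method that models pRF boundaries directly from the BOLD signal, so this is only a generic illustration of estimating a voxel's visual-field coverage; the stimulus apertures, parameter grids, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_pix = 120, 21
xs = np.linspace(-10, 10, n_pix)                    # degrees of visual angle
X, Y = np.meshgrid(xs, xs)

# Synthetic bar apertures: a vertical bar sweep followed by a horizontal bar sweep.
apertures = np.zeros((n_t, n_pix, n_pix))
half = n_t // 2
for t in range(half):
    apertures[t, :, (t * n_pix) // half] = 1.0           # vertical bar sweeping horizontally
for t in range(half, n_t):
    apertures[t, ((t - half) * n_pix) // half, :] = 1.0  # horizontal bar sweeping vertically

def prf(x0, y0, sigma):
    """Normalized 2-D Gaussian population receptive field."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

# Simulated voxel whose true pRF is at (3, -2) deg with sigma 2 deg (plus noise).
true_ts = apertures.reshape(n_t, -1) @ prf(3.0, -2.0, 2.0).ravel()
voxel = true_ts + 0.05 * rng.standard_normal(n_t)

# Coarse grid search: keep the pRF whose predicted time series best correlates with the data.
best, best_r = None, -np.inf
for x0 in xs[::2]:
    for y0 in xs[::2]:
        for sigma in (1.0, 2.0, 4.0):
            pred = apertures.reshape(n_t, -1) @ prf(x0, y0, sigma).ravel()
            r = np.corrcoef(pred, voxel)[0, 1]
            if r > best_r:
                best, best_r = (x0, y0, sigma), r
print("estimated pRF (x, y, sigma):", best, "r =", round(best_r, 3))
```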
The effect of linguistic and visual salience in visual world studies.
Cavicchio, Federica; Melcher, David; Poesio, Massimo
2014-01-01
Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material, including verbs, prepositions and adjectives, can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm by manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual aspects) interact. We recorded participants' eye movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was only linguistically salient. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.
Martín-Loeches, M; Hinojosa, J A; Rubia, F J
1999-11-01
The temporal and hierarchical relationships between the dorsal and the ventral streams in selective attention are known only in relation to the use of spatial location as the attentional cue mediated by the dorsal stream. To improve this state of affairs, event-related brain potentials were recorded while subjects attended simultaneously to motion direction (mediated by the dorsal stream) and to a property mediated by the ventral stream (color or shape). At about the same time, a selection positivity (SP) started for attention mediated by both streams. However, the SP for color and shape peaked about 60 ms later than motion SP. Subsequently, a selection negativity (SN) followed by a late positive component (LPC) were found simultaneously for attention mediated by both streams. A hierarchical relationship between the two streams was not observed, but neither SN nor LPC for one property was completely insensitive to the values of the other property.
Melo, Rossana C N; Weller, Peter F
2016-10-01
Electron microscopy (EM)-based techniques are mostly responsible for our current view of cell morphology at the subcellular level and continue to play an essential role in biological research. In cells from the immune system, such as eosinophils, EM has helped to understand how cells package and release mediators involved in immune responses. Ultrastructural investigations of human eosinophils enabled visualization of secretory processes in detail and identification of a robust, vesicular trafficking essential for the secretion of immune mediators via a non-classical secretory pathway associated with secretory (specific) granules. This vesicular system is mainly organized as large tubular-vesicular carriers (Eosinophil Sombrero Vesicles - EoSVs) actively formed in response to cell activation and provides a sophisticated structural mechanism for delivery of granule-stored mediators. In this review, we highlight the application of EM techniques to recognize pools of immune mediators at vesicular compartments and to understand the complex secretory pathway within human eosinophils involved in inflammatory and allergic responses. Copyright © 2016 Elsevier Inc. All rights reserved.
Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.
2013-01-01
Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned–with feedback–to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656
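The discriminability and predictability analyses described above are commonly implemented as cross-validated decoding of category labels from sensor- or source-level activity within a time window. The sketch below shows such a generic decoding step with scikit-learn on simulated data; the feature layout, classifier, time window, and variable names are assumptions for illustration, not the authors' specific analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_sources = 200, 40
labels = rng.integers(0, 2, n_trials)                 # two visual categories

# Simulated source-level activity averaged over a 150-250 ms window (assumed),
# with a weak category-dependent signal added to noise.
signal = rng.normal(size=n_sources)
data = rng.normal(size=(n_trials, n_sources)) + 0.4 * np.outer(labels, signal)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, data, labels, cv=5)      # 5-fold decoding accuracy
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Running the same decoding step in successive time windows and brain regions yields the kind of time-resolved, region-specific discriminability curves the study compares before and after learning.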
Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2014-07-01
Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: A direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may facilitate spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.
Neural correlates of individual performance differences in resolving perceptual conflict.
Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian; Pfleiderer, Bettina
2012-01-01
Attentional mechanisms are a crucial prerequisite to organize behavior. Most situations may be characterized by a 'competition' between salient, but irrelevant stimuli and less salient, relevant stimuli. In such situations top-down and bottom-up mechanisms interact with each other. In the present fMRI study, we examined how interindividual differences in resolving situations of perceptual conflict are reflected in brain networks mediating attentional selection. To do so, we employed a change detection task in which subjects had to detect luminance changes in the presence and absence of competing distractors. The results show that good performers presented increased activation in the orbitofrontal cortex (BA 11), anterior cingulate (BA 25), inferior parietal lobule (BA 40) and visual areas V2 and V3 but decreased activation in BA 39. This suggests that areas mediating top-down attentional control are more strongly activated in this group. Increased activity in visual areas reflects neuronal enhancement related to selective attentional mechanisms recruited to resolve the perceptual conflict. In contrast to good performers, poor performers activated the left inferior parietal lobule (BA 39), while fronto-parietal and visual regions were continuously deactivated, suggesting that poor performers perceive stronger conflict than good performers. Moreover, the suppression of neural activation in visual areas might indicate a strategy of poor performers to inhibit the processing of the irrelevant non-target feature. These results indicate that high sensitivity in perceptual areas and increased attentional control led to less conflict in stimulus processing and consequently to higher performance in competitive attentional selection.
Chechlacz, Magdalena; Gillebert, Celine R; Vangkilde, Signe A; Petersen, Anders; Humphreys, Glyn W
2015-07-29
Visuospatial attention allows us to select and act upon a subset of behaviorally relevant visual stimuli while ignoring distraction. Bundesen's theory of visual attention (TVA) (Bundesen, 1990) offers a quantitative analysis of the different facets of attention within a unitary model and provides a powerful analytic framework for understanding individual differences in attentional functions. Visuospatial attention is contingent upon large networks, distributed across both hemispheres, consisting of several cortical areas interconnected by long-association frontoparietal pathways, including three branches of the superior longitudinal fasciculus (SLF I-III) and the inferior fronto-occipital fasciculus (IFOF). Here we examine whether structural variability within human frontoparietal networks mediates differences in attention abilities as assessed by the TVA. Structural measures were based on spherical deconvolution and tractography-derived indices of tract volume and hindrance-modulated orientational anisotropy (HMOA). Individual differences in visual short-term memory (VSTM) were linked to variability in the microstructure (HMOA) of SLF II, SLF III, and IFOF within the right hemisphere. Moreover, VSTM and speed of information processing were linked to hemispheric lateralization within the IFOF. Differences in spatial bias were mediated by both variability in microstructure and volume of the right SLF II. Our data indicate that the microstructural and macrostructural organization of white matter pathways differentially contributes to both the anatomical lateralization of frontoparietal attentional networks and to individual differences in attentional functions. We conclude that individual differences in VSTM capacity, processing speed, and spatial bias, as assessed by TVA, link to variability in structural organization within frontoparietal pathways. Copyright © 2015 Chechlacz et al.
T, Sathish Kumar; A, Navaneeth Krishnan; J, Joseph Sahaya Rajan; M, Makesh; K P, Jithendran; S V, Alavandi; K K, Vijayan
2018-05-01
The emerging microsporidian parasite Enterocytozoon hepatopenaei (EHP), the causative agent of hepatopancreatic microsporidiosis, has been widely reported in shrimp-farming countries. EHP infection can be detected by light microscopy observation of spores (1.7 × 1 μm) in stained hepatopancreas (HP) tissue smears, HP tissue sections, and fecal samples. EHP can also be detected by polymerase chain reaction (PCR) targeting the small subunit (SSU) ribosomal RNA (rRNA) gene or the spore wall protein gene (SWP). In this study, a rapid, sensitive, specific, and closed-tube visual loop-mediated isothermal amplification (LAMP) protocol combined with FTA cards was developed for the diagnosis of EHP. LAMP primers were designed based on the SSU rRNA gene of EHP. The target sequence of EHP was amplified at a constant temperature of 65 °C for 45 min, and the amplified LAMP products were visually detected in a closed-tube system by using SYBR™ green I dye. The detection limit of this LAMP protocol was ten copies. The field and clinical applicability of this assay were evaluated using 162 field samples including 106 HP tissue samples and 56 fecal samples collected from shrimp farms. Out of 162 samples, EHP could be detected in 62 samples (47 HP samples and 15 fecal samples). When compared with SWP-PCR as the gold standard, this EHP LAMP assay had 95.31% sensitivity, 98.98% specificity, and a kappa value of 0.948. This simple, closed-tube, clinically evaluated visual LAMP assay has great potential for diagnosing EHP at the farm level, particularly under low-resource circumstances.
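To make the reported diagnostic-accuracy figures concrete, the sketch below computes sensitivity, specificity, and Cohen's kappa from a 2x2 comparison against a gold standard. The cell counts are back-calculated here from the reported totals (162 samples, 62 LAMP-positive) and are therefore an assumption rather than the published raw table.

```python
# Minimal sketch: diagnostic accuracy of an index test (e.g., a LAMP assay)
# against a gold standard (e.g., SWP-PCR). Counts below are back-calculated
# from the reported totals and should be treated as approximate.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Return sensitivity, specificity, and Cohen's kappa for a 2x2 table."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # true positives among gold-standard positives
    specificity = tn / (tn + fp)          # true negatives among gold-standard negatives
    observed_agreement = (tp + tn) / n
    # Expected agreement by chance, from the marginal totals
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed_agreement - expected) / (1 - expected)
    return sensitivity, specificity, kappa

sens, spec, kappa = diagnostic_accuracy(tp=61, fp=1, fn=3, tn=97)
print(f"sensitivity={sens:.4f}, specificity={spec:.4f}, kappa={kappa:.3f}")
# -> approximately 0.9531, 0.9898, 0.948, matching the reported values
```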
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.
Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-07-28
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
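The projection-histogram counting step can be illustrated with a small sketch: after the orthographic projection, plant pixels are summed along image columns and contiguous above-threshold runs in the resulting profile are counted as plants. This is a simplified, generic sketch under assumed inputs (a binary plant mask and an arbitrary threshold fraction), not the authors' implementation.

```python
import numpy as np

def count_plants_by_projection(mask, threshold_fraction=0.1):
    """Count plants in a binary plant/soil mask via a column projection histogram.

    mask: 2D boolean array, True where plant pixels were segmented (assumed input).
    A plant is counted for each contiguous run of columns whose projected pixel
    count exceeds a fraction of the maximum column response.
    """
    profile = mask.sum(axis=0).astype(float)        # projection histogram (per column)
    threshold = threshold_fraction * profile.max()
    above = profile > threshold
    # Count rising edges: transitions from below- to above-threshold columns
    rising_edges = np.count_nonzero(above[1:] & ~above[:-1]) + int(above[0])
    return rising_edges

# Toy example: two well-separated "plants" in a 20x30 mask
mask = np.zeros((20, 30), dtype=bool)
mask[5:15, 4:8] = True
mask[5:15, 20:25] = True
print(count_plants_by_projection(mask))  # -> 2
```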
The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.
Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo
2014-12-15
Multiple Sclerosis (MS) is an immune-mediated disease of the Central Nervous System with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of this disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques by focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started the recruitment of patients in the early phase of MS in 2010 and it remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale, Multiple Sclerosis Functional Composite, and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools in parallel with clinical information will provide significant and new knowledge regarding neurodegeneration in MS and provide new clinical and imaging biomarkers to help monitor disease progression in these patients.
Functional metabolomics as a tool to analyze Mediator function and structure in plants.
Davoine, Celine; Abreu, Ilka N; Khajeh, Khalil; Blomberg, Jeanette; Kidd, Brendan N; Kazan, Kemal; Schenk, Peer M; Gerber, Lorenz; Nilsson, Ove; Moritz, Thomas; Björklund, Stefan
2017-01-01
Mediator is a multiprotein transcriptional co-regulator complex composed of four modules: Head, Middle, Tail, and Kinase. It conveys signals from promoter-bound transcriptional regulators to RNA polymerase II and thus plays an essential role in eukaryotic gene regulation. We describe subunit localization and activities of Mediator in Arabidopsis through metabolome and transcriptome analyses from a set of Mediator mutants. Functional metabolomic analysis based on the metabolite profiles of Mediator mutants using multivariate statistical analysis and heat-map visualization shows that different subunit mutants display distinct metabolite profiles, which cluster according to the reported localization of the corresponding subunits in yeast. Based on these results, we suggest localization of previously unassigned plant Mediator subunits to specific modules. We also describe novel roles for individual subunits in development, and demonstrate changes in gene expression patterns and specific metabolite levels in med18 and med25, which can explain their phenotypes. We find that med18 displays levels of phytoalexins normally found in wild type plants only after exposure to pathogens. Our results indicate that different Mediator subunits are involved in specific signaling pathways that control developmental processes and tolerance to pathogen infections.
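A minimal version of the clustering step described above (grouping subunit mutants by the similarity of their metabolite profiles) can be sketched with hierarchical clustering of a mutants-by-metabolites matrix. The simulated profiles and placeholder mutant labels below are assumptions for illustration only, not the study's measurements or analysis pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
mutants = ["mutA1", "mutA2", "mutB1", "mutB2"]        # hypothetical mutant labels

# Simulated metabolite profiles (mutants x metabolites): two underlying modules
module_a = rng.normal(size=60)
module_b = rng.normal(size=60)
profiles = np.vstack([module_a, module_a, module_b, module_b])
profiles += 0.2 * rng.normal(size=profiles.shape)

# Hierarchical clustering of mutants by profile similarity (correlation distance)
Z = linkage(profiles, method="average", metric="correlation")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(mutants, clusters)))   # mutants sharing a module cluster together
```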
Media and Literacy: What's Good?
ERIC Educational Resources Information Center
Newkirk, Thomas
2006-01-01
For schools to effectively teach literacy, they should work with, not against, the cultural tools that students bring to school. Outside school, students' lives are immersed in visually mediated narratives. By tapping into the cultural, artistic, and linguistic resources of popular culture and multimedia, teachers can create more willing readers…
Academic Literacies: The Word Is Not Enough
ERIC Educational Resources Information Center
Richards, Kendall; Pilcher, Nick
2018-01-01
For Academic Literacies, the world is textually mediated; written texts and what informs them reveal elements such as subject-discipline practices. Furthermore, multi-modalities, for example, visual representation, inform written text, and multiple methods of inquiry, including interviews, shed light on written text production. In this article we…
NASA Technical Reports Server (NTRS)
Mulavara, Ajitkumar; Ruttley, Tara; Cohen, Helen; Peters, Brian; Miller, Chris; Brady, Rachel; Merkle, Lauren; Bloomberg, Jacob
2010-01-01
Exposure to the microgravity conditions of space flight induces adaptive modification in the control of vestibular-mediated reflexive head movement during locomotion after space flight. Space flight exposes astronauts to somatosensory adaptation in both the vestibular and body load-sensing (BLS) systems. The goal of these studies was to examine the contributions of vestibular and BLS-mediated somatosensory influences on head movement control during locomotion after long-duration space flight. Subjects were asked to walk on a treadmill driven at 1.8 m/s while performing a visual acuity task. Data were collected using the same testing protocol from three independent subject groups: 1) normal subjects before and after exposure to 30 minutes of 40% bodyweight unloaded treadmill walking, 2) bilateral labyrinthine deficient (LD) patients and 3) astronauts who performed the protocol before and after long duration space flight. Head and trunk segmental motion data were obtained to calculate the angular head pitch (HP) movements during walking trials while subjects performed the visual task, in order to estimate the contributions of vestibular reflexive mechanisms to HP movements. Results showed that exposure to unloaded locomotion caused a significant increase in HP movements, whereas in the LD patients the HP movements were significantly decreased. Results from the astronaut subjects showed a heterogeneous response of both increases and decreases in the amplitude of HP movement. We infer that BLS-mediated somatosensory input centrally modulates vestibular input and can adaptively modify head-movement control during locomotion. Thus, space flight may cause a central adaptation mediated by the converging vestibular and body load-sensing somatosensory systems.
Visual Place Learning in Drosophila melanogaster
Ofstad, Tyler A.; Zuker, Charles S.; Reiser, Michael B.
2011-01-01
The ability of insects to learn and navigate to specific locations in the environment has fascinated naturalists for decades. While the impressive navigation abilities of ants, bees, wasps, and other insects clearly demonstrate that insects are capable of visual place learning [1-4], little is known about the underlying neural circuits that mediate these behaviors. Drosophila melanogaster is a powerful model organism for dissecting the neural circuitry underlying complex behaviors, from sensory perception to learning and memory. Flies can identify and remember visual features such as size, color, and contour orientation [5, 6]. However, the extent to which they use vision to recall specific locations remains unclear. Here we describe a visual place-learning platform and demonstrate that Drosophila are capable of forming and retaining visual place memories to guide selective navigation. By targeted genetic silencing of small subsets of cells in the Drosophila brain we show that neurons in the ellipsoid body, but not in the mushroom bodies, are necessary for visual place learning. Together, these studies reveal distinct neuroanatomical substrates for spatial versus non-spatial learning, and substantiate Drosophila as a powerful model for the study of spatial memories. PMID:21654803
Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei
2014-01-01
Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974
Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray
2014-11-01
The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.
Eye-Catching Odors: Olfaction Elicits Sustained Gazing to Faces and Eyes in 4-Month-Old Infants
Lewkowicz, David J.; Goubet, Nathalie; Schaal, Benoist
2013-01-01
This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues. PMID:24015175
Effects of complete monocular deprivation in visuo-spatial memory.
Cattaneo, Zaira; Merabet, Lotfi B; Bhatt, Ela; Vecchi, Tomaso
2008-09-30
Monocular deprivation has been associated with both specific deficits and enhancements in visual perception and processing. In this study, performance on a visuo-spatial memory task was compared in congenitally monocular individuals and sighted control individuals viewing monocularly (i.e., patched) and binocularly. The task required the individuals to view and memorize a series of target locations on two-dimensional matrices. Overall, congenitally monocular individuals performed worse than sighted individuals (with a specific deficit in simultaneously maintaining distinct spatial representations in memory), indicating that the lack of binocular visual experience affects the way visual information is represented in visuo-spatial memory. No difference was observed between the monocular and binocular viewing control groups, suggesting that early monocular deprivation affects the development of cortical mechanisms mediating visuo-spatial cognition.
Effects of attention and laterality on motion and orientation discrimination in deaf signers.
Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R
2013-06-01
Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left hemisphere advantage for motion processing, while hearing nonsigners do not. To examine whether this finding extends to other aspects of visual processing, we compared deaf signers and hearing nonsigners on motion, form, and brightness discrimination tasks. Secondly, to examine whether hemispheric lateralities are affected by attention, we employed a dual-task paradigm to measure form and motion thresholds under "full" vs. "poor" attention conditions. Deaf signers, but not hearing nonsigners, exhibited a right visual field advantage for motion processing. This effect was also seen for form processing and not for the brightness task. Moreover, no group differences were observed in attentional effects, and the motion and form visual field asymmetries were not modulated by attention, suggesting they occur at early levels of sensory processing. In sum, the results show that processing of motion and form, believed to be mediated by dorsal and ventral visual pathways, respectively, are left-hemisphere dominant in deaf signers. Published by Elsevier Inc.
Matching optical flow to motor speed in virtual reality while running on a treadmill.
Caramenti, Martina; Lafortuna, Claudio L; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine
2018-01-01
We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed (i.e., the treadmill's speed). For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement in physical activity for healthier lifestyles and disease prevention and care.
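A minimal sketch of the kind of adaptive staircase described here (adjusting the visual-scene speed until it is perceived as equal to the treadmill speed) is given below. The simulated observer, step size, and stopping rule are assumptions chosen for illustration, not the parameters used in the study.

```python
import random

def staircase_pse(running_speed_kmh, start_gain=1.5, step=0.05, n_reversals=8,
                  true_pse_gain=1.25):
    """Estimate the point of subjective equality (PSE) with a 1-up/1-down staircase.

    The visual speed is running_speed * gain. A simulated observer (assumption)
    judges the scene 'faster' when the visual speed exceeds true_pse_gain times
    the running speed, plus some response noise.
    """
    gain = start_gain
    reversals, last_direction = [], None
    while len(reversals) < n_reversals:
        visual_speed = gain * running_speed_kmh
        noisy_criterion = true_pse_gain + random.gauss(0, 0.05)
        judged_faster = visual_speed > noisy_criterion * running_speed_kmh
        direction = -1 if judged_faster else +1      # lower gain if judged faster
        if last_direction is not None and direction != last_direction:
            reversals.append(gain)
        gain += direction * step
        last_direction = direction
    return running_speed_kmh * sum(reversals) / len(reversals)  # PSE as visual speed

print(staircase_pse(10.0))  # visual speed (km/h) perceived as matching 10 km/h running
```

With a simulated PSE gain above 1, the estimate comes out above the running speed, mirroring the underestimation of visual speed reported in the abstract.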
Nielsen, Simon; Wilms, L Inge
2014-01-01
We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method to detect cognitive aging effects even within a narrow age range, and a useful approach to structure the relationships between measured variables and the cognitive functional foundation they supposedly represent.
Retinal and visual system: occupational and environmental toxicology.
Fox, Donald A
2015-01-01
Occupational chemical exposure often results in sensory systems alterations that occur without other clinical signs or symptoms. Approximately 3000 chemicals are toxic to the retina and central visual system. Their dysfunction can have immediate, long-term, and delayed effects on mental health, physical health, and performance and lead to increased occupational injuries. The aims of this chapter are fourfold. First, provide references on retinal/visual system structure, function, and assessment techniques. Second, discuss the retinal features that make it especially vulnerable to toxic chemicals. Third, review the clinical and corresponding experimental data regarding retinal/visual system deficits produced by occupational toxicants: organic solvents (carbon disulfide, trichloroethylene, tetrachloroethylene, styrene, toluene, and mixtures) and metals (inorganic lead, methyl mercury, and mercury vapor). Fourth, discuss occupational and environmental toxicants as risk factors for late-onset retinal diseases and degeneration. Overall, the toxicants altered color vision, rod- and/or cone-mediated electroretinograms, visual fields, spatial contrast sensitivity, and/or retinal thickness. The findings elucidate the importance of conducting multimodal noninvasive clinical, electrophysiologic, imaging and vision testing to monitor toxicant-exposed workers for possible retinal/visual system alterations. Finally, since the retina is a window into the brain, an increased awareness and understanding of retinal/visual system dysfunction should provide additional insight into acquired neurodegenerative disorders. © 2015 Elsevier B.V. All rights reserved.
Stevens, W Dale; Kahn, Itamar; Wig, Gagan S; Schacter, Daniel L
2012-08-01
Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes.
Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.
2015-01-01
Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450
Effect of pattern complexity on the visual span for Chinese and alphabet characters
Wang, Hui; He, Xuanzi; Legge, Gordon E.
2014-01-01
The visual span for reading is the number of letters that can be recognized without moving the eyes and is hypothesized to impose a sensory limitation on reading speed. Factors affecting the size of the visual span have been studied using alphabet letters. There may be common constraints applying to recognition of other scripts. The aim of this study was to extend the concept of the visual span to Chinese characters and to examine the effect of the greater complexity of these characters. We measured visual spans for Chinese characters and alphabet letters in the central vision of bilingual subjects. Perimetric complexity was used as a metric to quantify the pattern complexity of binary character images. The visual span tests were conducted with four sets of stimuli differing in complexity—lowercase alphabet letters and three groups of Chinese characters. We found that the size of visual spans decreased with increasing complexity, ranging from 10.5 characters for alphabet letters to 4.5 characters for the most complex Chinese characters studied. A decomposition analysis revealed that crowding was the dominant factor limiting the size of the visual span, and the amount of crowding increased with complexity. Errors in the spatial arrangement of characters (mislocations) had a secondary effect. We conclude that pattern complexity has a major effect on the size of the visual span, mediated in large part by crowding. Measuring the visual span for Chinese characters is likely to have high relevance to understanding visual constraints on Chinese reading performance. PMID:24993020
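Perimetric complexity of a binary character image is commonly computed from the length of the ink boundary and the ink area (often as perimeter squared over area, sometimes further normalized by 4*pi). The sketch below estimates it from pixel counts; the boundary-length approximation and the normalization are simplifying assumptions for illustration, not the exact measurement procedure used in the study.

```python
import numpy as np

def perimetric_complexity(glyph):
    """Approximate perimetric complexity of a binary character image.

    glyph: 2D boolean array, True for ink pixels (assumed input).
    Complexity is estimated as perimeter**2 / (4 * pi * area), where the
    perimeter is approximated by counting ink pixels with at least one
    non-ink 4-neighbor (a crude boundary-length estimate).
    """
    ink = glyph.astype(bool)
    padded = np.pad(ink, 1, constant_values=False)
    neighbors_all_ink = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                         padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = ink & ~neighbors_all_ink
    perimeter = boundary.sum()
    area = ink.sum()
    return perimeter**2 / (4 * np.pi * area)

# A filled square has low complexity; glyphs with internal strokes score higher.
square = np.ones((20, 20), dtype=bool)
print(round(perimetric_complexity(square), 2))
```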
Inhibition of Return in the Visual Field
Bao, Yan; Lei, Quan; Fang, Yuan; Tong, Yu; Schill, Kerstin; Pöppel, Ernst; Strasburger, Hans
2013-01-01
Inhibition of return (IOR) as an indicator of attentional control is characterized by an eccentricity effect, that is, the more peripheral visual field shows a stronger IOR magnitude relative to the perifoveal visual field. However, it could be argued that this eccentricity effect may not be an attention effect, but due to cortical magnification. To test this possibility, we examined this eccentricity effect in two conditions: the same-size condition in which identical stimuli were used at different eccentricities, and the size-scaling condition in which stimuli were scaled according to the cortical magnification factor (M-scaling), thus stimuli being larger at the more peripheral locations. The results showed that the magnitude of IOR was significantly stronger in the peripheral relative to the perifoveal visual field, and this eccentricity effect was independent of the manipulation of stimulus size (same-size or size-scaling). These results suggest a robust eccentricity effect of IOR which cannot be eliminated by M-scaling. Underlying neural mechanisms of the eccentricity effect of IOR are discussed with respect to both cortical and subcortical structures mediating attentional control in the perifoveal and peripheral visual field. PMID:23820946
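The size-scaling (M-scaling) manipulation can be made concrete with a standard cortical magnification formula of the form M(E) = M0 / (1 + E/E2), under which a stimulus at eccentricity E is enlarged by the factor (1 + E/E2) so that it occupies roughly the same cortical extent as a foveal stimulus. The form of the formula and the E2 value below are textbook-style assumptions for illustration, not the parameters reported in the study.

```python
def m_scaled_size(base_size_deg, eccentricity_deg, e2=2.5):
    """Scale a stimulus so its cortical representation is roughly eccentricity-invariant.

    Assumes cortical magnification of the form M(E) = M0 / (1 + E / E2),
    so the linear scaling factor relative to the fovea is (1 + E / E2).
    e2 = 2.5 deg is an illustrative value, not the study's parameter.
    """
    return base_size_deg * (1.0 + eccentricity_deg / e2)

# Example: a 1-deg target shown at perifoveal (3 deg) and peripheral (12 deg) locations
for ecc in (3.0, 12.0):
    print(f"eccentricity {ecc:>4.1f} deg -> scaled size {m_scaled_size(1.0, ecc):.2f} deg")
```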
A neural measure of precision in visual working memory.
Ester, Edward F; Anderson, David E; Serences, John T; Awh, Edward
2013-05-01
Recent studies suggest that the temporary storage of visual detail in working memory is mediated by sensory recruitment or sustained patterns of stimulus-specific activation within feature-selective regions of visual cortex. According to a strong version of this hypothesis, the relative "quality" of these patterns should determine the clarity of an individual's memory. Here, we provide a direct test of this claim. We used fMRI and a forward encoding model to characterize population-level orientation-selective responses in visual cortex while human participants held an oriented grating in memory. This analysis, which enables a precise quantitative description of multivoxel, population-level activity measured during working memory storage, revealed graded response profiles whose amplitudes were greatest for the remembered orientation and fell monotonically as the angular distance from this orientation increased. Moreover, interparticipant differences in the dispersion-but not the amplitude-of these response profiles were strongly correlated with performance on a concurrent memory recall task. These findings provide important new evidence linking the precision of sustained population-level responses in visual cortex and memory acuity.
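Forward (channel) encoding analyses of the kind referenced here are often implemented in two least-squares steps: estimate a weight matrix mapping hypothetical orientation channels to voxel responses from training data, then invert those weights to reconstruct channel response profiles from held-out data. The sketch below shows that generic two-step procedure on simulated data; the basis functions, dimensions, and variable names are assumptions and do not reproduce the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_channels, n_train, n_test = 50, 8, 120, 40
orientations = np.linspace(0, 180, n_channels, endpoint=False)

def channel_responses(stim_deg, centers_deg, exponent=6):
    """Idealized orientation channels: half-rectified, 180-deg-periodic raised cosines
    (an assumed basis set in the spirit of forward encoding models)."""
    delta = np.pi * (stim_deg[:, None] - centers_deg[None, :]) / 90.0
    return np.maximum(np.cos(delta), 0.0) ** exponent        # trials x channels

# Simulated training data: B = W @ C.T + noise
stim_train = rng.uniform(0, 180, n_train)
C_train = channel_responses(stim_train, orientations)            # trials x channels
W_true = rng.normal(size=(n_voxels, n_channels))
B_train = W_true @ C_train.T + 0.1 * rng.normal(size=(n_voxels, n_train))

# Step 1: estimate channel-to-voxel weights by least squares
W_hat = np.linalg.lstsq(C_train, B_train.T, rcond=None)[0].T     # voxels x channels

# Step 2: invert the weights to reconstruct channel profiles for test data
stim_test = np.full(n_test, 45.0)                                # remembered orientation
B_test = W_true @ channel_responses(stim_test, orientations).T
C_hat = np.linalg.lstsq(W_hat, B_test, rcond=None)[0]            # channels x trials
print("peak channel:", orientations[np.argmax(C_hat.mean(axis=1))])  # ~45 deg
```

The amplitude and dispersion of the reconstructed channel profile are then the quantities that such studies relate to memory performance.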
Dynamic binding of visual features by neuronal/stimulus synchrony.
Iwabuchi, A
1998-05-01
When people see a visual scene, certain parts of the visual scene are treated as belonging together and we regard them as a perceptual unit, which is called a "figure". People focus on figures, and the remaining parts of the scene are disregarded as "ground". In Gestalt psychology this process is called "figure-ground segregation". According to current perceptual psychology, a figure is formed by binding various visual features in a scene, and developments in neuroscience have revealed that there are many feature-encoding neurons, which respond to such features specifically. It is not known, however, how the brain binds different features of an object into a coherent visual object representation. Recently, the theory of binding by neuronal synchrony, which argues that feature binding is dynamically mediated by neuronal synchrony of feature-encoding neurons, has been proposed. This review article outlines the problem of figure-ground segregation and feature binding, summarizes neurophysiological and psychophysical experiments and theory relevant to feature binding by neuronal/stimulus synchrony, and suggests possible directions for future research on this topic.
Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi
2018-06-05
Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and entire-field motion trajectory (i.e., posture or motion-trajectory elements, respectively), and it has not been revealed which element mediates the attractiveness. Here, we show that either posture or motion-trajectory elements alone can attract medaka. We decomposed biological motion of the medaka into the two elements and synthesized visual stimuli that contained both, either, or none of the two elements. We found that medaka were attracted by visual stimuli that contained at least one of the two elements. Together with previously reported static visual cues in medaka, these findings add to the multiplicity of information that can support conspecific recognition. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.
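The decomposition described here (separating body-shape motion from the entire-field motion trajectory) can be sketched simply: the trajectory element is the frame-by-frame centroid path of the tracked body points, and the posture element is the body points expressed relative to that centroid. The code below is a generic sketch under that assumption, using hypothetical tracked points rather than the study's stimuli.

```python
import numpy as np

def decompose_biological_motion(points):
    """Split tracked body points into trajectory and posture elements.

    points: array of shape (n_frames, n_points, 2), x/y positions of tracked
    body landmarks over time (assumed input format).
    Returns (trajectory, posture): the centroid path of shape (n_frames, 2),
    and the centroid-relative point positions of shape (n_frames, n_points, 2).
    """
    trajectory = points.mean(axis=1)                  # whole-body motion path
    posture = points - trajectory[:, None, :]         # body-shape motion only
    return trajectory, posture

# Toy example: three landmarks drifting rightward while the body shape oscillates
frames = np.arange(60)
drift = np.stack([0.1 * frames, np.zeros_like(frames, dtype=float)], axis=1)
shape = 0.5 * np.sin(frames / 5.0)[:, None, None] * np.array([[[0, 1], [1, 0], [-1, -1]]])
points = drift[:, None, :] + np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, -1.0]]) + shape
trajectory, posture = decompose_biological_motion(points)
print(trajectory.shape, posture.shape)   # (60, 2) (60, 3, 2)
```

Rendering either element alone (posture at a fixed location, or the trajectory applied to a rigid shape) yields the kind of partial stimuli the study tested.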
ERIC Educational Resources Information Center
Bente, Gary; Ruggenberg, Sabine; Kramer, Nicole C.; Eschenburg, Felix
2008-01-01
This study analyzes the influence of avatars on social presence, interpersonal trust, perceived communication quality, nonverbal behavior, and visual attention in Net-based collaborations using a comparative approach. A real-time communication window including a special avatar interface was integrated into a shared collaborative workspace.…
A Contextual View of Adult Learning and Memory.
ERIC Educational Resources Information Center
Glynn, Shawn M.
Explanations of age-related differences in adult memory usually assume two forms: processing deficits and structural deficits. Processing deficit explanations attribute recall differences to a failure of older adults to effectively use the processes of attention, organization, mediation (the use of such devices as visual images and verbal images…
Visualizing estrogen receptor-α-expressing neurons using a new ERα-ZsGreen reporter mouse line
USDA-ARS?s Scientific Manuscript database
A variety of biological functions of estrogens, including regulation of energy metabolism, are mediated by neurons expressing estrogen receptor-α (ERα) in the brain. However, complex intracellular processes in these ERα-expressing neurons are difficult to unravel, due to the lack of strategy to visua...
Oral Conversations Online: Redefining Oral Competence in Synchronous Environments
ERIC Educational Resources Information Center
Lamy, Marie-Noelle
2004-01-01
In this article the focus is on methodology for analysing learner-learner oral conversations mediated by computers. With the increasing availability of synchronous voice-based groupware and the additional facilities offered by audio-graphic tools, language learners have opportunities for collaborating on oral tasks, supported by visual and textual…
Utilizing Multi-Modal Literacies in Middle Grades Science
ERIC Educational Resources Information Center
Saurino, Dan; Ogletree, Tamra; Saurino, Penelope
2010-01-01
The nature of literacy is changing. Increased student use of computer-mediated, digital, and visual communication spans our understanding of adolescent multi-modal capabilities that reach beyond the traditional conventions of linear speech and written text in the science curriculum. Advancing technology opens doors to learning that involve…
Brain Network Interactions in Auditory, Visual and Linguistic Processing
ERIC Educational Resources Information Center
Horwitz, Barry; Braun, Allen R.
2004-01-01
In the paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging, especially PET and fMRI, provide data that are obtained essentially simultaneously from much of the brain, and thus are…
Working Memory Components and Problem-Solving Accuracy: Are There Multiple Pathways?
ERIC Educational Resources Information Center
Swanson, H. Lee; Fung, Wenson
2016-01-01
This study determined the working memory (WM) components (executive, phonological short-term memory [STM], and visual-spatial sketchpad) that best predicted mathematical word problem-solving accuracy in elementary schoolchildren (N = 392). The battery of tests administered to assess mediators between WM and problem-solving included measures of…
Internet Use during Childhood and the Ecological Techno-Subsystem
ERIC Educational Resources Information Center
Johnson, Genevieve Marie; Puplampu, Korbla P.
2008-01-01
Research findings suggest both positive and negative developmental consequences of Internet use during childhood (e.g., playing video games have been associated with enhanced visual skills as well as increased aggression). Several studies have concluded that environmental factors mediate the developmental impact of childhood online behaviour. From…
ERIC Educational Resources Information Center
Bagga-Gupta, Sangeeta
2010-01-01
This article brings together salient findings regarding communication and identity through studies of everyday social practices, studies of discourses about these practices and policy documents pertaining to special schools from "previous" and "ongoing" ethnographic projects based at the KKOM-DS (Communication, Culture and…
Chahar, Madhvi; Anvikar, Anup; Dixit, Rajnikant; Valecha, Neena
2018-07-01
The loop-mediated isothermal amplification (LAMP) assay is a sensitive, prompt, high-throughput, and field-deployable technique for nucleic acid amplification under isothermal conditions. In this study, we developed and optimized four different visualization methods for the LAMP assay to detect Pfcrt K76T mutants of P. falciparum and compared their important features for one-pot in-field applications. Although all four tested LAMP methods could successfully detect K76T mutants of P. falciparum, the malachite green and HNB based methods were found to be more efficient when considering time, safety, sensitivity, cost, and simplicity. Among the four visual dyes used to detect LAMP products, hydroxynaphthol blue and malachite green produced a long-lasting, stable color change and brightness in a closed-tube approach that prevents cross-contamination risk. Our results indicate that LAMP offers a novel, convenient, rapid, sensitive, cost-effective, and fairly user-friendly tool for the detection of K76T mutants of P. falciparum and therefore presents an alternative to PCR-based assays. Based on our comparative analysis, a suitable field-based LAMP visualization method can easily be chosen for the monitoring of other important drug targets (e.g., the Kelch13 propeller region). Copyright © 2018 Elsevier Inc. All rights reserved.
Karas, Vlad O; Westerlaken, Ilja; Meyer, Anne S
2013-05-31
Oxidative stress is an unavoidable byproduct of aerobic life. Molecular oxygen is essential for terrestrial metabolism, but it also takes part in many damaging reactions within living organisms. The combination of aerobic metabolism and iron, which is another vital compound for life, is enough to produce radicals through Fenton chemistry and degrade cellular components. DNA degradation is arguably the most damaging process involving intracellular radicals, as DNA repair is far from trivial. The assay presented in this article offers a quantitative technique to measure and visualize the effect of molecules and enzymes on radical-mediated DNA damage. The DNA protection assay is a simple, quick, and robust tool for the in vitro characterization of the protective properties of proteins or chemicals. It involves exposing DNA to a damaging oxidative reaction and adding varying concentrations of the compound of interest. The reduction or increase of DNA damage as a function of compound concentration is then visualized using gel electrophoresis. In this article we demonstrate the technique of the DNA protection assay by measuring the protective properties of the DNA-binding protein from starved cells (Dps). Dps is a mini-ferritin that is utilized by more than 300 bacterial species to powerfully combat environmental stressors. Here we present the Dps purification protocol and the optimized assay conditions for evaluating DNA protection by Dps.
PATTERNS OF CLINICALLY SIGNIFICANT COGNITIVE IMPAIRMENT IN HOARDING DISORDER.
Mackin, R Scott; Vigil, Ofilio; Insel, Philip; Kivowitz, Alana; Kupferman, Eve; Hough, Christina M; Fekri, Shiva; Crothers, Ross; Bickford, David; Delucchi, Kevin L; Mathews, Carol A
2016-03-01
The cognitive characteristics of individuals with hoarding disorder (HD) are not well understood. Existing studies are relatively few and somewhat inconsistent but suggest that individuals with HD may have specific dysfunction in the cognitive domains of categorization, speed of information processing, and decision making. However, there have been no studies evaluating the degree to which cognitive dysfunction in these domains reflects clinically significant cognitive impairment (CI). Participants included 78 individuals who met DSM-V criteria for HD and 70 age- and education-matched controls. Cognitive performance on measures of memory, attention, information processing speed, abstract reasoning, visuospatial processing, decision making, and categorization ability was evaluated for each participant. Rates of clinical impairment for each measure were compared, as were age- and education-corrected raw scores for each cognitive test. HD participants showed greater incidence of CI on measures of visual memory, visual detection, and visual categorization relative to controls. Raw-score comparisons between groups showed similar results with HD participants showing lower raw-score performance on each of these measures. In addition, in raw-score comparisons HD participants also demonstrated relative strengths compared to control participants on measures of verbal and visual abstract reasoning. These results suggest that HD is associated with a pattern of clinically significant CI in some visually mediated neurocognitive processes including visual memory, visual detection, and visual categorization. Additionally, these results suggest HD individuals may also exhibit relative strengths, perhaps compensatory, in abstract reasoning in both verbal and visual domains. © 2015 Wiley Periodicals, Inc.
A novel RPE65 inhibitor CU239 suppresses visual cycle and prevents retinal degeneration.
Shin, Younghwa; Moiseyev, Gennadiy; Petrukhin, Konstantin; Cioffi, Christopher L; Muthuraman, Parthasarathy; Takahashi, Yusuke; Ma, Jian-Xing
2018-07-01
The retinoid visual cycle is an ocular retinoid metabolic pathway specifically dedicated to supporting vertebrate vision. The visual cycle serves not only to generate the light-sensitive visual chromophore 11-cis-retinal, but also to clear toxic byproducts of the normal visual cycle (i.e. all-trans-retinal and its condensation products) from the retina, ensuring both visual function and retinal health. Unfortunately, various conditions including genetic predisposition, environment and aging may contribute to a functional decline in all-trans-retinal clearance. To combat all-trans-retinal mediated retinal degeneration, we sought to slow down the retinoid influx from the RPE by inhibiting the visual cycle with a small molecule. The present study describes the identification of CU239, a novel non-retinoid inhibitor of RPE65, a key enzyme in the visual cycle. Our data demonstrated that CU239 selectively inhibited the isomerase activity of RPE65, with an IC50 of 6 μM. Further, our results indicated that CU239 inhibited RPE65 via competition with its substrate all-trans-retinyl ester. Mice with systemic injection of CU239 exhibited delayed chromophore regeneration after light bleach, and showed partial protection of the retina against injury from high-intensity light. Taken together, CU239 is a potent visual cycle modulator and may have therapeutic potential for retinal degeneration. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
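An IC50 such as the reported ~6 μM is typically estimated by fitting a dose-response (Hill) curve to residual enzyme activity measured across inhibitor concentrations. The sketch below fits activity = 1 / (1 + (c / IC50)^h) to simulated data with scipy; the simulated measurements, Hill slope, and variable names are assumptions for illustration only, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc_um, ic50_um, hill):
    """Fractional residual activity under a simple Hill inhibition model."""
    return 1.0 / (1.0 + (conc_um / ic50_um) ** hill)

# Simulated dose-response data (assumed, for illustration): true IC50 = 6 uM
rng = np.random.default_rng(1)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)
activity = hill_inhibition(conc, 6.0, 1.0) + rng.normal(0, 0.02, conc.size)

params, _ = curve_fit(hill_inhibition, conc, activity, p0=[5.0, 1.0])
print(f"fitted IC50 = {params[0]:.1f} uM, Hill slope = {params[1]:.2f}")
```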
Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M
2018-05-09
Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.
Effects of local myopic defocus on refractive development in monkeys.
Smith, Earl L; Hung, Li-Fang; Huang, Juan; Arumugam, Baskar
2013-11-01
Visual signals that produce myopia are mediated by local, regionally selective mechanisms. However, little is known about spatial integration for signals that slow eye growth. The purpose of this study was to determine whether the effects of myopic defocus are integrated in a local manner in primates. Beginning at 24 ± 2 days of age, seven rhesus monkeys were reared with monocular spectacles that produced 3 diopters (D) of relative myopic defocus in the nasal visual field of the treated eye but allowed unrestricted vision in the temporal field (NF monkeys). Seven monkeys were reared with monocular +3 D lenses that produced relative myopic defocus across the entire field of view (FF monkeys). Comparison data from previous studies were available for 11 control monkeys, 8 monkeys that experienced 3 D of hyperopic defocus in the nasal field, and 6 monkeys exposed to 3 D of hyperopic defocus across the entire field. Refractive development, corneal power, and axial dimensions were assessed at 2- to 4-week intervals using retinoscopy, keratometry, and ultrasonography, respectively. Eye shape was assessed using magnetic resonance imaging. In response to full-field myopic defocus, the FF monkeys developed compensating hyperopic anisometropia, the degree of which was relatively constant across the horizontal meridian. In contrast, the NF monkeys exhibited compensating hyperopic changes in refractive error that were greatest in the nasal visual field. The changes in the pattern of peripheral refractions in the NF monkeys reflected interocular differences in vitreous chamber shape. As with form deprivation and hyperopic defocus, the effects of myopic defocus are mediated by mechanisms that integrate visual signals in a local, regionally selective manner in primates. These results are in agreement with the hypothesis that peripheral vision can influence eye shape and potentially central refractive error in a manner that is independent of central visual experience.
Wright, Nathaniel C; Wessel, Ralf
2017-10-01
A primary goal of systems neuroscience is to understand cortical function, typically by studying spontaneous and stimulus-modulated cortical activity. Mounting evidence suggests a strong and complex relationship exists between the ongoing and stimulus-modulated cortical state. To date, most work in this area has been based on spiking in populations of neurons. While advantageous in many respects, this approach is limited in scope: it records the activity of a minority of neurons and gives no direct indication of the underlying subthreshold dynamics. Membrane potential recordings can fill these gaps in our understanding, but stable recordings are difficult to obtain in vivo. Here, we recorded subthreshold cortical visual responses in the ex vivo turtle eye-attached whole brain preparation, which is ideally suited for such a study. We found that, in the absence of visual stimulation, the network was "synchronous"; neurons displayed network-mediated transitions between hyperpolarized (Down) and depolarized (Up) membrane potential states. The prevalence of these slow-wave transitions varied across turtles and recording sessions. Visual stimulation evoked similar Up states, which were on average larger and less reliable when the ongoing state was more synchronous. Responses were muted when immediately preceded by large, spontaneous Up states. Evoked spiking was sparse, highly variable across trials, and mediated by concerted synaptic inputs that were, in general, only very weakly correlated with inputs to nearby neurons. Together, these results highlight the multiplexed influence of the cortical network on the spontaneous and sensory-evoked activity of individual cortical neurons. NEW & NOTEWORTHY Most studies of cortical activity focus on spikes. Subthreshold membrane potential recordings can provide complementary insight, but stable recordings are difficult to obtain in vivo. Here, we recorded the membrane potentials of cortical neurons during ongoing and visually evoked activity. We observed a strong relationship between network and single-neuron evoked activity spanning multiple temporal scales. The membrane potential perspective of cortical dynamics thus highlights the influence of intrinsic network properties on visual processing. Copyright © 2017 the American Physiological Society.
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
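The frequency-specific effects described above (30-70 Hz versus 60-90 Hz) amount to comparing power in two gamma sub-bands. A minimal sketch on a synthetic signal, assuming Welch's PSD estimate from scipy; the sampling rate and signal content are illustrative assumptions, not MEG data from the study:

```python
# Minimal sketch: comparing power in two gamma sub-bands (30-70 Hz and
# 60-90 Hz) of a synthetic signal, using Welch's PSD estimate.
import numpy as np
from scipy.signal import welch

fs = 600.0                                # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 45 * t) + 0.5 * np.sin(2 * np.pi * 75 * t) \
         + rng.normal(scale=0.5, size=t.size)   # synthetic "gamma" content

freqs, psd = welch(signal, fs=fs, nperseg=1024)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # integrate PSD over band

print("30-70 Hz power:", band_power(30, 70))
print("60-90 Hz power:", band_power(60, 90))
```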
Pareidolias: complex visual illusions in dementia with Lewy bodies.
Uchiyama, Makoto; Nishio, Yoshiyuki; Yokoi, Kayoko; Hirayama, Kazumi; Imamura, Toru; Shimomura, Tatsuo; Mori, Etsuro
2012-08-01
Patients rarely experience visual hallucinations while being observed by clinicians. Therefore, instruments to detect visual hallucinations directly from patients are needed. Pareidolias, which are complex visual illusions involving ambiguous forms that are perceived as meaningful objects, are analogous to visual hallucinations and have the potential to be a surrogate indicator of visual hallucinations. In this study, we explored the clinical utility of a newly developed instrument for evoking pareidolic illusions, the Pareidolia test, in patients with dementia with Lewy bodies, one of the most common causes of visual hallucinations in the elderly. Thirty-four patients with dementia with Lewy bodies, 34 patients with Alzheimer's disease and 26 healthy controls were given the Pareidolia test. Patients with dementia with Lewy bodies produced a much greater number of pareidolic illusions compared with those with Alzheimer's disease or controls. A receiver operating characteristic analysis demonstrated that the number of pareidolias differentiated dementia with Lewy bodies from Alzheimer's disease with a sensitivity of 100% and a specificity of 88%. Full-length figures and faces of people and animals accounted for >80% of the contents of pareidolias. Pareidolias were observed in patients with dementia with Lewy bodies who had visual hallucinations as well as those who did not have visual hallucinations, suggesting that pareidolias do not reflect visual hallucinations themselves but may reflect susceptibility to visual hallucinations. A sub-analysis of patients with dementia with Lewy bodies who were or were not treated with donepezil demonstrated that the numbers of pareidolias were correlated with visuoperceptual abilities in the former and with indices of hallucinations and delusional misidentifications in the latter. Arousal and attentional deficits mediated by abnormal cholinergic mechanisms and visuoperceptual dysfunctions are likely to contribute to the development of visual hallucinations and pareidolias in dementia with Lewy bodies.
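The reported discrimination figures (sensitivity 100%, specificity 88% for a pareidolia-count cutoff) come from a standard receiver operating characteristic analysis. A minimal sketch with made-up scores, assuming scikit-learn's ROC utilities:

```python
# Minimal sketch: ROC analysis of a count-based score separating two groups.
# The scores below are made up for illustration, not the study's data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# 1 = dementia with Lewy bodies, 0 = Alzheimer's disease (hypothetical labels)
labels = np.array([1] * 10 + [0] * 10)
pareidolia_counts = np.array([9, 12, 7, 15, 10, 8, 11, 13, 9, 14,   # DLB
                              1, 0, 3, 2, 5, 1, 0, 2, 4, 1])        # AD

fpr, tpr, thresholds = roc_curve(labels, pareidolia_counts)
print("AUC =", roc_auc_score(labels, pareidolia_counts))
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"cutoff >= {th}: sensitivity = {t:.2f}, specificity = {1 - f:.2f}")
```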
Observers' cognitive states modulate how visual inputs relate to gaze control.
Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G
2016-09-01
Previous research has shown that eye-movements change depending on both the visual features of our environment, and the viewer's top-down knowledge. One important question that is unclear is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
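A minimal sketch of the classification idea, predicting the viewing task from visual features sampled at fixations, using synthetic features and a cross-validated linear classifier; this illustrates the analysis logic only and does not reproduce the study's pipeline:

```python
# Minimal sketch: classifying viewing task (search / memorization / aesthetic
# judgment) from visual features at fixations. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_task = 100
# columns: edge density, saliency, entropy, hue (all synthetic)
X = np.vstack([
    rng.normal(loc=[0.6, 0.7, 0.5, 0.4], scale=0.1, size=(n_per_task, 4)),  # search
    rng.normal(loc=[0.4, 0.5, 0.5, 0.4], scale=0.1, size=(n_per_task, 4)),  # memorization
    rng.normal(loc=[0.4, 0.4, 0.6, 0.5], scale=0.1, size=(n_per_task, 4)),  # aesthetic
])
y = np.repeat([0, 1, 2], n_per_task)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```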
Repetition priming of face recognition in a serial choice reaction-time task.
Roberts, T; Bruce, V
1989-05-01
Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985) and with objects (Warren & Morton, 1982) are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.
Palczewska, Grazyna; Maeda, Tadao; Imanishi, Yoshikazu; Sun, Wenyu; Chen, Yu; Williams, David R.; Piston, David; Maeda, Akiko; Palczewski, Krzysztof
2010-01-01
Multi-photon excitation fluorescence microscopy (MPM) can image certain molecular processes in vivo. In the eye, fluorescent retinyl esters in sub-cellular structures called retinosomes mediate regeneration of the visual chromophore, 11-cis-retinal, by the visual cycle. But harmful fluorescent condensation products were also identified previously. We report that in wild type mice, excitation with λ ~730 nm identified retinosomes in the retinal pigment epithelium, whereas excitation with λ ~910 nm revealed at least one additional retinal fluorophore. The latter fluorescence was absent in eyes of genetically modified mice lacking a functional visual cycle, but accentuated in eyes of older WT mice and mice with defective clearance of all-trans-retinal, an intermediate in the visual cycle. MPM, a noninvasive imaging modality that facilitates concurrent monitoring of retinosomes along with potentially harmful products in aging eyes, has the potential to detect early molecular changes due to age-related macular degeneration and other defects in retinoid metabolism. PMID:21076393
Age-related changes in event-cued visual and auditory prospective memory proper.
Uttl, Bob
2006-06-01
We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.
Progress in high-level exploratory vision
NASA Astrophysics Data System (ADS)
Brand, Matthew
1993-08-01
We have been exploring the hypothesis that vision is an explanatory process, in which causal and functional reasoning about potential motion plays an intimate role in mediating the activity of low-level visual processes. In particular, we have explored two of the consequences of this view for the construction of purposeful vision systems: Causal and design knowledge can be used to (1) drive focus of attention, and (2) choose between ambiguous image interpretations. An important result of visual understanding is an explanation of the scene's causal structure: How action is originated, constrained, and prevented, and what will happen in the immediate future. In everyday visual experience, most action takes the form of motion, and most causal analysis takes the form of dynamical analysis. This is even true of static scenes, where much of a scene's interest lies in how possible motions are arrested. This paper describes our progress in developing domain theories and visual processes for the understanding of various kinds of structured scenes, including structures built out of children's constructive toys and simple mechanical devices.
Audiovisual Association Learning in the Absence of Primary Visual Cortex.
Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice
2015-01-01
Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter known to only minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.
A magnetic tether system to investigate visual and olfactory mediated flight control in Drosophila.
Duistermars, Brian J; Frye, Mark
2008-11-21
It has been clear for many years that insects use visual cues to stabilize their heading in a wind stream. Many animals track odors carried in the wind. As such, visual stabilization of upwind tracking directly aids in odor tracking. But do olfactory signals directly influence visual tracking behavior independently from wind cues? Also, the recent deluge of research on the neurophysiology and neurobehavioral genetics of olfaction in Drosophila has motivated ever more technically sophisticated and quantitative behavioral assays. Here, we modified a magnetic tether system originally devised for vision experiments by equipping the arena with narrow laminar flow odor plumes. A fly is glued to a small steel pin and suspended in a magnetic field that enables it to yaw freely. Small diameter food odor plumes are directed downward over the fly's head, eliciting stable tracking by a hungry fly. Here we focus on the critical mechanics of tethering, aligning the magnets, devising the odor plume, and confirming stable odor tracking.
A neural circuit for gamma-band coherence across the retinotopic map in mouse visual cortex
Hakim, Richard; Shamardani, Kiarash
2018-01-01
Cortical gamma oscillations have been implicated in a variety of cognitive, behavioral, and circuit-level phenomena. However, the circuit mechanisms of gamma-band generation and synchronization across cortical space remain uncertain. Using optogenetic patterned illumination in acute brain slices of mouse visual cortex, we define a circuit composed of layer 2/3 (L2/3) pyramidal cells and somatostatin (SOM) interneurons that phase-locks ensembles across the retinotopic map. The network oscillations generated here emerge from non-periodic stimuli, and are stimulus size-dependent, coherent across cortical space, narrow band (30 Hz), and depend on SOM neuron but not parvalbumin (PV) neuron activity; similar to visually induced gamma oscillations observed in vivo. Gamma oscillations generated in separate cortical locations exhibited high coherence as far apart as 850 μm, and lateral gamma entrainment depended on SOM neuron activity. These data identify a circuit that is sufficient to mediate long-range gamma-band coherence in the primary visual cortex. PMID:29480803
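The long-range synchronization reported here is typically quantified as magnitude-squared coherence between signals recorded at two sites. A minimal sketch on synthetic signals sharing a common ~30 Hz component, assuming scipy's coherence estimator; the sampling rate is an assumption:

```python
# Minimal sketch: magnitude-squared coherence between two synthetic signals
# sharing a common ~30 Hz component, as a stand-in for two cortical sites.
import numpy as np
from scipy.signal import coherence

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
common = np.sin(2 * np.pi * 30 * t)                     # shared gamma rhythm
site_a = common + rng.normal(scale=1.0, size=t.size)
site_b = 0.8 * common + rng.normal(scale=1.0, size=t.size)

freqs, coh = coherence(site_a, site_b, fs=fs, nperseg=2048)
band = (freqs >= 25) & (freqs <= 35)
print("peak coherence near 30 Hz:", coh[band].max())
```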
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
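Frequency tagging works by reading out the steady-state response amplitude at each tag frequency. A minimal sketch recovering amplitudes at two assumed tag frequencies from a synthetic trace; the tag frequencies and recording parameters below are illustrative assumptions, not the study's:

```python
# Minimal sketch: recovering steady-state response amplitude at two assumed
# tag frequencies (e.g., auditory 40 Hz, visual 15 Hz) from a synthetic trace.
import numpy as np

fs = 500.0                                   # Hz, assumed sampling rate
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(2)
eeg = 1.0 * np.sin(2 * np.pi * 40 * t) + 0.6 * np.sin(2 * np.pi * 15 * t) \
      + rng.normal(scale=2.0, size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)    # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for tag in (40.0, 15.0):
    idx = np.argmin(np.abs(freqs - tag))
    print(f"amplitude at {tag:g} Hz tag: {spectrum[idx]:.2f}")
```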
The scope and control of attention as separate aspects of working memory.
Shipstead, Zach; Redick, Thomas S; Hicks, Kenny L; Engle, Randall W
2012-01-01
The present study examines two varieties of working memory (WM) capacity task: visual arrays (i.e., a measure of the amount of information that can be maintained in working memory) and complex span (i.e., a task that taps WM-related attentional control). Using previously collected data sets we employ confirmatory factor analysis to demonstrate that visual arrays and complex span tasks load on separate, but correlated, factors. A subsequent series of structural equation models and regression analyses demonstrate that these factors contribute both common and unique variance to the prediction of general fluid intelligence (Gf). However, while visual arrays does contribute uniquely to higher cognition, its overall correlation to Gf is largely mediated by variance associated with the complex span factor. Thus we argue that visual arrays performance is not strictly driven by a limited-capacity storage system (e.g., the focus of attention; Cowan, 2001), but may also rely on control processes such as selective attention and controlled memory search.
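The common-versus-unique-variance question can be illustrated, in simplified form, by the incremental R² a second predictor adds over the first. The study itself used confirmatory factor analysis and structural equation models, so the sketch below is only a regression-based stand-in on synthetic data:

```python
# Minimal sketch: incremental R^2 as a simple stand-in for the "common vs.
# unique variance" question (the study itself used CFA/SEM). Data synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 300
shared = rng.normal(size=n)                      # variance common to both tasks
visual_arrays = shared + 0.7 * rng.normal(size=n)
complex_span = shared + 0.7 * rng.normal(size=n)
gf = 0.6 * shared + 0.2 * visual_arrays + 0.3 * complex_span + rng.normal(size=n)

def r2(X):
    return LinearRegression().fit(X, gf).score(X, gf)

r2_span = r2(complex_span.reshape(-1, 1))
r2_both = r2(np.column_stack([complex_span, visual_arrays]))
print("R^2, complex span alone:", round(r2_span, 3))
print("unique R^2 added by visual arrays:", round(r2_both - r2_span, 3))
```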
Daenen, Liesbeth; Nijs, Jo; Roussel, Nathalie; Wouters, Kristien; Cras, Patrick
2012-01-01
Sensory and motor system dysfunctions have been documented in a proportion of patients with acute whiplash associated disorders (WAD). Sensorimotor incongruence may occur and hence may explain pain and other sensations in the acute stage after the trauma. The present study aimed at (1) evaluating whether a visually mediated incongruence between sensory feedback and motor output increases symptoms and triggers additional sensations in patients with acute WAD, and (2) investigating whether the pattern of sensations in response to sensorimotor incongruence differs among patients suffering from acute and chronic WAD, and healthy controls. This was an experimental study. Patients with acute WAD were recruited within one month after whiplash injury via the emergency department of a local Red Cross medical care unit, the Antwerp University Hospital, and through primary care practices. Patients with chronic WAD were recruited through an advertisement on the World Wide Web and from the medical database of a local Red Cross medical care unit. Healthy controls were recruited from among the university college staff, family members, and acquaintances of the researchers. Thirty patients with acute WAD, 35 patients with chronic WAD, and 31 healthy persons were subjected to a coordination test. They performed congruent and incongruent arm movements while viewing a whiteboard or mirror. Twenty-eight patients with acute WAD reported sensations such as pain, tightness, feeling of peculiarity, and tiredness at some stage of the test protocol. No significant differences in frequencies and intensities of sensations were found between the various test stages (P > .05). Significantly more sensations were reported during the incongruent mirror stage compared to the incongruent control stage (P < .05). The pattern in intensity of sensations across the congruent and incongruent stages was significantly different between the WAD groups and the control group. The course and prognostic value of susceptibility to sensorimotor incongruence after an acute whiplash trauma are not yet clear from these results. A prospective longitudinal study with an expanded study population is needed to investigate whether those with a lowered threshold to visually mediated sensorimotor incongruence in the acute stage are at risk of developing persistent pain and disability. Patients with acute WAD present an exacerbation of symptoms and additional sensations in response to visually mediated changes during action. These results indicate an altered perception of distorted visual feedback and suggest altered central sensorimotor nervous system processing in patients with acute WAD.
Dichotic and dichoptic digit perception in normal adults.
Lawfield, Angela; McFarland, Dennis J; Cacace, Anthony T
2011-06-01
Verbally based dichotic-listening experiments and reproduction-mediated response-selection strategies have been used for over four decades to study perceptual/cognitive aspects of auditory information processing and make inferences about hemispheric asymmetries and language lateralization in the brain. Test procedures using dichotic digits have also been used to assess for disorders of auditory processing. However, with this application, limitations exist and paradigms need to be developed to improve specificity of the diagnosis. Use of matched tasks in multiple sensory modalities is a logical approach to address this issue. Herein, we use dichotic listening and dichoptic viewing of visually presented digits for making this comparison. The purpose was to evaluate methodological issues involved in using matched tasks of dichotic listening and dichoptic viewing in normal adults. The design was a multivariate assessment of the effects of modality (auditory vs. visual), digit-span length (1-3 pairs), response selection (recognition vs. reproduction), and ear/visual hemifield of presentation (left vs. right) on dichotic and dichoptic digit perception. Participants were thirty adults (12 males, 18 females) ranging in age from 18 to 30 yr with normal hearing sensitivity and normal or corrected-to-normal visual acuity. A computerized, custom-designed program was used for all data collection and analysis. A four-way repeated measures analysis of variance (ANOVA) evaluated the effects of modality, digit-span length, response selection, and ear/visual field of presentation. The ANOVA revealed that performances on dichotic listening and dichoptic viewing tasks were dependent on complex interactions between modality, digit-span length, response selection, and ear/visual hemifield of presentation. Correlation analysis suggested a common effect on overall accuracy of performance but isolated only an auditory factor for a laterality index. The variables used in this experiment affected performances in the auditory modality to a greater extent than in the visual modality. The right-ear advantage observed in the dichotic-digits task was most evident when reproduction-mediated response selection was used in conjunction with three-digit pairs. This effect implies that factors such as "speech-related output mechanisms" and digit-span length (working memory) contribute to laterality effects in dichotic listening performance with traditional paradigms. Thus, the use of multiple-digit pairs to avoid ceiling effects and the application of verbal reproduction as a means of response selection may accentuate the role of nonperceptual factors in performance. Ideally, tests of perceptual abilities should be relatively free of such effects. American Academy of Audiology.
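A minimal sketch of a repeated-measures ANOVA of this kind, reduced to two within-subject factors on synthetic accuracy data and assuming statsmodels' AnovaRM (the study crossed four factors, and its analysis software is not specified):

```python
# Minimal sketch: repeated-measures ANOVA on synthetic accuracy data with two
# within-subject factors (modality, span length); the study crossed four.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)
rows = []
for subj in range(30):
    for modality in ("auditory", "visual"):
        for span in (1, 2, 3):
            base = 0.95 - 0.05 * span - (0.05 if modality == "auditory" else 0.0)
            rows.append({"subject": subj, "modality": modality,
                         "span": span, "accuracy": base + rng.normal(scale=0.03)})

df = pd.DataFrame(rows)
res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["modality", "span"]).fit()
print(res.anova_table)
```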
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2018-03-19
Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration and the re-weighting of different sources of sensory information are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of visual distance cues that support size constancy in perception (mediated by the ventral visual stream) [1, 2] versus size constancy in grasping (mediated by the dorsal visual stream) [3-6], in which the real-world size of an object is computed despite changes in viewing distance. We found that there was perfect size constancy in both perception and grasping in a full-viewing condition (lights on, binocular viewing) and that size constancy in both tasks was dramatically disrupted in the restricted-viewing condition (lights off; monocular viewing of the same but luminescent object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (experiment 1) or the inclination of the torso and/or the elbow angle of the grasping limb (experiment 2) compensated for the loss of visual distance cues to enable a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used. Copyright © 2018 Elsevier Ltd. All rights reserved.
An object-based visual attention model for robotic applications.
Yu, Yuanlong; Mann, George K I; Gosine, Raymond G
2010-10-01
By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally passed to the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: the detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
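The bottom-up portion of such a model, normalizing individual feature maps and combining them into a location-based saliency map, can be sketched generically. The code below is an Itti-style combination on random stand-in maps, not the authors' full architecture:

```python
# Minimal sketch: combining normalized feature maps into a saliency map and
# picking the most salient location. Feature maps here are random stand-ins.
import numpy as np

rng = np.random.default_rng(5)
h, w = 60, 80
feature_maps = {name: rng.random((h, w))
                for name in ("intensity", "color", "orientation")}

def normalize(m):
    """Rescale a feature map to [0, 1]."""
    return (m - m.min()) / (m.max() - m.min() + 1e-9)

# Equal weighting stands in for the model's top-down biasing of features.
saliency = sum(normalize(m) for m in feature_maps.values()) / len(feature_maps)
y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
print(f"most salient location: row {y}, col {x}")
```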
Keller, Carmen; Junghans, Alex
2017-11-01
Individuals with low numeracy have difficulties with understanding complex graphs. Combining the information-processing approach to numeracy with graph comprehension and information-reduction theories, we examined whether high numerates' better comprehension might be explained by their closer attention to task-relevant graphical elements, from which they would expect numerical information to understand the graph. Furthermore, we investigated whether participants could be trained in improving their attention to task-relevant information and graph comprehension. In an eye-tracker experiment (N = 110) involving a sample from the general population, we presented participants with 2 hypothetical scenarios (stomach cancer, leukemia) showing survival curves for 2 treatments. In the training condition, participants received written instructions on how to read the graph. In the control condition, participants received another text. We tracked participants' eye movements while they answered 9 knowledge questions. The sum constituted graph comprehension. We analyzed visual attention to task-relevant graphical elements by using relative fixation durations and relative fixation counts. The mediation analysis revealed a significant (P < 0.05) indirect effect of numeracy on graph comprehension through visual attention to task-relevant information, which did not differ between the 2 conditions. Training had a significant main effect on visual attention (P < 0.05) but not on graph comprehension (P < 0.07). Individuals with high numeracy have better graph comprehension due to their greater attention to task-relevant graphical elements than individuals with low numeracy. With appropriate instructions, both groups can be trained to improve their graph-processing efficiency. Future research should examine (e.g., motivational) mediators between visual attention and graph comprehension to develop appropriate instructions that also result in higher graph comprehension.
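The indirect (mediated) effect reported here is commonly estimated as the product of two regression paths with a bootstrap confidence interval. A minimal sketch on synthetic data, with variable names chosen only to mirror the study's constructs:

```python
# Minimal sketch: bootstrap estimate of an indirect effect,
# numeracy -> visual attention -> graph comprehension. Data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n = 110
numeracy = rng.normal(size=n)
attention = 0.5 * numeracy + rng.normal(scale=0.8, size=n)               # path a
comprehension = 0.4 * attention + 0.1 * numeracy + rng.normal(size=n)    # path b + direct

def path(x, y, covar=None):
    """Slope of y on x (optionally controlling for a covariate)."""
    cols = [np.ones_like(x), x] if covar is None else [np.ones_like(x), x, covar]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta[1]

indirect = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a = path(numeracy[idx], attention[idx])
    b = path(attention[idx], comprehension[idx], covar=numeracy[idx])
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```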
Arshad, Q; Siddiqui, S; Ramachandran, S; Goga, U; Bonsu, A; Patel, M; Roberts, R E; Nigmatullina, Y; Malhotra, P; Bronstein, A M
2015-12-17
Right hemisphere dominance for visuo-spatial attention is characteristically observed in most right-handed individuals. This dominance has been attributed to both an anatomically larger right fronto-parietal network and the existence of asymmetric parietal interhemispheric connections. Previously, it has been demonstrated that interhemispheric conflict, which induces left hemisphere inhibition, results in the modulation of both (i) the excitability of the early visual cortex (V1) and (ii) the brainstem-mediated vestibular-ocular reflex (VOR) via top-down control mechanisms. However, to date, it remains unknown whether the degree of an individual's right hemisphere dominance for visuospatial function can influence (i) the baseline excitability of the visual cortex and (ii) the extent to which the right hemisphere can exert top-down modulation. We directly tested this by correlating line bisection error (or pseudoneglect), taken as a measure of right hemisphere dominance, with both (i) visual cortical excitability measured using phosphene perception elicited via single-pulse occipital trans-cranial magnetic stimulation (TMS) and (ii) the degree of trans-cranial direct current stimulation (tDCS)-mediated VOR suppression, following left hemisphere inhibition. We found that those individuals with greater right hemisphere dominance had a less excitable early visual cortex at baseline and demonstrated a greater degree of vestibular nystagmus suppression following left hemisphere cathodal tDCS. To conclude, our results provide the first demonstration that individual differences in right hemisphere dominance can directly predict both the baseline excitability of low-level brain structures and the degree of top-down modulation exerted over them. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Singh, Monika; Bhoge, Rajesh K; Randhawa, Gurinderjit
2018-04-20
Background: Confirming the integrity of seed samples in powdered form is important prior to conducting a genetically modified organism (GMO) test. Rapid onsite methods may provide a technological solution to check for genetically modified (GM) events at ports of entry. In India, Bt cotton is the commercialized GM crop with four approved GM events; however, 59 GM events have been approved globally. GMO screening is required to test for authorized GM events. The identity and amplifiability of test samples could be ensured first by employing endogenous genes as an internal control. Objective: A rapid onsite detection method was developed for an endogenous reference gene, stearoyl acyl carrier protein desaturase (Sad1) of cotton, employing visual and real-time loop-mediated isothermal amplification (LAMP). Methods: The assays were performed at a constant temperature of 63°C for 30 min for visual LAMP and 62°C for 40 min for real-time LAMP. Positive amplification was visualized as a change in color from orange to green on addition of SYBR® Green or detected as real-time amplification curves. Results: The specificity of the LAMP assays was confirmed using a set of 10 samples. The LOD for visual LAMP was up to 0.1%, detecting 40 target copies, and for real-time LAMP up to 0.05%, detecting 20 target copies. Conclusions: The developed methods could be utilized to confirm the integrity of seed powder prior to conducting a GMO test for specific GM events of cotton. Highlights: LAMP assays for the endogenous Sad1 gene of cotton have been developed to be used as an internal control for onsite GMO testing in cotton.
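The copy-number figures behind the LOD values (about 40 and 20 target copies) follow from the standard conversion of template DNA mass to haploid genome equivalents. A minimal sketch, assuming a cotton genome size of roughly 2.5 Gbp and a 0.1 ng template input; both figures are illustrative assumptions, not values stated in the abstract:

```python
# Minimal sketch: converting a DNA template mass to haploid genome copies.
# Genome size and input mass below are assumptions for illustration only.
AVOGADRO = 6.022e23          # molecules per mole
BP_MASS = 650.0              # average g/mol per base pair of double-stranded DNA

def genome_copies(dna_ng, genome_size_bp):
    grams = dna_ng * 1e-9
    return grams * AVOGADRO / (genome_size_bp * BP_MASS)

# Assumed cotton (1C) genome size of ~2.5 Gbp; 0.1 ng of template DNA.
print(f"{genome_copies(0.1, 2.5e9):.0f} copies")   # on the order of tens of copies
```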
Electronic approaches to restoration of sight
NASA Astrophysics Data System (ADS)
Goetz, G. A.; Palanker, D. V.
2016-09-01
Retinal prostheses are a promising means for restoring sight to patients blinded by the gradual atrophy of photoreceptors due to retinal degeneration. They are designed to reintroduce information into the visual system by electrically stimulating surviving neurons in the retina. This review outlines the concepts and technologies behind two major approaches to retinal prosthetics: epiretinal and subretinal. We describe how the visual system responds to electrical stimulation. We highlight major differences between direct encoding of the retinal output with epiretinal stimulation, and network-mediated response with subretinal stimulation. We summarize results of pre-clinical evaluation of prosthetic visual functions in- and ex vivo, as well as the outcomes of current clinical trials of various retinal implants. We also briefly review alternative, non-electronic, approaches to restoration of sight to the blind, and conclude by suggesting some perspectives for future advancement in the field.
An extended retinotopic map of mouse cortex
Zhuang, Jun; Ng, Lydia; Williams, Derric; Valley, Matthew; Li, Yang; Garrett, Marina; Waters, Jack
2017-01-01
Visual perception and behavior are mediated by cortical areas that have been distinguished using architectonic and retinotopic criteria. We employed fluorescence imaging and GCaMP6 reporter mice to generate retinotopic maps, revealing additional regions of retinotopic organization that extend into barrel and retrosplenial cortices. Aligning retinotopic maps to architectonic borders, we found a mismatch in border location, indicating that architectonic borders are not aligned with the retinotopic transition at the vertical meridian. We also assessed the representation of visual space within each region, finding that four visual areas bordering V1 (LM, P, PM and RL) display complementary representations, with overlap primarily at the central hemifield. Our results extend our understanding of the organization of mouse cortex to include up to 16 distinct retinotopically organized regions. DOI: http://dx.doi.org/10.7554/eLife.18372.001 PMID:28059700
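Retinotopic maps of this kind are commonly derived by extracting, at each pixel, the response phase at the frequency of a periodically drifting stimulus. A minimal sketch of that Fourier phase-mapping step on a synthetic imaging movie, offered as a generic illustration rather than the authors' pipeline; the frame rate and stimulus frequency are assumptions:

```python
# Minimal sketch: pixelwise phase at the stimulus frequency of a synthetic
# imaging movie, the core step of Fourier-based retinotopic mapping.
import numpy as np

fs = 10.0                 # imaging frame rate (Hz), assumed
stim_freq = 0.1           # drifting-bar repetition frequency (Hz), assumed
n_frames = 600
t = np.arange(n_frames) / fs

rng = np.random.default_rng(7)
h, w = 32, 32
# Synthetic ground truth: response phase varies smoothly across the field.
true_phase = np.linspace(0, 2 * np.pi, w)[None, :].repeat(h, axis=0)
movie = np.sin(2 * np.pi * stim_freq * t[:, None, None] + true_phase) \
        + rng.normal(scale=0.5, size=(n_frames, h, w))

spectrum = np.fft.rfft(movie, axis=0)
freqs = np.fft.rfftfreq(n_frames, d=1 / fs)
k = np.argmin(np.abs(freqs - stim_freq))
phase_map = np.angle(spectrum[k])          # one retinotopic phase per pixel
print("phase map shape:", phase_map.shape)
```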
Comic Books: A Learning Tool for Meaningful Acquisition of Written Sign Language
ERIC Educational Resources Information Center
Guimarães, Cayley; Oliveira Machado, Milton César; Fernandes, Sueli F.
2018-01-01
Deaf people use Sign Language (SL) for intellectual development, communications and other human activities that are mediated by language--such as the expression of complex and abstract thoughts and feelings; and for literature, culture and knowledge. The Brazilian Sign Language (Libras) is a complete linguistic system of visual-spatial manner,…
Individual Learning Strategies and Choice in Student-Generated Multimedia
ERIC Educational Resources Information Center
McGahan, William T.; Ernst, Hardy; Dyson, Laurel Evelyn
2016-01-01
There has been an increasing focus on student-generated multimedia assessment as a way of introducing the benefits of both visual literacy and peer-mediated learning into university courses. One such assessment was offered to first-year health science students but, contrary to expectations, led to poorer performance in their end-of-semester…
Communicational Approach to Study Textbook Discourse on the Derivative
ERIC Educational Resources Information Center
Park, Jungeun
2016-01-01
This paper investigates how three widely used calculus textbooks in the U.S. realize the derivative as a point-specific object and as a function using Sfard's communicational approach. For this purpose, the study analyzed word-use and visual mediators for the "limit process" through which the derivative at a point was objectified, and…
ERIC Educational Resources Information Center
Fielding, Rob
1989-01-01
Explicates the socio-cultural developmental theories of Vygotsky and Feuerstein which advocate teacher mediated learning in order to stimulate and accelerate development. Implications for art education include the need for the teacher to become involved in the enculturation of the child into the thinking processes and conceptual organization of…
Recognition Decisions from Visual Working Memory Are Mediated by Continuous Latent Strengths
ERIC Educational Resources Information Center
Ricker, Timothy J.; Thiele, Jonathan E.; Swagman, April R.; Rouder, Jeffrey N.
2017-01-01
Making recognition decisions often requires us to reference the contents of working memory, the information available for ongoing cognitive processing. As such, understanding how recognition decisions are made when based on the contents of working memory is of critical importance. In this work we examine whether recognition decisions based on the…
ERIC Educational Resources Information Center
Cho, Ji Young; Cho, Moon-Heum; Kozinets, Nadya
2016-01-01
With the recognition of the importance of collaboration in a design studio and the advancement of technology, increasing numbers of design students collaborate with others in a technology-mediated learning environment (TMLE); however, not all students have positive experiences in TMLEs. One possible reason for unsatisfactory collaboration…