Sample records for flexible multicamera visual-tracking

  1. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
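    The ranking-versus-scoring distinction in (1) can be illustrated with a small sketch (hypothetical features and uniform weights, not the authors' learned model): gallery candidates are ordered by a weighted distance to the query, so a match is judged by its rank rather than by an absolute score threshold.

```python
# Toy person re-identification by relative ranking (illustrative only).
# Each person is a feature vector (e.g. colour/texture histograms); the
# gallery is ranked by weighted L1 distance to the query, so matching is
# judged by rank order rather than by an absolute score threshold.

def weighted_l1(a, b, w):
    return sum(wi * abs(ai - bi) for ai, bi, wi in zip(a, b, w))

def rank_gallery(query, gallery, weights):
    """Return gallery indices ordered from best to worst match."""
    scored = [(weighted_l1(query, g, weights), i) for i, g in enumerate(gallery)]
    return [i for _, i in sorted(scored)]

query = [0.8, 0.1, 0.3]
gallery = [
    [0.2, 0.9, 0.5],   # wrong person
    [0.7, 0.2, 0.3],   # same person seen from another camera
    [0.9, 0.8, 0.9],   # wrong person
]
weights = [1.0, 1.0, 1.0]  # hypothetical; learned from ranked pairs in practice

order = rank_gallery(query, gallery, weights)
print(order[0])  # index of the top-ranked candidate
```

In the actual system the weights would be learned from relative ranking constraints; here they are fixed purely for illustration.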

  2. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The architecture is optimised to play back, display, and process video flows efficiently for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.

  3. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, so both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying video from each camera in the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available content in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that uses a model of human visual attention for dynamic selection of the camera view in a multi-camera system. The proposed method has been tested in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. Effectiveness was evaluated as the number of correct best views generated by the method compared with the camera views manually selected by a human operator.
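    A minimal sketch of such dynamic best-view selection (camera names, frames and weights are invented for illustration): each camera is scored by a bottom-up cue, here a crude frame-difference stand-in for motion saliency, combined with a top-down relevance weight, and the highest-scoring view is displayed.

```python
# Minimal best-view selector: score each camera by a bottom-up cue
# (mean absolute frame difference, a crude stand-in for motion saliency)
# multiplied by a top-down relevance weight, then pick the best view.

def frame_diff(prev, curr):
    """Mean absolute pixel difference between two flattened frames."""
    return sum(abs(p - c) for p, c in zip(prev, curr)) / len(curr)

def best_view(prev_frames, curr_frames, topdown):
    scores = {
        cam: topdown[cam] * frame_diff(prev_frames[cam], curr_frames[cam])
        for cam in curr_frames
    }
    return max(scores, key=scores.get)

prev = {"cam1": [0, 0, 0, 0], "cam2": [10, 10, 10, 10]}
curr = {"cam1": [0, 1, 0, 1], "cam2": [60, 60, 60, 60]}
weights = {"cam1": 1.0, "cam2": 0.5}   # hypothetical top-down priorities

print(best_view(prev, curr, weights))  # cam2: large motion outweighs low weight
```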

  4. Occlusion handling framework for tracking in smart camera networks by per-target assistance task assignment

    NASA Astrophysics Data System (ADS)

    Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Occlusion is one of the most difficult challenges in the area of visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view in a multicamera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target by taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of observations over all targets being tracked. When it cannot adequately observe a target, a smart camera estimates the quality of observation of the target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for the target is carried out with the assistance of that camera. In our framework, only the positions of persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison with state-of-the-art trackers, which our method outperforms.
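    The assistance logic can be sketched as follows (quality scores are given directly here; in the paper they come from the occlusion-aware energy function, and the threshold value is hypothetical):

```python
# Per-target assistance assignment sketch: each camera holds an observation-
# quality score per target (given directly here; in the paper it comes from an
# occlusion-aware energy function). A camera whose own quality drops below a
# threshold hands the target to the best-scoring assisting camera.

THRESHOLD = 0.5  # hypothetical quality cutoff

def assign_assistance(local_cam, quality, targets):
    """Return {target: camera chosen to track it}."""
    assignment = {}
    for t in targets:
        if quality[local_cam][t] >= THRESHOLD:
            assignment[t] = local_cam          # local view is good enough
        else:
            # pick the assisting camera with the best view of this target
            assignment[t] = max(quality, key=lambda cam: quality[cam][t])
    return assignment

quality = {
    "camA": {"p1": 0.9, "p2": 0.2},   # camA barely sees p2 (occluded)
    "camB": {"p1": 0.4, "p2": 0.8},
}
print(assign_assistance("camA", quality, ["p1", "p2"]))
```

Only positions (and here, scalar scores) would need to be exchanged, matching the low-bandwidth design of the framework.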

  5. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility in use at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera six-degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms alongside truth data using powerful window-based visualization software.

  6. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera position and orientation) of a multi-camera positioning system. It is based on estimating the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error of the manual calibration method, and that the automated calibration can replace the manual calibration.
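    The geometric relation the 5-point method recovers can be checked numerically. The sketch below (assuming a pure sideways translation with R = I, chosen purely for illustration) builds the essential matrix E = [t]×R and verifies the epipolar constraint x2ᵀEx1 = 0 for a projected point in normalized image coordinates:

```python
# Epipolar-constraint check for an intrinsically calibrated camera pair.
# The 5-point method estimates the essential matrix E = [t]x R from point
# correspondences; here we build E for an assumed pure sideways translation
# (R = I, t = (-1, 0, 0)) and verify that x2^T E x1 = 0 for a projected point.

def skew(t):
    """3x3 cross-product matrix [t]x."""
    tx, ty, tz = t
    return [[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

t = (-1.0, 0.0, 0.0)          # camera-2 coordinates: X2 = X1 + t (since R = I)
E = skew(t)                   # essential matrix for this motion

X1 = (0.3, -0.2, 2.0)         # a world point in camera-1 coordinates
X2 = tuple(x + ti for x, ti in zip(X1, t))

x1 = (X1[0] / X1[2], X1[1] / X1[2], 1.0)   # normalized image coordinates
x2 = (X2[0] / X2[2], X2[1] / X2[2], 1.0)

residual = dot(x2, matvec(E, x1))
print(abs(residual) < 1e-12)  # True: the correspondence satisfies the constraint
```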

  7. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
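    The PU/Central Unit split can be sketched with Python threads and queues standing in for the paper's cluster nodes and multicast transport (module names and the dummy processing step are invented):

```python
# Sketch of the Processing Unit / Central Unit split: threads and queues act
# as stand-ins for the paper's cluster nodes and multicast transport. Each PU
# consumes frames from its camera queue; the Central Unit dispatches frames,
# supervises shutdown via sentinels, and collects results.

import queue
import threading

def processing_unit(cam_id, frames, results):
    """One PU: acquisition (queue read) + processing (here, a dummy sum)."""
    while True:
        frame = frames.get()
        if frame is None:                      # sentinel: supervisor stopped us
            break
        results.put((cam_id, sum(frame)))      # dummy "detection" result

frames = {c: queue.Queue() for c in ("cam1", "cam2")}
results = queue.Queue()
pus = [threading.Thread(target=processing_unit, args=(c, q, results))
       for c, q in frames.items()]
for p in pus:
    p.start()

# Central Unit: dispatch two frames per camera, then stop all PUs.
for c, q in frames.items():
    q.put([1, 2, 3])
    q.put([4, 5, 6])
    q.put(None)
for p in pus:
    p.join()

collected = [results.get() for _ in range(4)]
print(len(collected))  # 4 results, two per camera
```

Scaling to more cameras amounts to adding a queue and a thread per camera, which mirrors how the real architecture adds PUs to the cluster.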

  8. The seam visual tracking method for large structures

    NASA Astrophysics Data System (ADS)

    Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong

    2017-10-01

    In this paper, a compact and flexible weld-seam visual tracking method is proposed. Firstly, because a fixed tracking height can cause interference between the vision device and the work-piece to be welded, a weld vision system with a compact structure and adjustable tracking height is developed. Secondly, by analysing the relative spatial pose between the camera, the laser and the work-piece to be welded, and applying the theory of relative geometric imaging, a mathematical model linking image feature parameters to the three-dimensional trajectory of the assembly gap to be welded is established. Thirdly, the visual imaging parameters of the line-structured light are optimized through experiments on the weld structure. Fourthly, imaging is disturbed by scattering of the line-structured light in bright metal areas and in areas brightened by surface scratches; these disturbances seriously affect computational efficiency. An algorithm based on the human visual attention mechanism is therefore used to extract weld features efficiently and stably. Finally, experiments verify that the compact and flexible weld tracking method achieves a tracking accuracy of 0.5 mm when tracking large structural parts, giving it wide industrial application prospects.

  9. Multiple-camera tracking: UK government requirements

    NASA Astrophysics Data System (ADS)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) is looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB was asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR, the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building it into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  10. Distributed Sensing and Processing for Multi-Camera Networks

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  11. See It With Your Own Eyes: Markerless Mobile Augmented Reality for Radiation Awareness in the Hybrid Room.

    PubMed

    Loy Rodas, Nicolas; Barrera, Fernando; Padoy, Nicolas

    2017-02-01

    We present an approach to provide awareness of the harmful ionizing radiation generated during X-ray-guided minimally invasive procedures. A hand-held screen is used to display, directly in the user's view, information related to radiation safety in a mobile augmented reality (AR) manner. Instead of using markers, we propose a method to track the observer's viewpoint which relies on multiple RGB-D sensors and combines equipment detection for tracking initialization with a KinectFusion-like approach for frame-to-frame tracking. Two of the sensors are ceiling-mounted and a third one is attached to the hand-held screen. The ceiling cameras keep an updated model of the room's layout, which is used to exploit context information and improve the relocalization procedure. The system is evaluated on a multicamera dataset generated inside an operating room (OR) and containing ground-truth poses of the AR display. This dataset includes a wide variety of sequences with different scene configurations, occlusions, motion in the scene, and abrupt viewpoint changes. Qualitative results illustrating the different AR visualization modes for radiation awareness provided by the system are also presented. Our approach allows the user to benefit from a large AR visualization area and permits recovery from tracking failure caused by fast motion or changes in the scene simply by looking at a piece of equipment. The system enables the user to see the 3-D propagation of radiation, the medical staff's exposure, and/or the doses deposited on the patient's surface as seen through his own eyes.

  12. The role of visual attention in multiple object tracking: evidence from ERPs.

    PubMed

    Doran, Matthew M; Hoffman, James E

    2010-01-01

    We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load, and that visual attention may not always be necessary for accurate tracking.

  13. Multi-camera real-time three-dimensional tracking of multiple flying animals

    PubMed Central

    Straw, Andrew D.; Branson, Kristin; Neumann, Titus R.; Dickinson, Michael H.

    2011-01-01

    Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data. The additional capability of tracking in real time—with minimal latency—opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behaviour. Here, we describe a system capable of tracking the three-dimensional position and body orientation of animals such as flies and birds. The system operates with less than 40 ms latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the extended Kalman filter and the nearest neighbour standard filter data association algorithm. In one implementation, an 11-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behaviour of freely flying animals. If combined with other techniques, such as ‘virtual reality’-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals. PMID:20630879
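    The data-association step pairs EKF-predicted track positions with fresh detections. A minimal greedy nearest-neighbour version (the gating distance is an invented example value) looks like this:

```python
# One nearest-neighbour data-association step, as used between the EKF
# prediction and update: each predicted track position is greedily paired
# with its closest unclaimed detection inside a gating distance.

import math

def associate(predictions, detections, gate=1.0):
    """Return {track_index: detection_index} for matches within the gate."""
    pairs = []
    for ti, p in enumerate(predictions):
        for di, d in enumerate(detections):
            pairs.append((math.dist(p, d), ti, di))
    pairs.sort()                      # closest candidate pairs first
    matches, used_t, used_d = {}, set(), set()
    for dist, ti, di in pairs:
        if dist <= gate and ti not in used_t and di not in used_d:
            matches[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return matches

tracks = [(0.0, 0.0, 1.0), (5.0, 5.0, 1.0)]          # EKF-predicted 3D positions
detections = [(5.2, 5.1, 1.0), (0.1, -0.1, 1.1)]     # unordered detections
print(associate(tracks, detections))                 # {0: 1, 1: 0}
```

The real system runs this per frame across many cameras at 60 fps; the sketch only shows the association logic for a single frame.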

  14. Time-lapse photogrammetry in geomorphic studies

    NASA Astrophysics Data System (ADS)

    Eltner, Anette; Kaiser, Andreas

    2017-04-01

    Image-based approaches to reconstructing the earth surface (Structure from Motion - SfM) are establishing themselves as a standard technology for high-resolution topographic data. This is due, among other advantages, to the comparative ease of use and flexibility of data generation. Furthermore, the increased spatial resolution has led to its implementation in a vast range of applications from sub-mm to tens-of-km scale. Almost fully automatic calculation of referenced digital elevation models allows for a significant increase in temporal resolution as well, potentially up to sub-second scales. For this, the setup of a time-lapse multi-camera system is necessary, and different aspects need to be considered: The camera array has to be temporarily stable, or potential movements need to be compensated by temporarily stable reference targets/areas. The stability of the internal camera geometry has to be considered, because the number of images of the scene, and thus the redundancy for parameter estimation, is usually significantly lower than in more common SfM applications. Depending on the speed of surface change, synchronisation has to be very accurate. Because such systems are usually applied in the field, changing environmental conditions important for lighting and visual range are also crucial factors to keep in mind. Besides these important considerations, time-lapse photogrammetry holds much potential. The integration of multi-sensor systems, e.g. using thermal cameras, enables the detection of processes not visible in RGB images alone. Furthermore, the implementation of low-cost sensors allows for a significant increase in areal coverage and for setups at locations where a loss of the system cannot be ruled out. The usage of micro-computers offers smart camera triggering, e.g. acquiring images with increased frequency controlled by a rainfall-triggered sensor. In addition, these micro-computers can enable on-site data processing, e.g. recognition of increased surface movement, and thus might be used as a warning system in the case of natural hazards. A large variety of applications are suitable for time-lapse photogrammetry, i.e. change detection of all sorts, e.g. volumetric alterations, movement tracking or roughness changes. The multi-camera systems can be used for slope investigations, soil studies, glacier observation, snow cover measurement, volcanic surveillance or plant growth monitoring. A conceptual workflow is introduced highlighting the limits and potentials of time-lapse photogrammetry.

  15. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other is called the working camera. The tracking camera is used for tracking the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the movability of the system guarantees appropriate baselines to supply more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, namely that variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  16. TENTACLE Multi-Camera Immersive Surveillance System Phase 2

    DTIC Science & Technology

    2015-04-16

    successful in solving the most challenging video analytics problems and taking the advanced research concepts into working systems for end-users in both...commercial, space and military applications. Notable successes include winning the DARPA Urban Challenge, software autonomy to guide the NASA robots (Spirit...challenging urban environments. CMU is developing a scalable and extensible architecture, improving search/pursuit/tracking capabilities, and addressing

  17. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.

  18. Magneto-optic tracking of a flexible laparoscopic ultrasound transducer for laparoscope augmentation.

    PubMed

    Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Schneider, Armin; Feussner, Hubertus; Navab, Nassir

    2007-01-01

    In abdominal surgery, a laparoscopic ultrasound transducer is commonly used to detect lesions such as metastases. The determination and visualization of the position and orientation of its flexible tip in relation to the patient or other surgical instruments can be of much help to (novice) surgeons utilizing the transducer intraoperatively. This difficult subject has recently received attention from the scientific community. Electromagnetic tracking systems can be applied to track the flexible tip; however, the magnetic field can be distorted by ferromagnetic material. This paper presents a new method based on optical tracking of the laparoscope and magneto-optic tracking of the transducer, which is able to automatically detect field distortions. This is used for a smooth augmentation of the transducer's B-scan images directly on the camera images in real time.

  19. Synthetic depth data creation for sensor setup planning and evaluation of multi-camera multi-person trackers

    NASA Astrophysics Data System (ADS)

    Pattke, Marco; Martin, Manuel; Voit, Michael

    2017-05-01

    Tracking people with cameras in public areas is common today. However, with an increasing number of cameras it becomes harder and harder to view the data manually. Especially in safety-critical areas, automatic image exploitation could help to solve this problem. Setting up such a system can, however, be difficult because of its increased complexity. Sensor placement is critical to ensure that people are detected and tracked reliably. We address this problem using a simulation framework that is able to simulate different camera setups in the desired environment, including animated characters. We combine this framework with our self-developed distributed and scalable system for people tracking to test its effectiveness, and we can show the results of the tracking system in real time in the simulated environment.

  20. A Photogrammetric System for Model Attitude Measurement in Hypersonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2007-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and photogrammetric principles for point tracking to compute model position including pitch, roll and yaw. A discussion of the constraints encountered during the design, and a review of the measurement results obtained from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  1. A Novel Method for Tracking Individuals of Fruit Fly Swarms Flying in a Laboratory Flight Arena.

    PubMed

    Cheng, Xi En; Qian, Zhi-Ming; Wang, Shuo Hong; Jiang, Nan; Guo, Aike; Chen, Yan Qiu

    2015-01-01

    The growing interest in studying social behaviours of swarming fruit flies, Drosophila melanogaster, has heightened the need for developing tools that provide quantitative motion data. To achieve such a goal, multi-camera three-dimensional tracking technology is the key experimental gateway. We have developed a novel tracking system for tracking hundreds of fruit flies flying in a confined cubic flight arena. In addition to the proposed tracking algorithm, this work offers additional contributions in three aspects: body detection, orientation estimation, and data validation. To demonstrate the opportunities that the proposed system offers for generating high-throughput quantitative motion data, we conducted experiments on five experimental configurations. We also performed quantitative analysis of the kinematics, the spatial structure, and the motion patterns of fruit fly swarms. We found that there exists an asymptotic distance between fruit flies in swarms as the population density increases. Further, we discovered evidence of a repulsive response when the distance between fruit flies approached the asymptotic distance. Overall, the proposed tracking system presents a powerful method for studying flight behaviours of fruit flies in a three-dimensional environment.
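    The asymptotic-distance observation rests on a simple statistic, the mean nearest-neighbour distance within the swarm, sketched here on synthetic positions rather than the authors' data:

```python
# Mean nearest-neighbour distance of a swarm: the statistic behind the
# asymptotic-distance observation (positions here are synthetic examples).

import math

def mean_nn_distance(positions):
    """Average over all individuals of the distance to their nearest neighbour."""
    total = 0.0
    for i, p in enumerate(positions):
        total += min(math.dist(p, q) for j, q in enumerate(positions) if j != i)
    return total / len(positions)

swarm = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]  # three close flies, one far
print(round(mean_nn_distance(swarm), 3))
```

Plotting this statistic against population density is what reveals the asymptote reported in the paper.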

  2. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2005-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  3. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2004-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  4. Intraoperative visualization and assessment of electromagnetic tracking error

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Ungi, Tamas; Lasso, Andras; MacDonald, Andrew; Nanji, Sulaiman; Fichtinger, Gabor

    2015-03-01

    Electromagnetic tracking allows for increased flexibility in designing image-guided interventions; however, it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. The goal was to achieve this ideal in an open-source software implementation in a plug-and-play manner, without requiring programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, pivot calibrated for both trackers. Electromagnetic tracking error is defined as the difference in tool-tip position between the electromagnetic and optical readings. Multiple measurements are interpolated into a thin-plate spline transform visualized in real time using 3D Slicer. All tracked devices are used in a plug-and-play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess the reproducibility of the method, both with and without ferromagnetic objects placed in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that a quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platform. Results demonstrate the potential for visualizing electromagnetic tracking error in real time in intraoperative environments in feasibility clinical trials of image-guided interventions.
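    The error definition can be sketched directly; for the interpolation step, the sketch below substitutes simple inverse-distance weighting for the thin-plate spline transform used by the authors, so it illustrates the idea rather than their method:

```python
# Electromagnetic-tracking error sketch: the error is the distance between
# the stylus-tip position reported by the EM tracker and the optical ground
# truth. Scattered samples are interpolated over the workspace here with
# inverse-distance weighting (a simplified stand-in for the thin-plate
# spline used by the authors).

import math

def tip_error(p_em, p_opt):
    """Tracking error: distance between EM and optical tip positions."""
    return math.dist(p_em, p_opt)

def idw(samples, query, power=2):
    """Interpolate the error at `query` from (position, error) samples."""
    num = den = 0.0
    for pos, err in samples:
        d = math.dist(pos, query)
        if d == 0:
            return err               # query coincides with a sample
        w = 1.0 / d ** power
        num += w * err
        den += w
    return num / den

samples = [((0, 0, 0), tip_error((0, 0, 0), (0.5, 0, 0))),   # 0.5 mm error
           ((10, 0, 0), tip_error((10, 0, 0), (12, 0, 0)))]  # 2.0 mm error
print(round(idw(samples, (5, 0, 0)), 2))  # midway: average of 0.5 and 2.0
```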

  5. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  6. User-assisted video segmentation system for visual communication

    NASA Astrophysics Data System (ADS)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we decompose the video segmentation problem into three separate phases: user-assisted feature point selection, automatic tracking of the feature points, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, supplemented by a point insertion process that provides the feature points for the next frame's tracking.
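
    The "eigenvalue-based adjustment" for selecting reliable feature points is in the spirit of the classic minimum-eigenvalue (Shi-Tomasi) criterion. The sketch below is a generic illustration of that criterion, not the authors' exact procedure: pixels whose local gradient structure tensor has a large smaller eigenvalue are good (corner-like) features to track.

```python
import numpy as np
from scipy.signal import convolve2d

def min_eigenvalue_score(image, win=3):
    """Shi-Tomasi style score: the smaller eigenvalue of the local
    gradient structure tensor, computed at every pixel."""
    Iy, Ix = np.gradient(image.astype(float))
    k = np.ones((win, win))
    # Sum the tensor entries over a small window via 2-D convolution.
    Sxx = convolve2d(Ix * Ix, k, mode='same')
    Syy = convolve2d(Iy * Iy, k, mode='same')
    Sxy = convolve2d(Ix * Iy, k, mode='same')
    trace = Sxx + Syy
    det = Sxx * Syy - Sxy ** 2
    # Smaller eigenvalue of the 2x2 tensor [[Sxx, Sxy], [Sxy, Syy]].
    return trace / 2 - np.sqrt(np.maximum(trace ** 2 / 4 - det, 0))

# A synthetic image with a bright square: its corners score high, edge
# midpoints and flat regions score ~0, so thresholding picks corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
score = min_eigenvalue_score(img)
```

    Thresholding `score` (or taking local maxima) yields candidate feature points; in a user-assisted setting, the user's clicks would be snapped to the nearest high-scoring pixel.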

  7. What Visual Information Do Children and Adults Consider while Switching between Tasks? Eye-Tracking Investigation of Cognitive Flexibility Development

    ERIC Educational Resources Information Center

    Chevalier, Nicolas; Blaye, Agnes; Dufau, Stephane; Lucenet, Joanna

    2010-01-01

    This study investigated the visual information that children and adults consider while switching or maintaining object-matching rules. Eye movements of 5- and 6-year-old children and adults were collected with two versions of the Advanced Dimensional Change Card Sort, which requires switching between shape- and color-matching rules. In addition to…

  8. A Novel Method for Tracking Individuals of Fruit Fly Swarms Flying in a Laboratory Flight Arena

    PubMed Central

    Cheng, Xi En; Qian, Zhi-Ming; Wang, Shuo Hong; Jiang, Nan; Guo, Aike; Chen, Yan Qiu

    2015-01-01

    The growing interest in studying the social behaviours of swarming fruit flies, Drosophila melanogaster, has heightened the need for tools that provide quantitative motion data. To achieve this goal, multi-camera three-dimensional tracking technology is the key experimental gateway. We have developed a novel tracking system for tracking hundreds of fruit flies flying in a confined cubic flight arena. In addition to the proposed tracking algorithm, this work offers contributions in three further aspects: body detection, orientation estimation, and data validation. To demonstrate the opportunities that the proposed system offers for generating high-throughput quantitative motion data, we conducted experiments on five experimental configurations. We also performed quantitative analysis of the kinematics, spatial structure, and motion patterns of fruit fly swarms. We found that there exists an asymptotic distance between fruit flies in swarms as the population density increases. Further, we discovered evidence of a repulsive response when the distance between fruit flies approached this asymptotic distance. Overall, the proposed tracking system presents a powerful method for studying the flight behaviours of fruit flies in a three-dimensional environment. PMID:26083385

  9. Good Features to Correlate for Visual Tracking

    NASA Astrophysics Data System (ADS)

    Gundogdu, Erhan; Alatan, A. Aydin

    2018-05-01

    In recent years, correlation filters have shown dominant and spectacular results in visual object tracking. The types of features employed in this family of trackers significantly affect tracking performance. The ultimate goal is to utilize features that are robust and invariant to any kind of appearance change of the object, while predicting the object location as accurately as in the case of no appearance change. As deep learning based methods have emerged, the study of learning features for specific tasks has accelerated. For instance, discriminative visual tracking methods based on deep architectures have been studied with promising performance. Nevertheless, correlation filter based (CFB) trackers confine themselves to pre-trained networks trained for the object classification problem. To this end, this manuscript formulates the problem of learning deep fully convolutional features for CFB visual tracking. To learn the proposed model, a novel and efficient backpropagation algorithm is presented based on the loss function of the network. The proposed learning framework enables the network model to be flexible for a custom design and alleviates the dependency on networks trained for classification. Extensive performance analysis shows the efficacy of the proposed custom design in the CFB tracking framework. By fine-tuning the convolutional parts of a state-of-the-art network and integrating this model into a CFB tracker, the top-performing one of VOT2016, an 18% increase in expected average overlap is achieved and tracking failures are decreased by 25%, while maintaining superiority over state-of-the-art methods on the OTB-2013 and OTB-2015 tracking datasets.
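
    A correlation filter tracker of the kind discussed here can be reduced to a few lines. The following is a minimal single-sample MOSSE-style sketch in NumPy, operating on raw pixels rather than the learned deep features the paper proposes: a filter is trained in the Fourier domain to map the target patch to a Gaussian-shaped response, and the peak of the response on a new frame gives the target's displacement.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-4):
    """Learn H* from one training patch: H* = G . conj(F) / (F . conj(F) + lam)."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Desired response: a Gaussian peak at the patch centre.
    g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def detect(H_conj, patch):
    """Correlate in the Fourier domain; the response peak locates the target."""
    response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)

rng = np.random.default_rng(1)
target = rng.uniform(size=(64, 64))
H_conj = train_filter(target)
# A circularly shifted copy of the target: the response peak shifts with it.
shifted = np.roll(target, shift=(5, -7), axis=(0, 1))
peak = detect(H_conj, shifted)
```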

  10. Recent experiences with implementing a video based six degree of freedom measurement system for airplane models in a 20 foot diameter vertical spin tunnel

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Jones, Stephen B.; Fremaux, Charles M.

    1993-01-01

    A model space positioning system (MSPS), a state-of-the-art, real-time tracking system that provides the test engineer with on-line model pitch and spin rate information, is described. It is noted that the six-degree-of-freedom post-processor program will require additional programming effort, both in the automated tracking mode for high spin rates and in accuracy, to meet the measurement objectives. An independent multicamera system intended to augment the MSPS is studied using laboratory calibration methods based on photogrammetry to characterize the losses in various recording options. Data acquired to Super VHS tape encoded with Vertical Interval Time Code and transcribed to video disk are considered a reasonably priced choice for post editing and processing of video data.

  11. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) the optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan/tilt angles at capture, face visibility and others. This objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
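
    An objective function of the general kind described, trading off camera-subject distance, pan effort and face visibility, can be sketched as a simple weighted score fed to a greedy camera-to-subject assignment. The terms and weights below are illustrative assumptions, not those of the paper.

```python
import math

def capture_score(cam, subject, w_dist=1.0, w_angle=0.5, w_face=2.0):
    """Score how well a PTZ camera can capture a subject.
    Higher is better; terms and weights are illustrative."""
    dx, dy = subject["x"] - cam["x"], subject["y"] - cam["y"]
    dist = math.hypot(dx, dy)
    # Pan effort: angular distance from the camera's current pan angle
    # (no angle wrapping, for brevity).
    pan_needed = math.atan2(dy, dx)
    pan_effort = abs(pan_needed - cam["pan"])
    # Face visibility: 1 when the subject faces the camera, 0 when turned away.
    facing = max(0.0, -math.cos(subject["heading"] - pan_needed))
    return -w_dist * dist - w_angle * pan_effort + w_face * facing

def assign_cameras(cams, subjects):
    """Greedy hand-off: each subject gets the free camera scoring highest."""
    free, assignment = set(range(len(cams))), {}
    for s_idx, subj in enumerate(subjects):
        if not free:
            break
        best = max(free, key=lambda c: capture_score(cams[c], subj))
        assignment[s_idx] = best
        free.discard(best)
    return assignment

# One subject at (10, 0) facing the near camera at the origin.
assignment = assign_cameras(
    [{"x": 0.0, "y": 0.0, "pan": 0.0}, {"x": 100.0, "y": 0.0, "pan": 0.0}],
    [{"x": 10.0, "y": 0.0, "heading": math.pi}])
```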

  12. SVGenes: a library for rendering genomic features in scalable vector graphic format.

    PubMed

    Etherington, Graham J; MacLean, Daniel

    2013-08-01

    Drawing genomic features in attractive and informative ways is a key task in the visualization of genomics data. The Scalable Vector Graphics (SVG) format is a modern and flexible open standard that provides advanced features, including modular graphic design, rich web interactivity and animation within a suitable client. SVGs do not suffer from loss of image quality on re-scaling and allow individual elements of a graphic to be edited at the object level, independently of the whole image. These features make SVG a potentially useful format for preparing publication-quality figures of genomic objects such as genes or sequencing coverage, and for web applications that require rich user interaction with the graphical elements. SVGenes is a Ruby-language library that uses SVG primitives to render typical genomic glyphs through a simple and flexible Ruby interface. The library implements a simple Page object that spaces and contains horizontal Track objects, which in turn style, colour and position the features within them. Tracks are the level at which visual information is supplied, providing the full styling capability of the SVG standard. Genomic entities such as genes, transcripts and histograms are modelled as Glyph objects that are attached to a track and take advantage of SVG primitives to render the genomic features in a track as any of a selection of defined glyphs. The feature model within SVGenes is simple but flexible and not dependent on particular existing gene feature formats, meaning graphics for any existing dataset can easily be created without the need for conversion. The library is provided as a Ruby Gem from https://rubygems.org/gems/bio-svgenes under the MIT license, and open source code is available at https://github.com/danmaclean/bioruby-svgenes, also under the MIT License. dan.maclean@tsl.ac.uk.

  13. Robust visual tracking via multiscale deep sparse networks

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features. It has achieved significant success in resolving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without an offline pretraining process, and it effectively exploits robust and powerful features through online training on limited labelled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker adaptively selects the matched tracking network in accordance with the initial target's profile type. This preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.
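
    The particle filter framework underlying the tracker can be illustrated generically. This minimal 1-D sketch (Gaussian motion and likelihood models are assumptions for illustration) shows the propagate-weight-resample loop; in the tracker, the network's matching confidence would stand in for the likelihood term.

```python
import numpy as np

def particle_filter_step(particles, observation, rng, motion_std=1.0, obs_std=2.0):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = particles + rng.normal(0, motion_std, size=particles.shape)
    # Update: weight particles by the observation likelihood
    # (in the tracker this would be the appearance-matching confidence).
    weights = np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample: draw a new particle set proportional to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

rng = np.random.default_rng(2)
particles = rng.uniform(-50, 50, size=500)   # initial spread over the state space
true_pos = 10.0
for _ in range(20):
    obs = true_pos + rng.normal(0, 2.0)      # noisy observation of the target
    particles = particle_filter_step(particles, obs, rng)
estimate = particles.mean()                  # posterior mean as the state estimate
```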

  14. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm for processing colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly resolved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted in a prototype of the robotic tracking system. Experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
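
    A Kalman filter with a second-order (constant-acceleration) motion model, of the kind used here to predict the target state for the CT tracker, can be sketched per image axis as follows. The noise magnitudes are illustrative assumptions, and the trajectory is synthetic.

```python
import numpy as np

dt = 1.0  # one frame
# State [position, velocity, acceleration]; constant-acceleration model.
F = np.array([[1, dt, 0.5 * dt ** 2],
              [0, 1, dt],
              [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0]])   # only position is observed
Q = 0.01 * np.eye(3)              # process noise (assumption)
R = np.array([[4.0]])             # measurement noise (assumption)

def kalman_step(x, P, z):
    """One predict/update cycle; the predicted position would centre the
    candidate-patch search window for the tracker."""
    x = F @ x                           # predict state
    P = F @ P @ F.T + Q                 # predict covariance
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P

# Track a target moving with constant acceleration a = 0.5 px/frame^2.
x, P = np.zeros(3), 1000.0 * np.eye(3)  # vague prior
for t in range(1, 40):
    z = np.array([0.25 * t ** 2])       # noiseless quadratic trajectory
    x, P = kalman_step(x, P, z)
```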

  15. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm for processing colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly resolved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted in a prototype of the robotic tracking system. Experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  16. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect of the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is recovered from the sensor calibration results and is used to blend images, so that panoramas reflect the scene luminance more faithfully; this overcomes the limitation of stitching approaches that rely on smoothing alone. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that together cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
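
    Vignetting correction of the kind tested here can be sketched with the textbook cos⁴ falloff model: once a gain map is calibrated per camera, dividing the observed image by it recovers the scene radiance used for photometric blending. The cos⁴ model and parameters below are generic assumptions, not the paper's calibrated pattern.

```python
import numpy as np

def cos4_vignetting(h, w, f):
    """Ideal cos^4 radial falloff for a lens with focal length f (pixels)."""
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    cos_theta = f / np.sqrt(f ** 2 + r2)
    return cos_theta ** 4

h, w, f = 120, 160, 200.0
gain = cos4_vignetting(h, w, f)
# A uniformly lit scene as seen through the lens: darker toward the corners.
observed = 0.8 * gain
# Dividing by the calibrated gain map recovers the flat scene radiance,
# so overlapping images blend without visible brightness seams.
corrected = observed / gain
```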

  17. Fisheye Multi-Camera System Calibration for Surveying Narrow and Complex Architectures

    NASA Astrophysics Data System (ADS)

    Perfetti, L.; Polari, C.; Fassi, F.

    2018-05-01

    Narrow spaces and passages are not a rare encounter in cultural heritage; the shape and extent of these areas pose a serious challenge to any technique one may choose for surveying their 3D geometry, especially techniques that rely on stationary instrumentation such as terrestrial laser scanning. The ratio between spatial extent and cross-section width of many corridors and staircases can easily lead to distortion and drift of the 3D reconstruction because of the propagation of uncertainty. This paper investigates the use of fisheye photogrammetry to produce the 3D reconstruction of such spaces and presents tests aimed at constraining the degrees of freedom of the photogrammetric network, thereby containing the drift of long data sets as well. The idea is to employ a multi-camera system composed of several fisheye cameras and to implement distance and relative-orientation constraints, as well as pre-calibration of the internal parameters of each camera, within the bundle adjustment. To begin this investigation, we used the NCTech iSTAR panoramic camera as a rigid multi-camera system. The case study of the Amedeo Spire of the Milan Cathedral, which encloses a spiral staircase, is the stage for all the tests. Comparisons have been made between the results obtained with the multi-camera configuration, the auto-stitched equirectangular images and a data set obtained with a monocular fisheye configuration using a full-frame DSLR. Results show improved accuracy, down to millimetres, using the rigidly constrained multi-camera configuration.

  18. Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel

    2012-06-01

    The capability to track individuals across CCTV cameras is important for surveillance and forensics alike. However, it is laborious to do over multiple cameras, so an automated system is desirable. Several methods have been proposed in the literature, but their robustness against varying viewpoints and illumination is limited, and hence so is their performance in realistic settings. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains sufficient variety of viewpoints and illumination to allow benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.
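
    A training-free appearance matcher of the general kind described can be sketched with colour histograms and a simple similarity ranking. The descriptor and distance below are a generic illustration, not the paper's method: the probe person's histogram is compared against a gallery from other cameras, and candidates are ranked by similarity.

```python
import numpy as np

def colour_descriptor(patch, bins=8):
    """Per-channel intensity histogram, L1-normalised, concatenated."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 1))[0]
             for c in range(patch.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def rank_gallery(probe, gallery):
    """Rank gallery patches by Bhattacharyya similarity to the probe."""
    d_probe = colour_descriptor(probe)
    sims = [np.sum(np.sqrt(d_probe * colour_descriptor(g))) for g in gallery]
    return np.argsort(sims)[::-1]  # best match first

rng = np.random.default_rng(3)
person = rng.uniform(0.6, 0.9, size=(40, 20, 3))    # bright clothing
same = np.clip(person + rng.normal(0, 0.02, person.shape), 0, 1)
other = rng.uniform(0.0, 0.3, size=(40, 20, 3))     # dark clothing
ranking = rank_gallery(person, [other, same])       # same person should rank first
```

    Ranking by relative similarity, rather than thresholding an absolute score, sidesteps the need to calibrate a match/non-match cutoff across cameras with different lighting.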

  19. Flight of the dragonflies and damselflies.

    PubMed

    Bomphrey, Richard J; Nakata, Toshiyuki; Henningsson, Per; Lin, Huai-Ti

    2016-09-26

    This work is a synthesis of our current understanding of the mechanics, aerodynamics and visually mediated control of dragonfly and damselfly flight, with the addition of new experimental and computational data in several key areas. These are: the diversity of dragonfly wing morphologies, the aerodynamics of gliding flight, force generation in flapping flight, aerodynamic efficiency, comparative flight performance and pursuit strategies during predatory and territorial flights. New data are set in context by brief reviews covering anatomy at several scales, insect aerodynamics, neuromechanics and behaviour. We achieve a new perspective by means of a diverse range of techniques, including laser-line mapping of wing topographies, computational fluid dynamics simulations of finely detailed wing geometries, quantitative imaging using particle image velocimetry of on-wing and wake flow patterns, classical aerodynamic theory, photography in the field, infrared motion capture and multi-camera optical tracking of free flight trajectories in laboratory environments. Our comprehensive approach enables a novel synthesis of datasets and subfields that integrates many aspects of flight from the neurobiology of the compound eye, through the aeromechanical interface with the surrounding fluid, to flight performance under cruising and higher-energy behavioural modes. This article is part of the themed issue 'Moving in a moving medium: new perspectives on flight'. © 2016 The Authors.

  20. Flight of the dragonflies and damselflies

    PubMed Central

    Bomphrey, Richard J.; Nakata, Toshiyuki; Henningsson, Per; Lin, Huai-Ti

    2016-01-01

    This work is a synthesis of our current understanding of the mechanics, aerodynamics and visually mediated control of dragonfly and damselfly flight, with the addition of new experimental and computational data in several key areas. These are: the diversity of dragonfly wing morphologies, the aerodynamics of gliding flight, force generation in flapping flight, aerodynamic efficiency, comparative flight performance and pursuit strategies during predatory and territorial flights. New data are set in context by brief reviews covering anatomy at several scales, insect aerodynamics, neuromechanics and behaviour. We achieve a new perspective by means of a diverse range of techniques, including laser-line mapping of wing topographies, computational fluid dynamics simulations of finely detailed wing geometries, quantitative imaging using particle image velocimetry of on-wing and wake flow patterns, classical aerodynamic theory, photography in the field, infrared motion capture and multi-camera optical tracking of free flight trajectories in laboratory environments. Our comprehensive approach enables a novel synthesis of datasets and subfields that integrates many aspects of flight from the neurobiology of the compound eye, through the aeromechanical interface with the surrounding fluid, to flight performance under cruising and higher-energy behavioural modes. This article is part of the themed issue ‘Moving in a moving medium: new perspectives on flight’. PMID:27528779

  1. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. The robot attention processing is based on a novel vision processor, the multi-window vision system, developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these skills, involving the detection and tracking of salient visual features, was developed. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game and, later, using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking its movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.

  2. An intelligent space for mobile robot localization using a multi-camera system.

    PubMed

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  3. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    PubMed Central

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  4. Detection of gait characteristics for scene registration in video surveillance system.

    PubMed

    Havasi, László; Szlávik, Zoltán; Szirányi, Tamás

    2007-02-01

    This paper presents a robust walk-detection algorithm based on our symmetry approach, which can be used to extract gait characteristics from video-image sequences. To obtain a useful descriptor of a walking person, we temporally track the symmetries of the person's legs. Our method is suitable for use in indoor or outdoor surveillance scenes. Determining the leading leg of the walking subject is important, and the presented method can identify it from two successive walk steps (one walk cycle). We tested the accuracy of the presented walk-detection method in a possible application: image registration methods are presented which are applicable to multi-camera systems viewing human subjects in motion.

  5. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB introduces refraction, which generates calibration error; the theory of flat refractive geometry is employed to eliminate this error. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixels, respectively. The experimental results show that the proposed method is accurate and reliable.
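
    The bundle-adjustment step that minimises reprojection error can be illustrated in miniature. The sketch below refines only a camera translation for a single pinhole camera with identity rotation (a deliberate simplification of the multi-camera, refraction-aware problem), using SciPy's least-squares solver; the intrinsics and point cloud are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics

def project(points3d, t):
    """Pinhole projection of world points for a camera at translation t
    (identity rotation, for brevity)."""
    p = points3d + t                    # world -> camera frame
    uv = (K @ p.T).T
    return uv[:, :2] / uv[:, 2:3]       # perspective divide

rng = np.random.default_rng(4)
points3d = rng.uniform(-1, 1, size=(30, 3)) + np.array([0, 0, 5.0])
t_true = np.array([0.2, -0.1, 0.3])
observed = project(points3d, t_true)    # "detected" image points

def residuals(t):
    """Reprojection residuals: predicted minus observed pixel positions."""
    return (project(points3d, t) - observed).ravel()

# Bundle-adjustment-style refinement from a perturbed initial guess.
t_refined = least_squares(residuals, x0=np.zeros(3)).x
```

    A full bundle adjustment stacks residuals of this form over all cameras and board poses, adding the refraction model of the glass plate to the projection function.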

  6. Active contour-based visual tracking by integrating colors, shapes, and motions.

    PubMed

    Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen

    2013-05-01

    In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.
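
    The particle swarm optimization used above to capture abrupt global motion can be shown in a generic form. The sketch below runs a textbook PSO over a 2-D displacement, with a synthetic quadratic matching cost standing in for the contour/appearance score; the swarm parameters are conventional defaults, not the paper's settings.

```python
import numpy as np

def pso(cost, n_particles=30, iters=50, bounds=(-20, 20), seed=5):
    """Minimise cost(d) over 2-D displacements with standard PSO
    (inertia 0.7, cognitive/social weights 1.5)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(*bounds, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                               # per-particle best
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()      # global best
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# Synthetic matching cost whose minimum is the true inter-frame motion;
# the recovered displacement would initialise the contour in the next frame.
true_motion = np.array([7.0, -3.0])
gbest = pso(lambda d: float(np.sum((d - true_motion) ** 2)))
```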

  7. VisualEyes: a modular software system for oculomotor experimentation.

    PubMed

    Guo, Yi; Kim, Eun H; Alvarez, Tara L

    2011-03-25

    Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain.(1) However, developing a platform to present stimuli and store eye movements can require substantial programming, time, and cost. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. The VisualEyes System, however, has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements, and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device used to acquire eye movement responses, 2) the VisualEyes software, written in LabVIEW, which generates an array of stimuli and stores responses as text files, and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a scleral search coil, or a video image system. Typical eye movement stimuli, such as saccadic steps, vergence ramps, and vergence steps, will be shown with the corresponding responses. In this video report, we demonstrate the flexibility of the system to create numerous visual stimuli and record eye movements, which can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.

  8. A simple behaviour provides accuracy and flexibility in odour plume tracking--the robotic control of sensory-motor coupling in silkmoths.

    PubMed

    Ando, Noriyasu; Kanzaki, Ryohei

    2015-12-01

    Odour plume tracking is an essential behaviour for animal survival. A fundamental strategy for this is to move upstream and then across-stream. Male silkmoths, Bombyx mori, display this strategy as a pre-programmed sequential behaviour. They walk forward (surge) in response to the female sex pheromone and perform a zigzagging 'mating dance'. Though pre-programmed, the surge direction is modulated by bilateral olfactory input and optic flow. However, the nature of the interaction between these two sensory modalities and contribution of the resultant motor command to localizing an odour source are still unknown. We evaluated the ability of the silkmoth to localize an odour source under conditions of disturbed sensory-motor coupling, using a silkmoth-driven mobile robot. The significance of the bilateral olfaction of the moth was confirmed by inverting the olfactory input to the antennae, or its motor output. Inversion of the motor output induced consecutive circling, which was inhibited by covering the visual field of the moth. This suggests that the corollary discharge from the motor command and the reafference of self-generated optic flow generate compensatory signals to guide the surge accurately. Additionally, after inverting the olfactory input, the robot successfully tracked the odour plume by using a combination of behaviours. These results indicate that accurate guidance of the reflexive surge by integrating bilateral olfactory and visual information with innate pre-programmed behaviours increases the flexibility to track an odour plume even under disturbed circumstances. © 2015. Published by The Company of Biologists Ltd.

  9. Intermodal Attention Shifts in Multimodal Working Memory.

    PubMed

    Katus, Tobias; Grubert, Anna; Eimer, Martin

    2017-04-01

    Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top-down influences that optimize multimodal WM representations for behavioral goals.

  10. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably among the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is formulated as the optimization of an objective function, which can be efficiently solved by either convex sparse coding or locality-constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods in dynamic scenes.
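
    The convex sparse-coding subproblem mentioned above is an L1-regularized least-squares fit of the observation against a template dictionary. A generic lasso solver (ISTA soft-thresholding) illustrates the idea; the dictionary, lam, and iteration count are all assumptions for this sketch, not the paper's exact formulation:

```python
import numpy as np

def ista(D, y, lam=0.05, iters=1000):
    """Iterative shrinkage-thresholding for  min_x ||y - D x||^2 + lam * ||x||_1."""
    L = 2 * np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        z = x - 2 * D.T @ (D @ x - y) / L          # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Dictionary of 10 normalized object templates; the target matches template 2
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 10))
D /= np.linalg.norm(D, axis=0)
y = D[:, 2].copy()                                 # observed target appearance
code = ista(D, y)                                  # sparse code concentrated on index 2
```

    The sparsity of the recovered code is what makes the representation robust: occluded or corrupted observations produce large residuals rather than spurious dense combinations of templates.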

  11. Characterization of jellyfish turning using 3D-PTV

    NASA Astrophysics Data System (ADS)

    Xu, Nicole; Dabiri, John

    2017-11-01

    Aurelia aurita are oblate, radially symmetric jellyfish that consist of a gelatinous bell and subumbrellar muscle ring, which contracts to provide motive force. Swimming is typically modeled as a purely vertical motion; however, asymmetric activations of swim pacemakers (sensory organs that innervate the muscle at eight locations around the bell margin) result in turning and more complicated swim behaviors. More recent studies have examined flow fields around turning jellyfish, but the input/output relationship between locomotive controls and swim trajectories is unclear. To address this, bell kinematics for both straight swimming and turning are obtained using 3D particle tracking velocimetry (3D-PTV) by injecting biocompatible elastomer tags into the bell, illuminating the tank with ultraviolet light, and tracking the resulting fluorescent particles in a multi-camera setup. By understanding these kinematics in both natural and externally controlled free-swimming animals, we can connect neuromuscular control mechanisms to existing flow measurements of jellyfish turning for applications in designing more energy efficient biohybrid robots and underwater vehicles. NSF GRFP.
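
    Frame-to-frame particle linking in 3D-PTV is commonly done by nearest-neighbour matching of detected tag positions. A minimal greedy sketch (the displacement threshold and toy positions are assumptions for illustration, not data from the study):

```python
import numpy as np

def link_frames(p0, p1, max_disp=1.0):
    """Greedy nearest-neighbour linking of particle positions between two
    consecutive frames; returns index pairs (i in p0, j in p1)."""
    links, taken = [], set()
    for i, p in enumerate(p0):
        d = np.linalg.norm(p1 - p, axis=1)      # distances to all frame-1 particles
        for j in np.argsort(d):
            if d[j] > max_disp:                 # no candidate within reach
                break
            if int(j) not in taken:             # each frame-1 particle used once
                links.append((i, int(j)))
                taken.add(int(j))
                break
    return links

# Three tagged particles drifting by a small uniform displacement,
# detected in a shuffled order in the next frame
p0 = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 1.0], [0.0, 5.0, 2.0]])
p1 = p0[[2, 0, 1]] + 0.1
links = link_frames(p0, p1)
```

    Real PTV codes add predictive search windows and multi-frame consistency checks, but the core association step is this same nearest-neighbour assignment.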

  12. Time-Frequency Analysis Reveals Pairwise Interactions in Insect Swarms

    NASA Astrophysics Data System (ADS)

    Puckett, James G.; Ni, Rui; Ouellette, Nicholas T.

    2015-06-01

    The macroscopic emergent behavior of social animal groups is a classic example of dynamical self-organization, and is thought to arise from the local interactions between individuals. Determining these interactions from empirical data sets of real animal groups, however, is challenging. Using multicamera imaging and tracking, we studied the motion of individual flying midges in laboratory mating swarms. By performing a time-frequency analysis of the midge trajectories, we show that the midge behavior can be segmented into two distinct modes: one that is independent and composed of low-frequency maneuvers, and one that consists of higher-frequency nearly harmonic oscillations conducted in synchrony with another midge. We characterize these pairwise interactions, and make a hypothesis as to their biological function.
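
    The mode segmentation described above rests on the frequency content of trajectory segments. A toy version with a synthetic 1-D coordinate that switches from slow wandering to a near-harmonic oscillation (the 0.3 Hz and 7 Hz values are illustrative, not the paper's measured frequencies):

```python
import numpy as np

fs = 100.0                          # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
# Slow "independent" motion for the first 10 s, then a near-harmonic
# oscillation mimicking the synchronized pairwise mode
x = np.where(t < 10,
             np.sin(2 * np.pi * 0.3 * t),
             0.5 * np.sin(2 * np.pi * 7.0 * t))

def dominant_freq(segment, fs):
    """Peak of the windowed amplitude spectrum, ignoring the DC bin."""
    spec = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    freqs = np.fft.rfftfreq(len(segment), 1 / fs)
    return freqs[1:][spec[1:].argmax()]

f_lo = dominant_freq(x[t < 10], fs)   # low-frequency independent mode
f_hi = dominant_freq(x[t >= 10], fs)  # higher-frequency paired mode
```

    In the actual analysis a sliding-window (time-frequency) transform is used so the mode boundary itself can be localized in time, rather than fixed in advance as here.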

  13. Using a smart wheelchair as a gaming device for floor-projected games: a mixed-reality environment for training powered-wheelchair driving skills.

    PubMed

    Secoli, R; Zondervan, D; Reinkensmeyer, D

    2012-01-01

    For children with a severe disability, such as can arise from cerebral palsy, becoming independent in mobility is a critical goal. Currently, however, driver's training for powered wheelchair use is labor intensive, requiring hand-over-hand assistance from a skilled therapist to keep the trainee safe. This paper describes the design of a mixed reality environment for semi-autonomous training of wheelchair driving skills. In this system, the wheelchair is used as the gaming input device, and users train driving skills by maneuvering through floor-projected games created with a multi-projector system and a multi-camera tracking system. A force feedback joystick assists in steering and enhances safety.

  14. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
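
    The feature pipeline described above, Gabor responses sampled on a regular grid followed by a cosine-similarity nearest-neighbour match, can be sketched as follows. All filter parameters, grid spacing, and the striped toy "faces" are assumptions for illustration:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0):
    """Real part of a 2-D Gabor filter oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # coordinate along the carrier
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, grid_step=8, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Sample Gabor filter magnitudes on a regular grid over the face image."""
    feats = []
    for theta in thetas:
        resp = np.abs(convolve2d(img, gabor_kernel(theta=theta), mode="same"))
        feats.append(resp[::grid_step, ::grid_step].ravel())
    return np.concatenate(feats)

def cosine_nn(query, gallery):
    """Index of the gallery vector most similar to query under cosine similarity."""
    sims = [g @ query / (np.linalg.norm(g) * np.linalg.norm(query)) for g in gallery]
    return int(np.argmax(sims))

# Toy "faces" with distinct dominant orientations; match a noisy query
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
face_a, face_b = np.sin(yy / 3.0), np.sin(xx / 3.0)
query = face_a + 0.1 * rng.normal(size=face_a.shape)
match = cosine_nn(gabor_features(query),
                  [gabor_features(face_a), gabor_features(face_b)])
```

    In the described module the same feature vector feeds both expression interpretation and face model construction; only the gallery contents differ.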

  15. Control of articulated snake robot under dynamic active constraints.

    PubMed

    Kwok, Ka-Wai; Vitiello, Valentina; Yang, Guang-Zhong

    2010-01-01

    Flexible, ergonomically enhanced surgical robots have important applications to transluminal endoscopic surgery, for which path-following and dynamic shape conformance are essential. In this paper, kinematic control of a snake robot for motion stabilisation under dynamic active constraints is addressed. The main objective is to enable the robot to track the visual target accurately and steadily on deforming tissue whilst conforming to pre-defined anatomical constraints. The motion tracking can also be augmented with manual control. By taking into account the physical limits in terms of maximum frequency response of the system (manifested as a delay between the input of the manipulator and the movement of the end-effector), we show the importance of visual-motor synchronisation for performing accurate smooth pursuit movements. Detailed user experiments are performed to demonstrate the practical value of the proposed control mechanism.

  16. First clinical use of the EchoTrack guidance approach for radiofrequency ablation of thyroid gland nodules.

    PubMed

    Franz, Alfred Michael; Seitel, Alexander; Bopp, Nasrin; Erbelding, Christian; Cheray, Dominique; Delorme, Stefan; Grünwald, Frank; Korkusuz, Hüdayi; Maier-Hein, Lena

    2017-06-01

    Percutaneous radiofrequency ablation (RFA) of thyroid nodules is an alternative to surgical resection that offers the benefits of minimal scars for the patient, lower complication rates, and shorter treatment times. Ultrasound (US) is the preferred modality for guiding these procedures. The needle is usually kept within the US scanning plane to ensure needle visibility. However, this restricts flexibility in both transducer and needle movement and renders the procedure difficult, especially for inexperienced users. Existing navigation solutions often involve electromagnetic (EM) tracking, which requires placement of an external field generator (FG) in close proximity of the intervention site in order to avoid distortion of the EM field. This complicates the clinical workflow as placing the FG while ensuring that it neither restricts the physician's workspace nor affects tracking accuracy is awkward and time-consuming. The EchoTrack concept overcomes these issues by combining the US probe and the EM FG in one modality, simultaneously providing both real-time US and tracking data without requiring the placement of an external FG for tracking. We propose a system and workflow to use EchoTrack for RFA of thyroid nodules. According to our results, the overall error of the EchoTrack system resulting from errors related to tracking and calibration is below 2 mm. Navigated thyroid RFA with the proposed concept is clinically feasible. Motion of internal critical structures relative to external markers can be up to several millimeters in extreme cases. The EchoTrack concept with its simple setup, flexibility, improved needle visualization, and additional guidance information has high potential to be clinically used for thyroid RFA.

  17. Magneto-optical tracking of flexible laparoscopic ultrasound: model-based online detection and correction of magnetic tracking errors.

    PubMed

    Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir

    2009-06-01

    Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. An initial 3-D rms error of 6.91 mm was reduced to 3.15 mm.
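
    The reported sensitivity and specificity follow the standard detector definitions: an error is "real" when the true tracking error exceeds the 5 mm threshold, and the detector either flags it or not. A small sketch with toy data (the sample values are hypothetical):

```python
import numpy as np

def sens_spec(true_err_mm, flagged, threshold=5.0):
    """Sensitivity and specificity of an error detector, where a sample is a
    true error when its tracking error exceeds `threshold` millimetres."""
    actual = true_err_mm > threshold
    tp = np.sum(actual & flagged)         # real errors correctly flagged
    tn = np.sum(~actual & ~flagged)       # clean samples correctly passed
    return tp / np.sum(actual), tn / np.sum(~actual)

# Toy run of 6 samples in which the detector misses one real error
err = np.array([7.2, 1.0, 6.1, 0.5, 9.3, 2.2])
flag = np.array([True, False, True, False, False, False])
sens, spec = sens_spec(err, flag)
```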

  18. A fast and flexible panoramic virtual reality system for behavioural and electrophysiological experiments.

    PubMed

    Takalo, Jouni; Piironen, Arto; Honkanen, Anna; Lempeä, Mikko; Aikio, Mika; Tuukkanen, Tuomas; Vähäsöyrinki, Mikko

    2012-01-01

    Ideally, neuronal functions would be studied by performing experiments with unconstrained animals whilst they behave in their natural environment. Although this is not feasible currently for most animal models, one can mimic the natural environment in the laboratory by using a virtual reality (VR) environment. Here we present a novel VR system based upon a spherical projection of computer generated images using a modified commercial data projector with an add-on fish-eye lens. This system provides equidistant visual stimulation with extensive coverage of the visual field, high spatio-temporal resolution and flexible stimulus generation using a standard computer. It also includes a track-ball system for closed-loop behavioural experiments with walking animals. We present a detailed description of the system and characterize it thoroughly. Finally, we demonstrate the VR system's performance whilst operating in closed-loop conditions by showing the movement trajectories of the cockroaches during exploratory behaviour in a VR forest.

  19. Modified method of recording and reproducing natural head position with a multicamera system and a laser level.

    PubMed

    Liu, Xiao-jing; Li, Qian-qian; Pang, Yuan-jie; Tian, Kai-yue; Xie, Zheng; Li, Zi-li

    2015-06-01

    As computer-assisted surgical design becomes increasingly popular in maxillofacial surgery, recording patients' natural head position (NHP) and reproducing it in the virtual environment are vital for preoperative design and postoperative evaluation. Our objective was to test the repeatability and accuracy of recording NHP using a multicamera system and a laser level. A laser level was used to project a horizontal reference line on a physical model, and a 3-dimensional image was obtained using a multicamera system. In surgical simulation software, the recorded NHP was reproduced in the virtual head position by registering the coordinate axes with the horizontal reference on both the frontal and lateral views. The repeatability and accuracy of the method were assessed using a gyroscopic procedure as the gold standard. The intraclass correlation coefficients for pitch and roll were 0.982 (0.966, 0.991) and 0.995 (0.992, 0.998), respectively, indicating a high degree of repeatability. Regarding accuracy, the limits of agreement in orientation between the new method and the gold standard were (-0.69°, 1.71°) for pitch and (-0.92°, 1.20°) for roll; differences of this size have no clinical significance. This method of recording and reproducing NHP with a multicamera system and a laser level is repeatable, accurate, and clinically feasible. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
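
    The reported agreement ranges are Bland-Altman-style 95% limits of agreement between the two measurement methods. A sketch of how such limits are computed (the sample angles below are hypothetical, not the study's data):

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman 95% limits of agreement between paired measurements:
    mean difference (bias) plus/minus 1.96 standard deviations."""
    d = np.asarray(method_a) - np.asarray(method_b)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias - half_width, bias + half_width

pitch_new  = [1.2, -0.3, 0.8, 2.1, -1.0]   # hypothetical pitch angles, degrees
pitch_gold = [1.0, -0.5, 0.2, 1.6, -1.2]   # gyroscopic gold standard
lo, hi = limits_of_agreement(pitch_new, pitch_gold)
```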

  20. Robust pedestrian detection and tracking from a moving vehicle

    NASA Astrophysics Data System (ADS)

    Tuong, Nguyen Xuan; Müller, Thomas; Knoll, Alois

    2011-01-01

    In this paper, we address the problem of multi-person detection, tracking and distance estimation in a complex scenario using multiple cameras. Specifically, we are interested in a vision system for supporting the driver in avoiding unwanted collisions with pedestrians. We propose an approach using Histograms of Oriented Gradients (HOG) to detect pedestrians in static images and a particle filter as a robust tracking technique to follow targets from frame to frame. Because a full depth map requires expensive computation, we extract depth information for targets using the Direct Linear Transformation (DLT) to reconstruct the 3D coordinates of corresponding points found by running Speeded Up Robust Features (SURF) on the two input images. Using the particle filter, the proposed tracker can efficiently handle target occlusions in a simple background environment. However, to achieve reliable performance in complex scenarios with frequent target occlusions and a complex cluttered background, results from the detection module are integrated to provide feedback and recover the tracker from tracking failures due to the complexity of the environment and the variability of the target appearance model. The proposed approach is evaluated on different data sets, both in a simple background scenario and in a cluttered background environment. The results show that, by integrating detector and tracker, reliable and stable performance is possible even if occlusions occur frequently in a highly complex environment. As a result, a vision-based collision avoidance system for an intelligent car can be achieved.
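
    The DLT triangulation step used for distance estimation is standard: each view contributes two linear constraints on the homogeneous 3D point, and the point is read off the null space via SVD. A minimal two-view sketch (the toy camera geometry is assumed, not from the paper):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)            # null vector = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose and a 1 m baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    In practice the SURF correspondences carry matching noise, so the recovered depth is filtered over time alongside the particle-filter track.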

  1. A multi-camera system for real-time pose estimation

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color, and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations are obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
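
    The abstract does not reproduce the paper's full projection equations, but the underlying spherical-head geometry admits a simple scalar illustration: a feature at yaw angle θ on a sphere of radius R appears at horizontal image offset roughly R·sin(θ) under orthographic projection, so the yaw can be inverted with an arcsine. This is a simplified stand-in, not the paper's actual formulation:

```python
import numpy as np

def yaw_from_offset(x_offset, head_radius):
    """Yaw angle (radians) of a facial feature under a spherical head model:
    the horizontal image offset of the feature plane is ~ R * sin(yaw)."""
    return np.arcsin(np.clip(x_offset / head_radius, -1.0, 1.0))

# A feature displaced by half the head radius corresponds to ~30 degrees yaw
yaw = yaw_from_offset(5.0, 10.0)
```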

  2. Interest in and perceived barriers to flexible-track residencies in general surgery: a national survey of residents and program directors.

    PubMed

    Abbett, Sarah K; Hevelone, Nathanael D; Breen, Elizabeth M; Lipsitz, Stuart R; Peyre, Sarah E; Ashley, Stanley W; Smink, Douglas S

    2011-01-01

    The American Board of Surgery now permits general surgery residents to complete their clinical training over a 6-year period. Despite this new policy, the level of interest in flexible scheduling remains undefined. We sought to determine why residents and program directors (PDs) are interested in flexible tracks and to understand implementation barriers. National survey. All United States general surgery residency programs that participate in the Association of Program Directors in Surgery listserv. PDs and categorical general surgery residents in the United States. Attitudes about flexible tracks in surgery training. A flexible track was defined as a schedule that allows residents to pursue nonclinical time during residency with resulting delay in residency completion. Of the 748 residents and 81 PDs who responded, 505 residents and 45 PDs were supportive of flexible tracks (68% vs 56%, p = 0.03). Residents and PDs both were interested in flexible tracks to pursue research (86% vs 82%, p = 0.47) and child bearing (69% vs 58%, p = 0.13), but residents were more interested in pursuing international work (74% vs 53%, p = 0.004) and child rearing (63% vs 44%, p = 0.02). Although 71% of residents believe that flexible-track residents would not be respected as the equal of other residents, only 17% of PDs indicated they would not respect flexible-track residents (p < 0.001). Most residents and PDs support flexible tracks, although they differ in their motivation and perceived barriers. This finding lends support to the new policy of the American Board of Surgery. Copyright © 2011 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  3. Intelligent video storage of visual evidences on site in fast deployment

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Bastide, Arnaud; Delaigle, Jean-Francois

    2004-07-01

    In this article we present a generic, flexible, scalable and robust approach for an intelligent real-time forensic visual system. The proposed implementation can be rapidly deployed and requires minimal logistic support, as it embeds low-complexity devices (PCs and cameras) that communicate through a wireless network. The goal of these advanced tools is to provide intelligent video storage of potential video evidence for fast intervention during deployment around a hazardous sector after a terrorist attack, a disaster, an air crash, or before an attempted one. Advanced video analysis tools, such as segmentation and tracking, are provided to support intelligent storage and annotation.

  4. Visualizing tumor evolution with the fishplot package for R.

    PubMed

    Miller, Christopher A; McMichael, Joshua; Dang, Ha X; Maher, Christopher A; Ding, Li; Ley, Timothy J; Mardis, Elaine R; Wilson, Richard K

    2016-11-07

    Massively-parallel sequencing at depth is now enabling tumor heterogeneity and evolution to be characterized in unprecedented detail. Tracking these changes in clonal architecture often provides insight into therapeutic response and resistance. In complex cases involving multiple timepoints, standard visualizations, such as scatterplots, can be difficult to interpret. Current data visualization methods are also typically manual and laborious, and often only approximate subclonal fractions. We have developed an R package that accurately and intuitively displays changes in clonal structure over time. It requires simple input data and produces illustrative and easy-to-interpret graphs suitable for diagnosis, presentation, and publication. The simplicity, power, and flexibility of this tool make it valuable for visualizing tumor evolution, and it has potential utility in both research and clinical settings. The fishplot package is available at https://github.com/chrisamiller/fishplot.

  5. A simulation study of the flight dynamics of elastic aircraft. Volume 2: Data

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Davidson, John B.; Schmidt, David K.

    1987-01-01

    The simulation experiment described addresses the effects of structural flexibility on the dynamic characteristics of a generic family of aircraft. The simulation was performed using the NASA Langley VMS simulation facility. The vehicle models were obtained as part of this research project. The simulation results include complete response data and subjective pilot ratings and comments and so allow a variety of analyses. The subjective ratings and analysis of the time histories indicate that increased flexibility can lead to increased tracking errors, degraded handling qualities, and changes in the frequency content of the pilot inputs. These results, furthermore, are significantly affected by the visual cues available to the pilot.

  6. SLAMM: Visual monocular SLAM with continuous mapping using multiple maps

    PubMed Central

    Md. Sabri, Aznul Qalid; Loo, Chu Kiong; Mansoor, Ali Mohammed

    2018-01-01

    This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure, and later merges maps at the event of loop closure. Similarly, maps generated from multiple robots are merged without prior knowledge of their relative poses, which makes this algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as it is in ORB-SLAM, and the retrieved map can reach up to 90 percent more in terms of information preservation, depending on tracking loss and loop closure events. For the benefit of the community, the source code along with a framework to be run with a Bebop drone is made available at https://github.com/hdaoud/ORBSLAMM. PMID:29702697
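
    Merging two monocular maps at loop closure requires estimating the similarity transform (scale, rotation, translation) between them from matched keyframe positions. A standard way to do this is Umeyama's closed-form alignment; this is a generic sketch of that step, not SLAMM's actual merging code:

```python
import numpy as np

def align_sim3(src, dst):
    """Least-squares similarity transform with dst ≈ s * R @ src + t,
    estimated from matched 3-D point sets (Umeyama's method)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    cs, cd = src - mu_s, dst - mu_d
    H = cd.T @ cs / len(src)                    # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:               # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / ((cs ** 2).sum() / len(src))
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform: scale 2, 30-degree yaw, translation (1, 2, 3)
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
dst = 2.0 * src @ R_true.T + np.array([1.0, 2.0, 3.0])
s, R, t = align_sim3(src, dst)
```

    The scale term is what distinguishes monocular map merging from the rigid (SE(3)) case: each monocular map has its own arbitrary scale that must be reconciled.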

  7. The influence of track modelling options on the simulation of rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Di Gialleonardo, Egidio; Braghin, Francesco; Bruni, Stefano

    2012-09-01

    This paper investigates the effect of different models for track flexibility on the simulation of railway vehicle running dynamics on tangent and curved track. To this end, a multi-body model of the rail vehicle is defined including track flexibility effects on three levels of detail: a perfectly rigid pair of rails, a sectional track model and a three-dimensional finite element track model. The influence of the track model on the calculation of the nonlinear critical speed is pointed out and it is shown that neglecting the effect of track flexibility results in an overestimation of the critical speed by more than 10%. Vehicle response to stochastic excitation from track irregularity is also investigated, analysing the effect of track flexibility models on the vertical and lateral wheel-rail contact forces. Finally, the effect of the track model on the calculation of dynamic forces produced by wheel out-of-roundness is analysed, showing that peak dynamic loads are very sensitive to the track model used in the simulation.

  8. Multiview face detection based on position estimation over multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

    In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the face regions back to 3-D space for correspondence. However, inevitable false detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate head position and face direction in real video sequences, even under serious occlusion.

  9. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the differences between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), the particle seeding density, and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with the single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
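
    Both PIV variants ultimately estimate displacement by cross-correlating interrogation windows between exposures. A minimal 2-D sketch of that core step using FFT-based circular correlation (synthetic data, not from either compared code base):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows via
    FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # Wrap peak indices into the signed range [-N/2, N/2)
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic particle-image pair with a known shift of (3, -5) pixels
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -5), axis=(0, 1))
dy, dx = piv_displacement(frame, shifted)
```

    Production PIV codes refine this integer estimate with sub-pixel peak fitting; in Tomo-PIV the same correlation is carried out on reconstructed 3-D voxel volumes.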

  10. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
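    The flavour of a Kalman-based multicamera fusion can be sketched in a few lines. This is a minimal illustrative filter, not the authors' algorithm: it fuses position measurements from two hypothetical cameras with made-up noise levels by stacking them into a single measurement vector.

    ```python
    import numpy as np

    # Toy Kalman fusion of two camera measurements of a static 3D position.
    # Motion model, noise levels, and measurements are all illustrative.
    def kalman_fuse(z1, z2, x, P, R1=0.04, R2=0.09, Q=1e-4):
        """One predict/update step; z1, z2 are per-camera position measurements."""
        n = len(x)
        x_pred = x                                    # identity motion model
        P_pred = P + Q * np.eye(n)
        H = np.vstack([np.eye(n), np.eye(n)])         # both cameras observe the state
        R = np.diag([R1] * n + [R2] * n)              # per-camera measurement noise
        z = np.concatenate([z1, z2])
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(n) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.zeros(3), np.eye(3)
    for _ in range(50):
        x, P = kalman_fuse(np.array([1.0, 2.0, 3.0]),      # camera 1 reading
                           np.array([1.1, 1.9, 3.05]), x, P)  # camera 2 reading
    print(np.round(x, 3))   # settles near the noise-weighted combination
    ```

    Because camera 1 is modelled as less noisy (R1 < R2), the estimate settles closer to its readings, which is the basic robustness-to-sensor-defects argument made above.
    
    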

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ang; Song, Shuaiwen; Brugel, Eric

    To continuously comply with Moore's Law, modern parallel machines have become increasingly complex, and effectively tuning application performance for them has therefore become a daunting task. Moreover, identifying performance bottlenecks at the application and architecture levels, as well as evaluating various optimization strategies, becomes extremely difficult when numerous correlated factors are entangled. To tackle these challenges, we present a visual analytical model named “X”. It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.

  12. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have yielded higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps). Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. Detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial conducted in August 2015.

  13. Improved Visual Cognition through Stroboscopic Training

    PubMed Central

    Appelbaum, L. Gregory; Schroeder, Julia E.; Cain, Matthew S.; Mitroff, Stephen R.

    2011-01-01

    Humans have a remarkable capacity to learn and adapt, but surprisingly little research has demonstrated generalized learning in which new skills and strategies can be used flexibly across a range of tasks and contexts. In the present work we examined whether generalized learning could result from visual–motor training under stroboscopic visual conditions. Individuals were assigned to either an experimental condition that trained with stroboscopic eyewear or to a control condition that underwent identical training with non-stroboscopic eyewear. The training consisted of multiple sessions of athletic activities during which participants performed simple drills such as throwing and catching. To determine if training led to generalized benefits, we used computerized measures to assess perceptual and cognitive abilities on a variety of tasks before and after training. Computer-based assessments included measures of visual sensitivity (central and peripheral motion coherence thresholds), transient spatial attention (a useful field of view – dual task paradigm), and sustained attention (multiple-object tracking). Results revealed that stroboscopic training led to significantly greater re-test improvement in central visual field motion sensitivity and transient attention abilities. No training benefits were observed for peripheral motion sensitivity or peripheral transient attention abilities, nor were benefits seen for sustained attention during multiple-object tracking. These findings suggest that stroboscopic training can effectively improve some, but not all aspects of visual perception and attention. PMID:22059078

  14. Fusion of locomotor maneuvers, and improving sensory capabilities, give rise to the flexible homing strikes of juvenile zebrafish

    PubMed Central

    Westphal, Rebecca E.; O'Malley, Donald M.

    2013-01-01

    At 5 days post-fertilization and 4 mm in length, zebrafish larvae are successful predators of mobile prey items. The tracking and capture of 200 μm long Paramecia requires efficient sensorimotor transformations and precise neural controls that activate axial musculature for orientation and propulsion, while coordinating jaw muscle activity to engulf them. Using high-speed imaging, we report striking changes across ontogeny in the kinematics, structure and efficacy of zebrafish feeding episodes. Most notably, the discrete tracking maneuvers used by larval fish (turns, forward swims) become fused with prey capture swims to form the continuous, fluid homing strikes of juvenile and adult zebrafish. Across this same developmental time frame, the duration of feeding episodes becomes much shorter, with strikes occurring at broader angles and from much greater distances than seen with larval zebrafish. Moreover, juveniles use a surprisingly diverse array of motor patterns that constitute a flexible predatory strategy. This enhances the ability of zebrafish to capture more mobile prey items such as Artemia. Visually-guided tracking is complemented by the mechanosensory lateral line system. Neomycin ablation of lateral line hair cells reduced the accuracy of strikes and overall feeding rates, especially when neomycin-treated larvae and juveniles were placed in the dark. Darkness by itself reduced the distance from which strikes were launched, as visualized by infrared imaging. Rapid growth and changing morphology, including ossification of skeletal elements and differentiation of control musculature, present challenges for sustaining and enhancing predatory capabilities. The concurrent expansion of the cerebellum and subpallium (an ancestral basal ganglia) may contribute to the emergence of juvenile homing strikes, whose ontogeny possibly mirrors a phylogenetic expansion of motor capabilities. PMID:23761739

  15. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to demonstrate the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  16. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to demonstrate the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  17. Robotically assisted ureteroscopy for kidney exploration

    NASA Astrophysics Data System (ADS)

    Talari, Hadi F.; Monfaredi, Reza; Wilson, Emmanuel; Blum, Emily; Bayne, Christopher; Peters, Craig; Zhang, Anlin; Cleary, Kevin

    2017-03-01

    Ureteroscopy is a minimally invasive procedure for the diagnosis and treatment of urinary tract pathology. Ergonomic and visualization challenges, as well as radiation exposure, are limitations of conventional ureteroscopy. We have therefore developed a robotic system to "power drive" a flexible ureteroscope with 3D tip tracking and pre-operative image overlay. The proposed system was evaluated using a kidney phantom registered to pre-operative MR images. Initial experiments show the potential of the device to provide additional assistance, precision, and guidance during urology procedures.

  18. Recent Advancements in the Infrared Flow Visualization System for the NASA Ames Unitary Plan Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Garbeff, Theodore J., II; Baerny, Jennifer K.

    2017-01-01

    The following details recent efforts undertaken at the NASA Ames Unitary Plan wind tunnels to design and deploy an advanced, production-level infrared (IR) flow visualization data system. Highly sensitive IR cameras, coupled with in-line image processing, have enabled the visualization of wind tunnel model surface flow features as they develop in real time. Boundary layer transition, shock impingement, junction flow, vortex dynamics, and buffet are routinely observed in both transonic and supersonic flow regimes, all without the need for dedicated ramps in test section total temperature. Successful measurements have been performed on wing-body sting-mounted test articles, semi-span floor-mounted aircraft models, and sting-mounted launch vehicle configurations. The unique requirements of imaging in production wind tunnel testing have led to advancements in the deployment of advanced IR cameras in a harsh test environment, robust data acquisition storage and workflow, real-time image processing algorithms, and evaluation of optimal surface treatments. The addition of a multi-camera IR flow visualization data system to the Ames UPWT has demonstrated itself to be a valuable analysis tool in the study of new and old aircraft/launch vehicle aerodynamics and has provided new insight for the evaluation of computational techniques.

  19. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to identify overlapping pixels, and finally a boundary resampling algorithm blends the images. Simulation results demonstrate the effectiveness of our method.
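    The homography step described in this kind of pipeline can be sketched with a direct linear transform (DLT). The sketch below is illustrative rather than the authors' code: in practice the correspondences would come from SURF keypoint matching, whereas here they are generated from a known ground-truth homography so the estimate can be checked.

    ```python
    import numpy as np

    # Estimate a 3x3 homography H from >= 4 point correspondences by the DLT:
    # each correspondence contributes two linear constraints on the 9 entries
    # of H, and the solution is the null-space vector of the stacked system.
    def estimate_homography(src, dst):
        """src, dst: (N, 2) arrays of matched points; returns H with H[2,2] = 1."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        H = Vt[-1].reshape(3, 3)          # smallest-singular-value vector
        return H / H[2, 2]

    # Synthetic check: map points through a known homography, then recover it.
    H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
    src = np.array([[0, 0], [640, 0], [640, 480], [0, 480], [320, 240]], float)
    pts = np.c_[src, np.ones(len(src))] @ H_true.T
    dst = pts[:, :2] / pts[:, 2:]         # dehomogenize projected points
    H_est = estimate_homography(src, dst)
    print(np.allclose(H_est, H_true, atol=1e-4))
    ```

    With the homography in hand, overlapping pixels are found by warping one view's coordinates into the other, which is the role it plays in the stitching described above.
    
    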

  20. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to identify overlapping pixels, and finally a boundary resampling algorithm blends the images. Simulation results demonstrate the effectiveness of our method.

  1. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of the issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experimental results are shown for a multi-camera photogrammetric system that was calibrated three times; stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
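    The flavour of a simulation-based stability check can be conveyed with a toy example. This is not the paper's method: it simply perturbs one interior orientation parameter (the principal point) of an assumed pinhole camera and quantifies the image-space displacement of reprojected object points; all numbers are illustrative.

    ```python
    import numpy as np

    # Pinhole projection of Nx3 camera-frame points with intrinsics K.
    def project(points, K):
        uvw = points @ K.T
        return uvw[:, :2] / uvw[:, 2:]

    rng = np.random.default_rng(1)
    pts = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))  # simulated object points

    K_cal = np.array([[1500.0, 0, 640], [0, 1500.0, 480], [0, 0, 1]])
    K_drift = K_cal.copy()
    K_drift[:2, 2] += [2.0, -1.5]      # principal point drifted by (2, -1.5) px

    # Image-space effect of the drift: RMS reprojection displacement in pixels.
    offsets = project(pts, K_drift) - project(pts, K_cal)
    rmse = np.sqrt(np.mean(np.sum(offsets**2, axis=1)))
    print(round(rmse, 3))   # a pure principal-point shift moves every point equally
    ```

    A full stability analysis of the kind described would propagate such perturbations, including relative orientation changes between cameras, through to the reconstructed 3D coordinates rather than stopping at image space.
    
    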

  2. Aerial multi-camera systems: Accuracy and block triangulation issues

    NASA Astrophysics Data System (ADS)

    Rupnik, Ewelina; Nex, Francesco; Toschi, Isabella; Remondino, Fabio

    2015-03-01

    Oblique photography has reached its maturity and has now been adopted for several applications. The number and variety of multi-camera oblique platforms available on the market is continuously growing. So far, few attempts have been made to study the influence of the additional cameras on the behaviour of the image block and comprehensive revisions to existing flight patterns are yet to be formulated. This paper looks into the precision and accuracy of 3D points triangulated from diverse multi-camera oblique platforms. Its coverage is divided into simulated and real case studies. Within the simulations, different imaging platform parameters and flight patterns are varied, reflecting both current market offerings and common flight practices. Attention is paid to the aspect of completeness in terms of dense matching algorithms and 3D city modelling - the most promising application of such systems. The experimental part demonstrates the behaviour of two oblique imaging platforms in real-world conditions. A number of Ground Control Point (GCP) configurations are adopted in order to point out the sensitivity of tested imaging networks and arising block deformations. To stress the contribution of slanted views, all scenarios are compared against a scenario in which exclusively nadir images are used for evaluation.

  3. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of the issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experimental results are shown for a multi-camera photogrammetric system that was calibrated three times; stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  4. GenomeGraphs: integrated genomic data visualization with R.

    PubMed

    Durinck, Steffen; Bullard, James; Spellman, Paul T; Dudoit, Sandrine

    2009-01-06

    Biological studies involve a growing number of distinct high-throughput experiments to characterize samples of interest, yet methods to visualize these different genomic datasets in a versatile manner are lacking. In addition, genomic data analysis requires integrated visualization of experimental data along with constantly changing genomic annotation and statistical analyses. We developed GenomeGraphs, an add-on software package for the statistical programming environment R, to facilitate integrated visualization of genomic datasets. GenomeGraphs uses the biomaRt package to perform on-line annotation queries to Ensembl and translates these into gene/transcript structures in viewports of the grid graphics package. This allows genomic annotation to be plotted together with experimental data. GenomeGraphs can also be used to plot custom annotation tracks in combination with different experimental data types in one plot using the same genomic coordinate system. GenomeGraphs is a flexible and extensible software package for visualizing a multitude of genomic datasets within the statistical programming environment R.

  5. Eye-Hand Synergy and Intermittent Behaviors during Target-Directed Tracking with Visual and Non-visual Information

    PubMed Central

    Huang, Chien-Ting; Hwang, Ing-Shiou

    2012-01-01

    Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
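    As a methodological aside, a pursuit gain of the kind contrasted across tracking conditions can be estimated by projecting the response onto a quadrature pair at the target frequency. The sketch below uses entirely synthetic signals with made-up gain and lag values; it is not the study's analysis code.

    ```python
    import numpy as np

    # Synthetic 0.5 Hz target and a noisy response with known gain and lag.
    f = 0.5
    t = np.arange(0, 10, 0.01)                     # 10 s sampled at 100 Hz
    target = np.sin(2 * np.pi * f * t)
    rng = np.random.default_rng(2)
    response = 0.9 * np.sin(2 * np.pi * f * t - 0.2) + 0.05 * rng.standard_normal(t.size)

    # Project the response onto sine/cosine at the target frequency; the
    # amplitude of that projection relative to the unit target is the gain.
    c = 2 * np.mean(response * np.cos(2 * np.pi * f * t))
    s = 2 * np.mean(response * np.sin(2 * np.pi * f * t))
    gain = np.hypot(c, s)                          # ~0.9, the simulated gain
    phase_lag = np.arctan2(-c, s)                  # ~0.2 rad, the simulated lag
    print(round(float(gain), 2), round(float(phase_lag), 2))
    ```

    Studies of this kind typically report such gains separately for the ocular and manual effectors, alongside intermittency counts such as saccades and speed pulses.
    
    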

  6. Large-area photogrammetry based testing of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul

    2017-03-01

    An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) no need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure the entire surface of utility-scale blades and extract distributed strain, deflection, and modal parameters. This paper addresses some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system in measuring full-field distributed strain and displacement over a large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. Blade certification and testing is typically performed per the International Electrotechnical Commission standard IEC 61400-23. For static tests, the blade is pulled in either the flap-wise or edge-wise direction to measure deflection or distributed strain at a few limited locations of a large-sized blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) in measuring 3D displacement and extracting structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.

  7. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency, but should instead lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
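    The per-stream motion-detection front end that such systems build on can be illustrated with simple background differencing. This is a deliberately minimal sketch on synthetic frames, not the LDRD system's analytics:

    ```python
    import numpy as np

    # Synthetic background and a frame in which a target has appeared.
    background = np.zeros((48, 64))
    frame = background.copy()
    frame[10:20, 30:40] = 1.0                  # bright target region

    # Foreground mask: pixels whose change from the background is large.
    mask = np.abs(frame - background) > 0.5
    ys, xs = np.nonzero(mask)
    bbox = tuple(int(v) for v in (ys.min(), xs.min(), ys.max(), xs.max()))
    print(int(mask.sum()), bbox)               # target area and bounding box
    ```

    In the target-centric approach described above, such per-camera detections would then be associated across views and projected into a common 3D scene, rather than each raising an independent alarm.
    
    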

  8. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but should instead lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  9. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
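    The 3D reconstruction step from a calibrated two-camera setup can be sketched with linear (DLT) triangulation. The projection matrices below are invented for illustration and are not PRIMAS calibration data:

    ```python
    import numpy as np

    # Linear triangulation: each 2D observation x of a 3D point X under a
    # projection matrix P gives two homogeneous constraints; stack them from
    # both cameras and take the SVD null-space vector.
    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                    # dehomogenize

    # Two illustrative cameras: one at the origin, one offset 1 unit along x.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

    X_true = np.array([0.2, -0.1, 4.0])        # a marker position to recover
    x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
    print(np.round(triangulate(P1, P2, x1, x2), 6))
    ```

    A real-time system like the one described performs this per marker per frame after first matching detections across the two camera images.
    
    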

  10. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    PubMed Central

    Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.

    2016-01-01

    Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791

  11. Searching for effective forces in laboratory insect swarms

    NASA Astrophysics Data System (ADS)

    Puckett, James G.; Kelley, Douglas H.; Ouellette, Nicholas T.

    2014-04-01

    Collective animal behaviour is often modeled by systems of agents that interact via effective social forces, including short-range repulsion and long-range attraction. We search for evidence of such effective forces by studying laboratory swarms of the flying midge Chironomus riparius. Using multi-camera stereoimaging and particle-tracking techniques, we record three-dimensional trajectories for all the individuals in the swarm. Acceleration measurements show a clear short-range repulsion, which we confirm by considering the spatial statistics of the midges, but no conclusive long-range interactions. Measurements of the mean free path of the insects also suggest that individuals are on average very weakly coupled, but that they are also tightly bound to the swarm itself. Our results therefore suggest that some attractive interaction maintains cohesion of the swarms, but that this interaction is not as simple as an attraction to nearest neighbours.
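    One ingredient of such an analysis, estimating per-insect acceleration from tracked 3D trajectories, can be done with central finite differences. The sketch below uses a synthetic constant-acceleration trajectory; the frame rate and motion are assumptions, not the study's data.

    ```python
    import numpy as np

    # Central second difference: positions sampled every dt seconds give
    # accelerations at the interior samples.
    def acceleration(traj, dt):
        """traj: (T, 3) positions -> (T-2, 3) accelerations."""
        return (traj[2:] - 2 * traj[1:-1] + traj[:-2]) / dt**2

    dt = 0.01                                   # 100 Hz cameras (illustrative)
    t = np.arange(0, 1, dt)[:, None]
    g = np.array([0.0, 0.0, -9.81])
    traj = 0.5 * g * t**2                       # constant-acceleration test motion
    acc = acceleration(traj, dt)
    print(np.round(acc.mean(axis=0), 3))        # recovers the imposed acceleration
    ```

    In practice, particle-tracking studies smooth the trajectories (e.g. with a convolution kernel) before differentiating, since raw second differences amplify measurement noise; that smoothing is omitted here.
    
    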

  12. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2017-01-01

Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Required data include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments such methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker-based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not yet been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  13. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection.

    PubMed

    Hayhoe, Mary M; Matthis, Jonathan Samir

    2018-08-06

    The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.

  14. Underwater video enhancement using multi-camera super-resolution

    NASA Astrophysics Data System (ADS)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a multi-camera environment that enhances the quality of underwater video sequences without significantly increasing computation. To compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results show that the proposed method enhances the objective quality of several underwater sequences with respect to basic fusion Super-Resolution algorithms, while avoiding the appearance of undesirable artifacts.
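PSNR, one of the two metrics used here, is straightforward to compute from the mean squared error between a reference frame and a processed frame. A minimal sketch (SSIM is more involved; library implementations such as scikit-image's `structural_similarity` are typically used for it):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic example: a flat grey frame versus a noisy copy of it
ref = np.full((64, 64), 128.0)
noisy = ref + np.random.default_rng(1).normal(0, 5, ref.shape)
print(round(psnr(ref, noisy), 1))  # roughly 34 dB for sigma=5 noise
```

Higher PSNR means the processed frame is closer to the reference; super-resolution results are typically reported as PSNR gains over a bicubic or single-frame baseline.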

  15. All eyes on relevance: strategic allocation of attention as a result of feature-based task demands in multiple object tracking.

    PubMed

    Brockhoff, Alisa; Huff, Markus

    2016-10-01

Multiple object tracking (MOT) plays a fundamental role in processing and interpreting dynamic environments. Regarding the type of information utilized by the observer, recent studies reported evidence for the use of object features in an automatic, low-level manner. By introducing a novel paradigm that allowed us to combine tracking with a noninterfering top-down task, we tested whether a voluntary component can regulate the deployment of attention to task-relevant features in a selective manner. In four experiments we found conclusive evidence for a task-driven selection mechanism that guides attention during tracking: The observers were able to ignore or prioritize distinct objects. They marked the distinct (cued) object (target/distractor) more or less often than other objects of the same type (targets/distractors), but only when they had received an identification task that required them to actively process object features (cues) during tracking. These effects are discussed with regard to existing theoretical approaches to attentive tracking, gaze-cue usability, as well as attentional readiness, a term that originally stems from research on attention capture and visual search. Our findings indicate that existing theories of MOT need to be adjusted to allow for flexible top-down, voluntary processing during tracking.

  16. Multilevel analysis of sports video sequences

    NASA Astrophysics Data System (ADS)

    Han, Jungong; Farin, Dirk; de With, Peter H. N.

    2006-01-01

We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as moving-player detection that takes both color and court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game, (2) the moving trajectory and real speed of each player, as well as the relative position between the player and the court, and (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system's efficiency and analysis capabilities.

  17. Development of a Dmt Monitor for Statistical Tracking of Gravitational-Wave Burst Triggers Generated from the Omega Pipeline

    NASA Astrophysics Data System (ADS)

    Li, Jun-Wei; Cao, Jun-Wei

    2010-04-01

One challenge in large-scale scientific data analysis is to monitor data in real-time in a distributed environment. For the LIGO (Laser Interferometer Gravitational-wave Observatory) project, a dedicated suite of data monitoring tools (DMT) has been developed, yielding good extensibility to new data types and high flexibility in a distributed environment. Several services are provided, including visualization of data information in various forms and file output of monitoring results. In this work, a DMT monitor, OmegaMon, is developed for tracking statistics of gravitational-wave (GW) burst triggers that are generated from a specific GW burst data analysis pipeline, the Omega Pipeline. Such results can provide diagnostic information as a reference for trigger post-processing and interferometer maintenance.

  18. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or only narrow overlapping FOVs in many applications, which poses a major challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras' coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
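The optimization step, estimating the rigid transform between two cameras by minimizing residuals with Levenberg–Marquardt, can be illustrated on synthetic 3D point correspondences. This is a toy stand-in for the paper's target-feature-point optimization, using SciPy's LM solver; the points and true transform below are invented:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, pts_a, pts_b):
    """Residuals of transforming pts_a by (rotation vector, translation) vs pts_b."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    return ((pts_a @ R.T + t) - pts_b).ravel()

# Synthetic feature points seen in camera A, and the same points in camera B
rng = np.random.default_rng(2)
pts_a = rng.uniform(-1, 1, size=(20, 3))
R_true = Rotation.from_rotvec([0.1, -0.2, 0.05])
t_true = np.array([0.5, -0.3, 1.0])
pts_b = pts_a @ R_true.as_matrix().T + t_true

# Levenberg-Marquardt refinement from a zero initial guess
sol = least_squares(residuals, x0=np.zeros(6), args=(pts_a, pts_b), method="lm")
print(sol.x[3:])  # recovers the true translation [0.5, -0.3, 1.0]
```

In the paper's setting the residuals would be reprojection errors in pixels rather than 3D point differences, but the solver and parametrization pattern are the same.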

  19. A comparison of visual and kinesthetic-tactual displays for compensatory tracking

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1983-01-01

Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under certain conditions it can be an effective alternative or supplement to visual displays. In order to understand better how KT tracking compares with visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. In the critical tracking task, the visual displays were superior; however, the quickened KT display was approximately equal to the unquickened visual display. In stationary tracking tasks, subjects adopted lag equalization with the quickened KT and visual displays, and mean-squared error scores were approximately equal. With the unquickened displays, subjects adopted lag-lead equalization, and the visual displays were superior. This superiority was partly due to the servomotor lag in the implementation of the KT display and partly due to modality differences.

  20. A comparison of tracking with visual and kinesthetic-tactual displays

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1981-01-01

    Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under appropriate conditions it may be an effective means of providing visual workload relief. In order to better understand how KT tracking differs from visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. On the critical tracking task, the visual displays were superior; however, the KT quickened display was approximately equal to the visual unquickened display. Mean squared error scores in the stationary tracking tasks for the visual and KT displays were approximately equal in the quickened conditions, and the describing functions were very similar. In the unquickened conditions, the visual display was superior. Subjects using the unquickened KT display exhibited a low frequency lead-lag that may be related to sensory adaptation.

  1. Simultaneous measurements of jellyfish bell kinematics and flow fields using PTV and PIV

    NASA Astrophysics Data System (ADS)

    Xu, Nicole; Dabiri, John

    2016-11-01

    A better understanding of jellyfish swimming can potentially improve the energy efficiency of aquatic vehicles or create biomimetic robots for ocean monitoring. Aurelia aurita is a simple oblate invertebrate composed of a flexible bell and coronal muscle, which contracts to eject water from the subumbrellar volume. Jellyfish locomotion can be studied by obtaining body kinematics or by examining the resulting fluid velocity fields using particle image velocimetry (PIV). Typically, swim kinematics are obtained by semi-manually tracking points of interest (POI) along the bell in video post-processing; simultaneous measurements of kinematics and flows involve using this semi-manual tracking method on PIV videos. However, we show that both the kinematics and flow fields can be directly visualized in 3D space by embedding phosphorescent particles in animals free-swimming in seeded environments. Particle tracking velocimetry (PTV) can then be used to calculate bell kinematics, such as pulse frequency, bell deformation, swim trajectories, and propulsive efficiency. By simultaneously tracking POI within the bell and collecting PIV data, we can further study the jellyfish's natural locomotive control mechanisms in conjunction with flow measurements. NSF GRFP.
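Once PTV yields 3D trajectories of points on the bell, kinematic quantities follow from finite differencing. A minimal sketch on a synthetic trajectory (the sinusoidal path below is illustrative, not jellyfish data):

```python
import numpy as np

def kinematics(positions, dt):
    """Central-difference velocity and acceleration from a 3D trajectory.

    positions: (T, 3) tracked point positions; dt: frame interval in seconds.
    """
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    return vel, acc

# Synthetic trajectory: lateral oscillation while rising at 1 unit/s
t = np.arange(0, 1, 0.01)
traj = np.stack([np.sin(2 * np.pi * t), np.zeros_like(t), t], axis=1)
vel, acc = kinematics(traj, dt=0.01)
print(vel[50])  # approx [2*pi*cos(pi), 0, 1] = [-6.28, 0, 1]
```

Pulse frequency can then be read off from peaks of the bell-margin displacement, and swim speed from the magnitude of the body-centroid velocity.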

  2. Acute effects of The Stick on strength, power, and flexibility.

    PubMed

    Mikesky, Alan E; Bahamonde, Rafael E; Stanton, Katie; Alvey, Thurman; Fitton, Tom

    2002-08-01

The Stick is a muscle massage device used by athletes, particularly track athletes, to improve performance. The purpose of this project was to assess the acute effects of The Stick on muscle strength, power, and flexibility. Thirty collegiate athletes consented to participate in a 4-week, double-blind study, which consisted of 4 testing sessions (1 familiarization and 3 data collection) scheduled 1 week apart. During each testing session subjects performed 4 measures in the following sequence: hamstring flexibility, vertical jump, flying-start 20-yard dash, and isokinetic knee extension at 90°·s⁻¹. Two minutes of randomly assigned intervention treatment (visualization [control], mock insensible electrical stimulation [placebo], or massage using The Stick [experimental]) was performed immediately prior to each performance measure. Statistical analyses involved single-factor repeated measures analysis of variance (ANOVA) with Fisher's Least Significant Difference post-hoc test. None of the variables measured showed an acute improvement (p ≤ 0.05) immediately following treatment with The Stick.

  3. Framework and algorithms for illustrative visualizations of time-varying flows on unstructured meshes

    DOE PAGES

    Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; ...

    2016-03-17

Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.

  4. The impact of intelligence on memory and executive functions of children with temporal lobe epilepsy: Methodological concerns with clinical relevance.

    PubMed

    Rzezak, Patricia; Guimarães, Catarina A; Guerreiro, Marilisa M; Valente, Kette D

    2017-05-01

Patients with TLE are prone to have lower IQ scores than healthy controls. Nevertheless, the impact of IQ differences is not usually considered in studies that compare the cognitive functioning of children with and without epilepsy. This study aimed to determine the effect of using IQ as a covariate on memory and attentional/executive functions of children with TLE. Thirty-eight children and adolescents with TLE and 28 healthy controls, matched for age, gender, and sociodemographic factors, were evaluated with a comprehensive neuropsychological battery for memory and executive functions. The authors conducted three analyses to verify the impact of IQ scores on the other cognitive domains. First, we compared performance on cognitive tests without controlling for IQ differences between groups. Second, we performed the same analyses but included IQ as a confounding factor. Finally, we evaluated the predictive value of IQ on cognitive functioning. Although patients had IQ scores in the normal range, they showed lower IQ scores than controls (p = 0.001). When we did not consider IQ in the analyses, patients had worse performance in verbal and visual memory (short- and long-term), semantic memory, sustained, divided and selective attention, mental flexibility, and mental tracking for semantic information. When using IQ as a covariate, patients showed worse performance only in verbal memory (long-term), semantic memory, sustained and divided attention, and mental flexibility. IQ was a predictive factor for verbal and visual memory (immediate and delayed), working memory, mental flexibility, and mental tracking for semantic information. Intelligence level had a significant impact on the memory and executive functioning of children and adolescents with TLE without intellectual disability.
This finding opens the discussion of whether IQ scores should be considered when interpreting differences in cognitive performance between patients with epilepsy and healthy volunteers. Copyright © 2017 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
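The covariate-adjustment approach described above amounts to including IQ as a regressor alongside the group indicator: a spurious group "deficit" driven entirely by an IQ difference then vanishes. A minimal sketch on simulated scores (all numbers are illustrative, not the study's data):

```python
import numpy as np

def group_effect(y, group, covariate=None):
    """OLS estimate of the group coefficient, optionally adjusting for a covariate."""
    cols = [np.ones_like(y), group.astype(float)]
    if covariate is not None:
        cols.append(covariate)
    X = np.stack(cols, axis=1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]   # coefficient on the group indicator

# Simulated confound: patients have lower IQ, and the memory score here
# depends only on IQ, not on group membership itself
rng = np.random.default_rng(3)
n = 2000
group = np.repeat([0, 1], n // 2)            # 0 = control, 1 = patient
iq = rng.normal(100, 15, n) - 8 * group      # patients ~8 IQ points lower
memory = 0.5 * iq + rng.normal(0, 5, n)      # no direct group effect

print(group_effect(memory, group))           # spurious deficit, near -4
print(group_effect(memory, group, iq))       # near 0 once IQ is controlled
```

This is the statistical core of the paper's second analysis; the study itself used a full neuropsychological battery and standard ANCOVA software rather than hand-rolled regression.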

  5. Comparison of different detection methods for persistent multiple hypothesis tracking in wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Hartung, Christine; Spraul, Raphael; Schuchert, Tobias

    2017-10-01

Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework on a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections.
As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
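The detector settings compared here are points on the precision-recall trade-off, with the f-score combining the two into a single number. A minimal sketch with hypothetical detection counts (not figures from the paper):

```python
def detection_scores(tp, fp, fn):
    """Precision, recall and f-score from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Hypothetical counts for one detector setting
p, r, f = detection_scores(tp=80, fp=20, fn=40)
print(p, r, round(f, 3))  # precision 0.8, recall ~0.667, f ~0.727
```

Sweeping a detector threshold trades tp against fp and fn, producing the high-precision, high-recall, and best-f-score operating points the study compares.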

  6. The Flex Track: Flexible Partitioning between Low- and High-Acuity Areas of an Emergency Department

    PubMed Central

    Laker, Lauren F.; Froehle, Craig M.; Lindsell, Christopher J.; Ward, Michael J.

    2014-01-01

Study Objective EDs with both low- and high-acuity treatment areas often have fixed allocation of resources, regardless of demand. We demonstrate the utility of discrete-event simulation to evaluate flexible partitioning between low- and high-acuity ED areas to identify the best operational strategy for subsequent implementation. Methods A discrete-event simulation was used to model patient flow through a 50-bed, urban, teaching ED that handles 85,000 patient visits annually. The ED has historically allocated ten beds to a Fast Track for low-acuity patients. We estimated the effect of a Flex Track policy, which involved switching up to five of these Fast Track beds to serving both low- and high-acuity patients, on patient waiting times. When the high-acuity beds were not at capacity, low-acuity patients were given priority access to flexible beds. Otherwise, high-acuity patients were given priority access to flexible beds. Wait times were estimated for patients by disposition and emergency severity index (ESI) score. Results A Flex Track policy using three flexible beds produced the lowest mean patient wait time of 30.9 (95% CI 30.6–31.2) minutes. The typical Fast Track approach of rigidly separating high- and low-acuity beds produced a mean patient wait time of 40.6 (95% CI 40.2–50.0) minutes, 31% higher than the three-bed Flex Track. A completely flexible ED, where all beds can accommodate any patient, produced mean wait times of 35.1 (95% CI 34.8–35.4) minutes. The results from the three-bed Flex Track scenario were robust, performing well across a range of scenarios involving higher and lower patient volumes and care durations. Conclusion Using discrete-event simulation, we have shown that adding some flexibility into bed allocation between low- and high-acuity areas can provide substantial reductions in overall patient wait times and a more efficient ED. PMID:24954578
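The paper's discrete-event simulation is far richer (two acuity classes and priority rules for the flexible beds), but its core mechanism, sampling arrivals and service times and tracking when beds free up, can be sketched as a simple multi-server queue. All parameters below are illustrative, not the study's:

```python
import heapq
import random

def mmc_mean_wait(arrival_rate, service_rate, servers, n_customers=50_000, seed=0):
    """Event-driven simulation of an M/M/c queue; returns mean wait in queue.

    A toy stand-in for the ED bed-allocation model: each 'server' is a bed,
    arrivals are Poisson, and service (treatment) times are exponential.
    """
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * servers            # time at which each bed next frees up
    heapq.heapify(free_at)
    total_wait = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)           # next patient arrival
        bed_free = heapq.heappop(free_at)            # earliest-available bed
        start = max(t, bed_free)                     # wait if all beds busy
        total_wait += start - t
        heapq.heappush(free_at, start + rng.expovariate(service_rate))
    return total_wait / n_customers

# Illustrative run: 10 beds at utilization 5 / (10 * 0.6) ~= 0.83
print(mmc_mean_wait(arrival_rate=5.0, service_rate=0.6, servers=10))
```

Extending this sketch toward the Flex Track model means keeping separate queues per acuity class and letting the dispatch rule for the flexible beds depend on high-acuity occupancy, which is exactly the policy lever the study varies.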

  7. Experimental and Theoretical Results in Output Trajectory Redesign for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Dewey, J. S.; Leang, K.; Devasia, S.

    1998-01-01

In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures because the input-state trajectories that achieve tracking of the required output may cause excessive vibrations in the structure. We pose and solve this problem, in the context of linear systems, as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.

  8. XpertTrack: Precision Autonomous Measuring Device Developed for Real Time Shipments Tracker

    PubMed Central

    Viman, Liviu; Daraban, Mihai; Fizesan, Raul; Iuonas, Mircea

    2016-01-01

This paper proposes a software and hardware solution for real-time condition monitoring applications. The proposed device, called XpertTrack, exchanges data through the GPRS protocol over a GSM network and monitors the temperature and vibration of critical merchandise during commercial shipments anywhere on the globe. Another feature of this real-time tracker is to provide GPS and GSM positioning with a precision of 10 m or less. In order to interpret the condition of the merchandise, the data acquisition, analysis and visualization are done with 0.1 °C accuracy for the temperature sensor, and 10 levels of shock sensitivity for the acceleration sensor. In addition to this, the architecture allows increasing the number and the types of sensors, so that companies can use this flexible solution to monitor a large percentage of their fleet. PMID:26978360

  9. XpertTrack: Precision Autonomous Measuring Device Developed for Real Time Shipments Tracker.

    PubMed

    Viman, Liviu; Daraban, Mihai; Fizesan, Raul; Iuonas, Mircea

    2016-03-10

This paper proposes a software and hardware solution for real-time condition monitoring applications. The proposed device, called XpertTrack, exchanges data through the GPRS protocol over a GSM network and monitors the temperature and vibration of critical merchandise during commercial shipments anywhere on the globe. Another feature of this real-time tracker is to provide GPS and GSM positioning with a precision of 10 m or less. In order to interpret the condition of the merchandise, the data acquisition, analysis and visualization are done with 0.1 °C accuracy for the temperature sensor, and 10 levels of shock sensitivity for the acceleration sensor. In addition to this, the architecture allows increasing the number and the types of sensors, so that companies can use this flexible solution to monitor a large percentage of their fleet.

  10. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is intra-camera geometry estimation, which yields estimates of the tilt angle, focal length and camera height, important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which yields an estimate of the distance between cameras, important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
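The pixel-to-meter conversion that the estimated tilt angle, focal length and camera height enable is a ground-plane back-projection. A minimal pinhole-model sketch (the camera parameters below are illustrative, and the paper's own model may differ in conventions):

```python
import numpy as np

def pixel_to_ground(u, v, f, tilt, height, cx=0.0, cy=0.0):
    """Back-project pixel (u, v) onto the ground plane.

    Pinhole camera at `height` metres above the ground, pitched down by
    `tilt` radians; `f` is the focal length in pixels; (cx, cy) is the
    principal point. Returns (x, y) ground coordinates in metres,
    x to the right and y away from the camera.
    """
    du, dv = u - cx, v - cy                  # offsets from principal point
    s, c = np.sin(tilt), np.cos(tilt)
    # Viewing ray rotated from camera coordinates into world coordinates
    ray = np.array([du, -dv * s + f * c, -dv * c - f * s])
    if ray[2] >= 0:
        raise ValueError("pixel ray does not hit the ground")
    t = height / -ray[2]                     # scale at which the ray reaches z=0
    return t * ray[0], t * ray[1]

# The principal point of a camera 4 m up, tilted 30 degrees down, lands
# at height / tan(tilt) ~= 6.93 m in front of the camera
print(pixel_to_ground(0.0, 0.0, f=1000.0, tilt=np.radians(30), height=4.0))
```

Given pedestrian foot positions in two such calibrated cameras plus transition-time statistics, the inter-camera distance can then be inferred in metres rather than pixels.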

  11. Integration of car-body flexibility into train-track coupling system dynamics analysis

    NASA Astrophysics Data System (ADS)

    Ling, Liang; Zhang, Qing; Xiao, Xinbiao; Wen, Zefeng; Jin, Xuesong

    2018-04-01

The resonance vibration of flexible car-bodies greatly affects the dynamics performances of high-speed trains. In this paper, we report a three-dimensional train-track model to capture the flexible vibration features of high-speed train carriages based on the flexible multi-body dynamics approach. The flexible car-body is modelled using both the finite element method (FEM) and the multi-body dynamics (MBD) approach, in which the rigid motions are obtained by using the MBD theory and the structure deformation is calculated by the FEM and the modal superposition method. The proposed model is applied to investigate the influence of the flexible vibration of car-bodies on the dynamics performances of train-track systems. The dynamics performances of a high-speed train running on a slab track, including the car-body vibration behaviour, the ride comfort, and the running safety, calculated by the numerical models with rigid and flexible car-bodies are compared in detail. The results show that the car-body flexibility not only significantly affects the vibration behaviour and ride comfort of rail carriages, but can also have an important influence on the running safety of trains. The rigid car-body model underestimates the vibration level and ride comfort of rail vehicles, and ignoring carriage torsional flexibility in the curving safety evaluation of trains is conservative.
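The modal superposition step mentioned here, recovering the physical deformation field from a handful of modal coordinates, is a single matrix-vector product. A minimal sketch with two illustrative beam modes (not real car-body data):

```python
import numpy as np

def modal_superposition(mode_shapes, modal_coords):
    """Physical deformation as a weighted sum of mode shapes.

    mode_shapes: (n_modes, n_nodes) matrix of structural mode shapes
    modal_coords: (n_modes,) modal coordinates q_i(t) at one instant
    """
    return modal_coords @ mode_shapes

# Two illustrative bending modes of a pinned 1-D beam, sampled at 5 nodes
x = np.linspace(0, 1, 5)
modes = np.array([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
u = modal_superposition(modes, np.array([0.01, 0.002]))
print(u)
```

In the hybrid FEM/MBD model, the modal coordinates are additional state variables integrated alongside the rigid-body motions, so the full car-body motion is rigid motion plus this superposed elastic deformation.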

  12. A comparison of kinesthetic-tactual and visual displays via a critical tracking task. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.

    1979-01-01

The feasibility of using the critical tracking task to evaluate kinesthetic-tactual displays was examined. The test subjects were asked to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. The results indicate that the critical tracking task is both a feasible and a reliable methodology for assessing tactual tracking. Furthermore, the approximately equal effects of quickening for the tactual and visual displays demonstrate that the critical tracking methodology is as sensitive and valid a measure of tactual tracking as it is of visual tracking.
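The critical tracking task itself is easy to model: a first-order unstable plant whose divergence rate is slowly increased until the controller can no longer stabilize it, with the instability level at loss of control serving as the score. A toy sketch in which the operator is crudely modeled as delayed proportional feedback; all dynamics and operator parameters are hypothetical:

```python
def critical_tracking(delay_steps=20, gain=1.5, dt=0.01, growth=0.05):
    """Toy critical tracking task: first-order unstable plant x' = lam*x - u,
    with lam growing slowly and a delayed proportional-feedback 'operator'.
    Returns the value of lam at which control is lost (|x| exceeds a bound).
    """
    x, lam = 0.01, 0.1
    history = [x] * (delay_steps + 1)      # buffer of past plant states
    while abs(x) < 10.0 and lam < 20.0:
        u = gain * history[-delay_steps - 1]   # operator reacts with a lag
        x += dt * (lam * x - u)                # Euler step of the plant
        history.append(x)
        lam += dt * growth                     # task gets steadily harder
    return lam

print(critical_tracking())
```

In the experiments, the operator's reaction lag and effective gain differ between visual and tactual displays, which is what makes the critical instability level a sensitive comparison measure.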

  13. Complex versus simple ankle movement training in stroke using telerehabilitation: a randomized controlled trial.

    PubMed

    Deng, Huiqiong; Durfee, William K; Nuckley, David J; Rheude, Brandon S; Severson, Amy E; Skluzacek, Katie M; Spindler, Kristen K; Davey, Cynthia S; Carey, James R

    2012-02-01

    Telerehabilitation allows rehabilitative training to continue remotely after discharge from acute care and can include complex tasks known to create rich conditions for neural change. The purposes of this study were: (1) to explore the feasibility of using telerehabilitation to improve ankle dorsiflexion during the swing phase of gait in people with stroke and (2) to compare complex versus simple movements of the ankle in promoting behavioral change and brain reorganization. This study was a pilot randomized controlled trial. Training was done in the participant's home. Testing was done in separate research labs involving functional magnetic resonance imaging (fMRI) and multi-camera gait analysis. Sixteen participants with chronic stroke and impaired ankle dorsiflexion were assigned randomly to receive 4 weeks of telerehabilitation of the paretic ankle. Participants received either computerized complex movement training (track group) or simple movement training (move group). Behavioral changes were measured with the 10-m walk test and gait analysis using a motion capture system. Brain reorganization was measured with ankle tracking during fMRI. Dorsiflexion during gait was significantly larger in the track group compared with the move group. For fMRI, although the volume, percent volume, and intensity of cortical activation failed to show significant changes, the frequency count of the number of participants showing an increase versus a decrease in these values from pretest to posttest measurements was significantly different between the 2 groups, with the track group decreasing and the move group increasing. Limitations of this study were that no follow-up test was conducted and that a small sample size was used. The results suggest that telerehabilitation, emphasizing complex task training with the paretic limb, is feasible and can be effective in promoting further dorsiflexion in people with chronic stroke.

  14. Experimental and Theoretical Results in Output-Trajectory Redesign for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Dewey, J. S.; Devasia, Santosh

    1996-01-01

In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures, because the input-state trajectories that achieve the required output may cause excessive vibrations in the structure. A trade-off is then required between tracking and vibration reduction. We pose and solve this problem as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.

  15. Whole shaft visibility and mechanical performance for active MR catheters using copper-nitinol braided polymer tubes.

    PubMed

    Kocaturk, Ozgur; Saikus, Christina E; Guttman, Michael A; Faranesh, Anthony Z; Ratnayaka, Kanishka; Ozturk, Cengizhan; McVeigh, Elliot R; Lederman, Robert J

    2009-08-12

Catheter visualization and tracking remains a challenge in interventional MR. Active guidewires can be made conspicuous in "profile" along their whole shaft by exploiting the metallic core wire and hypotube components that are intrinsic to their mechanical performance. Polymer-based catheters, on the other hand, offer no conductive medium to carry radio frequency waves. We developed a new "active" catheter design for interventional MR with mechanical performance resembling braided X-ray devices. Our 75 cm long hybrid catheter shaft incorporates a wire lattice in a polymer matrix, and contains three distal loop coils in a flexible and torquable 7Fr device. We explored the impact of braid material designs on radiofrequency and mechanical performance. The incorporation of copper wire into a superelastic nitinol braided loopless antenna allowed good visualization of the whole shaft (70 cm) in vitro and in vivo in swine during real-time MR with a 1.5 T scanner. Additional distal tip coils enhanced tip visibility. Increasing the copper:nitinol ratio in braiding configurations improved flexibility at the expense of torquability. We found a 16-wire braid of 1:1 copper:nitinol to have the optimum balance of mechanical (trackability, flexibility, torquability) and antenna (signal attenuation) properties. With this configuration, the temperature increase remained less than 2 degrees C during real-time MR within 10 cm horizontally from the isocenter. The design was conspicuous in vitro and in vivo. We have engineered a new loopless antenna configuration that imparts interventional MR catheters with satisfactory mechanical and imaging characteristics. This compact loopless antenna design can be generalized to visualize the whole shaft of any general-purpose polymer catheter to perform safe interventional procedures.

  16. Whole shaft visibility and mechanical performance for active MR catheters using copper-nitinol braided polymer tubes

    PubMed Central

    Kocaturk, Ozgur; Saikus, Christina E; Guttman, Michael A; Faranesh, Anthony Z; Ratnayaka, Kanishka; Ozturk, Cengizhan; McVeigh, Elliot R; Lederman, Robert J

    2009-01-01

Background Catheter visualization and tracking remains a challenge in interventional MR. Active guidewires can be made conspicuous in "profile" along their whole shaft by exploiting the metallic core wire and hypotube components that are intrinsic to their mechanical performance. Polymer-based catheters, on the other hand, offer no conductive medium to carry radio frequency waves. We developed a new "active" catheter design for interventional MR with mechanical performance resembling braided X-ray devices. Our 75 cm long hybrid catheter shaft incorporates a wire lattice in a polymer matrix, and contains three distal loop coils in a flexible and torquable 7Fr device. We explored the impact of braid material designs on radiofrequency and mechanical performance. Results The incorporation of copper wire into a superelastic nitinol braided loopless antenna allowed good visualization of the whole shaft (70 cm) in vitro and in vivo in swine during real-time MR with a 1.5 T scanner. Additional distal tip coils enhanced tip visibility. Increasing the copper:nitinol ratio in braiding configurations improved flexibility at the expense of torquability. We found a 16-wire braid of 1:1 copper:nitinol to have the optimum balance of mechanical (trackability, flexibility, torquability) and antenna (signal attenuation) properties. With this configuration, the temperature increase remained less than 2°C during real-time MR within 10 cm horizontally from the isocenter. The design was conspicuous in vitro and in vivo. Conclusion We have engineered a new loopless antenna configuration that imparts interventional MR catheters with satisfactory mechanical and imaging characteristics. This compact loopless antenna design can be generalized to visualize the whole shaft of any general-purpose polymer catheter to perform safe interventional procedures. PMID:19674464

  17. Characterization of a multi-user indoor positioning system based on low cost depth vision (Kinect) for monitoring human activity in a smart home.

    PubMed

    Sevrin, Loïc; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques

    2015-01-01

An increasing number of systems use indoor positioning for scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available, and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion, and to specify a global positioning projection that maintains compatibility with outdoor positioning systems. The monitoring of people's trajectories at home is intended for the early detection of a shift in daily activities, which highlights disabilities and loss of autonomy. This system is meant to improve homecare health management for a better end of life at a sustainable cost to the community.

  18. MISR CMVs and Multiangular Views of Tropical Cyclone Inner-Core Dynamics

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Diner, David J.; Garay, Michael J; Jovanovic, Veljko M.; Lee, Jae N.; Moroney, Catherine M.; Mueller, Kevin J.; Nelson, David L.

    2010-01-01

Multi-camera stereo imaging of cloud features from the MISR (Multiangle Imaging SpectroRadiometer) instrument on NASA's Terra satellite provides accurate and precise measurements of cloud top heights (CTH) and cloud motion vector (CMV) winds. MISR observes each cloudy scene from nine viewing angles (nadir, ±26°, ±46°, ±60°, ±70°) with approximately 275-m pixel resolution. This paper provides an update on MISR CMV and CTH algorithm improvements, and explores a high-resolution retrieval of tangential winds inside the eyewall of tropical cyclones (TC). The MISR CMV and CTH retrievals from the updated algorithm are significantly improved in terms of spatial coverage and systematic errors. A new product, the 1.1-km cross-track wind, provides high accuracy and precision in measuring convective outflows. Preliminary results obtained from the 1.1-km tangential wind retrieval inside the TC eyewall show that the inner-core rotation is often faster near the eyewall, and this faster rotation appears to be related linearly to cyclone intensity.

  19. Automated Reconstruction of Three-Dimensional Fish Motion, Forces, and Torques

    PubMed Central

    Voesenek, Cees J.; Pieters, Remco P. M.; van Leeuwen, Johan L.

    2016-01-01

Fish can move freely through the water column and make complex three-dimensional motions to explore their environment, escape or feed. Nevertheless, the majority of swimming studies are currently limited to two-dimensional analyses. Accurate experimental quantification of changes in body shape, position and orientation (swimming kinematics) in three dimensions is therefore essential to advance biomechanical research of fish swimming. Here, we present a validated method that automatically tracks a swimming fish in three dimensions from multi-camera high-speed video. We use an optimisation procedure to fit a parameterised, morphology-based fish model to each set of video images. This results in a time sequence of position, orientation and body curvature. We post-process these data to derive additional kinematic parameters (e.g. velocities, accelerations) and propose an inverse-dynamics method to compute the resultant forces and torques during swimming. The presented method for quantifying 3D fish motion paves the way for future analyses of swimming biomechanics. PMID:26752597
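The post-processing step mentioned in the abstract (deriving velocities and accelerations from tracked positions) can be sketched with plain numerical differentiation. This is a minimal illustration under our own simplifications, not the authors' pipeline, which differentiates the smoothed, fitted model states:

```python
import numpy as np

def derive_kinematics(pos, dt):
    """Central-difference velocities and accelerations from a tracked
    position time series `pos` of shape (T, 3) sampled every `dt` seconds."""
    vel = np.gradient(pos, dt, axis=0)   # first derivative of position
    acc = np.gradient(vel, dt, axis=0)   # second derivative of position
    return vel, acc
```

In practice tracked positions are noisy, so a smoothing filter would normally precede the differentiation.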

  20. Real-time vehicle matching for multi-camera tunnel surveillance

    NASA Astrophysics Data System (ADS)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
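The signature idea can be illustrated with the two simplest projection profiles (row and column sums, i.e. the 0° and 90° Radon views). This is a hedged sketch with hypothetical function names, not the authors' implementation, which must also cope with differing vehicle scales and viewpoints between cameras:

```python
import numpy as np

def signature(img):
    """Compact vehicle signature from a grayscale image: concatenate the
    row and column projection profiles (sums along each axis), then
    normalize so the comparison is insensitive to overall brightness."""
    img = np.asarray(img, dtype=float)
    rows = img.sum(axis=1)               # horizontal (0-degree) projection
    cols = img.sum(axis=0)               # vertical (90-degree) projection
    sig = np.concatenate([rows, cols])
    return sig / (np.linalg.norm(sig) + 1e-12)

def match_score(sig_a, sig_b):
    """Cosine similarity between two equal-length signatures."""
    return float(np.dot(sig_a, sig_b))
```

Two views of the same vehicle yield a score near 1; the central server only needs the short signature vectors, not the images.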

  1. The flex track: flexible partitioning between low- and high-acuity areas of an emergency department.

    PubMed

    Laker, Lauren F; Froehle, Craig M; Lindsell, Christopher J; Ward, Michael J

    2014-12-01

Emergency departments (EDs) with both low- and high-acuity treatment areas often have fixed allocation of resources, regardless of demand. We demonstrate the utility of discrete-event simulation to evaluate flexible partitioning between low- and high-acuity ED areas to identify the best operational strategy for subsequent implementation. A discrete-event simulation was used to model patient flow through a 50-bed, urban, teaching ED that handles 85,000 patient visits annually. The ED has historically allocated 10 beds to a fast track for low-acuity patients. We estimated the effect of a flex track policy, which involved switching up to 5 of these fast track beds to serving both low- and high-acuity patients, on patient waiting times. When the high-acuity beds were not at capacity, low-acuity patients were given priority access to flexible beds. Otherwise, high-acuity patients were given priority access to flexible beds. Wait times were estimated for patients by disposition and Emergency Severity Index score. A flex track policy using 3 flexible beds produced the lowest mean patient waiting time of 30.9 minutes (95% confidence interval [CI] 30.6 to 31.2 minutes). The typical fast track approach of rigidly separating high- and low-acuity beds produced a mean patient wait time of 40.6 minutes (95% CI 40.2 to 50.0 minutes), 31% higher than that of the 3-bed flex track. A completely flexible ED, in which all beds can accommodate any patient, produced mean wait times of 35.1 minutes (95% CI 34.8 to 35.4 minutes). The results from the 3-bed flex track scenario were robust, performing well across a range of scenarios involving higher and lower patient volumes and care durations. Using discrete-event simulation, we have shown that adding some flexibility into bed allocation between low and high acuity can provide substantial reductions in overall patient waiting and a more efficient ED.
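The kind of model the paper describes can be sketched as a small discrete-event simulation. Everything below (arrival rates, lengths of stay, bed counts, the exact priority rule) is an illustrative assumption of ours, not the paper's calibrated 50-bed model:

```python
import heapq
import random

def simulate_flex_track(n_flex=3, n_low=7, n_high=40,
                        horizon=20_000.0, seed=7):
    """Toy discrete-event simulation of an ED with dedicated low- and
    high-acuity beds plus `n_flex` flexible beds usable by either class.
    Returns the mean waiting time (minutes) over all admitted patients."""
    rng = random.Random(seed)
    free = {"low": n_low, "high": n_high, "flex": n_flex}
    waiting = {"low": [], "high": []}          # FIFO queues of arrival times
    waits = []
    mean_iat = {"low": 12.0, "high": 8.0}      # inter-arrival times (min)
    mean_los = {"low": 60.0, "high": 240.0}    # lengths of stay (min)

    events = [(rng.expovariate(1 / mean_iat[a]), "arrive", a)
              for a in ("low", "high")]
    heapq.heapify(events)

    def admit(acuity, now):
        # try the dedicated pool first, then a flexible bed
        for pool in (acuity, "flex"):
            if free[pool] > 0 and waiting[acuity]:
                free[pool] -= 1
                waits.append(now - waiting[acuity].pop(0))
                dur = rng.expovariate(1 / mean_los[acuity])
                heapq.heappush(events, (now + dur, "depart", pool))
                return

    while events:
        t, kind, tag = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrive":
            waiting[tag].append(t)
            admit(tag, t)
            heapq.heappush(
                events, (t + rng.expovariate(1 / mean_iat[tag]), "arrive", tag))
        else:  # a departure frees bed pool `tag`; admit whoever can use it
            free[tag] += 1
            for acuity in ("low", "high") if tag == "flex" else (tag,):
                admit(acuity, t)
    return sum(waits) / len(waits)
```

Sweeping `n_flex` while holding the total bed count fixed is how a study like this would compare rigid, partially flexible, and fully flexible policies.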

  2. Vision-based sensing for autonomous in-flight refueling

    NASA Astrophysics Data System (ADS)

    Scott, D.; Toal, M.; Dale, J.

    2007-04-01

A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long endurance missions is that they must land to refuel. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at the limit of its accuracy, and disturbance factors acting on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is too great to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the state of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout a widely varying operating range and conditions.

  3. Visual attention is required for multiple object tracking.

    PubMed

    Tran, Annie; Hoffman, James E

    2016-12-01

In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors, suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there, but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects.

  4. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-08-25

Manual surface defect detection and dimension measurement of automotive bevel gears are costly, inefficient, slow and inaccurate. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.
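The abstract names its algorithms without defining them; one plausible reading of Neighborhood Average Difference (our assumption, not the authors' published formulation) is to flag pixels that deviate strongly from their local mean:

```python
import numpy as np

def nad_defect_map(img, win=5, thresh=30.0):
    """Mark pixels whose gray value differs from the mean of their
    win x win neighborhood by more than thresh (edge-padded borders)."""
    pad = win // 2
    f = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    h, w = np.asarray(img).shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            neigh = f[i:i + win, j:j + win]  # window centred on (i, j)
            out[i, j] = abs(f[i + pad, j + pad] - neigh.mean()) > thresh
    return out
```

A production system would vectorize the local mean (e.g. with an integral image); the double loop keeps the sketch readable.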

  5. Multi-camera digital image correlation method with distributed fields of view

    NASA Astrophysics Data System (ADS)

    Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata

    2017-11-01

A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between the local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects performed from multiple directions, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
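The stitching step (mapping each local Stereo-DIC coordinate frame into the global laser-tracker frame via shared fiducial markers) reduces to estimating a rigid transform from point correspondences. Below is a minimal sketch using the Kabsch algorithm, assuming at least three non-collinear markers; the paper's full calibration chain is more involved:

```python
import numpy as np

def rigid_transform(local_pts, global_pts):
    """Least-squares rigid transform (R, t) mapping local -> global,
    estimated from fiducial markers seen in both frames (Kabsch)."""
    P = np.asarray(local_pts, dtype=float)   # N x 3, local frame
    Q = np.asarray(global_pts, dtype=float)  # N x 3, global frame
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp
```

A point `p` measured in the local DIC frame then maps to `R @ p + t` in the global laser-tracker frame.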

  6. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    NASA Astrophysics Data System (ADS)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  7. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  8. Homography-based multiple-camera person-tracking

    NASA Astrophysics Data System (ADS)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. 
Aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.
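Once foot-point pairs on the ground plane have been collected, the plane-induced homography itself can be estimated with the standard direct linear transform (DLT). The sketch below is generic textbook machinery under our own naming, not the paper's full system, which also specifies when the homography is trusted and checks for degeneracy:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 plane-induced homography H (dst ~ H @ src, up to
    scale) from >= 4 corresponding ground-plane points via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of the DLT system
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)     # null-space vector = homography entries
    return H / H[2, 2]

def project(H, pt):
    """Apply homography H to a 2D point."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With noisy, automatically "dropped" point pairs, a robust estimator (e.g. RANSAC around this DLT core) would be used instead of a single least-squares fit.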

  9. Dynamic modelling and adaptive robust tracking control of a space robot with two-link flexible manipulators under unknown disturbances

    NASA Astrophysics Data System (ADS)

    Yang, Xinxin; Ge, Shuzhi Sam; He, Wei

    2018-04-01

In this paper, both the closed-form dynamics and an adaptive robust tracking control of a space robot with two-link flexible manipulators under unknown disturbances are developed. The dynamic model of the system is derived with the assumed modes approach and the Lagrangian method. The flexible manipulators are represented as Euler-Bernoulli beams. Based on the singular perturbation technique, the displacements/joint angles and flexible modes are modelled as slow and fast variables, respectively. A sliding mode controller is designed for trajectory tracking of the slow subsystem under unknown but bounded disturbances, and an adaptive sliding mode controller is derived for the slow subsystem under unknown slowly time-varying disturbances. An optimal linear quadratic regulator method is proposed for the fast subsystem to damp out the vibrations of the flexible manipulators. Theoretical analysis validates the stability of the proposed composite controller. Numerical simulation results demonstrate the performance of the closed-loop flexible space robot system.
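The slow-subsystem idea (sliding mode tracking in the presence of a bounded disturbance) can be illustrated on the simplest possible plant, a double integrator. This is a textbook sketch with parameters we chose for the demo, not the paper's flexible-robot controller:

```python
import math

def simulate_smc(lam=5.0, D=1.0, eta=0.5, dt=1e-3, T=10.0):
    """Sliding mode tracking of x'' = u + d(t), |d| <= D, following
    x_d(t) = sin(t).  Sliding surface s = e' + lam*e; the switching gain
    D + eta dominates the disturbance bound.  Returns the max tracking
    error after the transient."""
    x, v = 0.0, 0.0
    errs = []
    for k in range(int(T / dt)):
        t = k * dt
        xd, vd, ad = math.sin(t), math.cos(t), -math.sin(t)
        e, edot = x - xd, v - vd
        s = edot + lam * e
        # equivalent control + discontinuous switching term
        u = ad - lam * edot - (D + eta) * math.copysign(1.0, s)
        d = D * math.sin(7.0 * t)          # unknown bounded disturbance
        v += (u + d) * dt                  # explicit Euler integration
        x += v * dt
        if t > 5.0:                        # measure after the reaching phase
            errs.append(abs(e))
    return max(errs)
```

The sign term causes chattering, which is why practical designs (including boundary layers or, as in the paper, adaptive gains) smooth the switching.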

  10. Multivariable manual control with simultaneous visual and auditory presentation of information. [for improved compensatory tracking performance of human operator

    NASA Technical Reports Server (NTRS)

    Uhlemann, H.; Geiser, G.

    1975-01-01

Multivariable manual compensatory tracking experiments were carried out in order to determine typical strategies of the human operator and the conditions under which his performance improves when one of the visual displays of the tracking errors is supplemented by auditory feedback. Because the tracking error of the exclusively visually displayed system was found to decrease, but not in general that of the auditorily supported system, it was concluded that the auditory feedback unloads the operator's visual system, which can then concentrate on the remaining exclusively visual displays.

  11. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology

    PubMed Central

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

Manual surface defect detection and dimension measurement of automotive bevel gears are costly, inefficient, slow and inaccurate. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40–50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078

  12. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    ERIC Educational Resources Information Center

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  13. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.
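The role of the 0-1 knapsack can be sketched generically: treat each base classifier as an item whose value is its accuracy and whose weight is its redundancy with the pool, then pick the best subset under a redundancy budget. The value/weight definitions here are our illustrative assumptions; the paper's tailored formulation differs:

```python
def select_classifiers(acc, redun, budget, scale=100):
    """0-1 knapsack selection of base classifiers: maximize summed
    accuracy acc[i] subject to a total redundancy budget (sum of
    redun[i], each a positive value in (0, 1]).  Weights are discretized
    to integers for the dynamic program."""
    w = [round(r * scale) for r in redun]
    cap = round(budget * scale)
    # best[c] = (best value achievable with weight <= c, chosen item set)
    best = [(0.0, frozenset())] * (cap + 1)
    for i, (a, wi) in enumerate(zip(acc, w)):
        for c in range(cap, wi - 1, -1):   # descending: each item used once
            cand = best[c - wi][0] + a
            if cand > best[c][0]:
                best[c] = (cand, best[c - wi][1] | {i})
    return sorted(best[cap][1])
```

Shrinking the budget forces the selection toward less redundant (more diverse) classifiers at some cost in raw accuracy, which is exactly the diversity/accuracy dilemma the abstract refers to.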

  14. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    PubMed Central

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems. PMID:25494350

  15. VirtoScan - a mobile, low-cost photogrammetry setup for fast post-mortem 3D full-body documentations in x-ray computed tomography and autopsy suites.

    PubMed

    Kottner, Sören; Ebert, Lars C; Ampanozi, Garyfalia; Braun, Marcel; Thali, Michael J; Gascho, Dominic

    2017-03-01

Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile, multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables and examination couches. The VirtoScan is based on a moveable frame that carries 7 digital single-lens reflex cameras. A remote control attached to each camera allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedure took 3:34 min for the 3D surface documentation of one side of the body, compared to 20:20 min of acquisition time when using our in-house standard. A surface model comparison between the high-resolution output of our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentation in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.

  16. 78 FR 12825 - Petition for Extension of Waiver of Compliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-25

    ... the frequency of the required visual track inspections. FRA issued the initial waiver that granted.... SEPTA requests an extension of approval to reduce the frequency of required, visual track inspections... with continuous welded rail. SEPTA proposes to conduct one visual track inspection per week, instead of...

  17. Technical Report of Successful Deployment of Tandem Visual Tracking During Live Laparoscopic Cholecystectomy Between Novice and Expert Surgeon.

    PubMed

    Puckett, Yana; Baronia, Benedicto C

    2016-09-20

    With the recent advances in eye tracking technology, it is now possible to track surgeons' eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movement utilizing simulators and live surgery. None have evaluated simultaneous visual tracking between an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between an expert surgeon and a chief surgical resident. The surgery was carried out uneventfully, and the data was abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is feasible. Further studies with more subjects are needed to confirm these results and support formal data analysis.

  18. Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.

    PubMed

    Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E

    2013-08-01

    Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar as for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Evaluation of kinesthetic-tactual displays using a critical tracking task

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.; Ault, R. T.

    1977-01-01

    The study sought to investigate the feasibility of applying the critical tracking task paradigm to the evaluation of kinesthetic-tactual displays. Four subjects attempted to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. Display aiding was introduced in both modalities in the form of velocity quickening. Visual tracking performance was better than tactual tracking, and velocity aiding improved the critical tracking scores for visual and tactual tracking about equally. The results suggest that the critical task methodology holds considerable promise for evaluating kinesthetic-tactual displays.

  20. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
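
    As a concrete (and deliberately minimal) example of "encoding color", one of the simplest chromatic models is a coarse hue histogram of the target patch, computable with the standard-library colorsys module; the 10 chromatic models evaluated in the paper are more varied than this sketch.

```python
import colorsys

# Sketch: replace raw grayscale intensities with a coarse HSV hue histogram of
# the target patch -- one simple way a tracker can "encode color". The patch is
# an iterable of (r, g, b) triples in 0..255.

def hsv_histogram(patch, bins=4):
    """Return a normalized hue histogram of an RGB patch."""
    hist = [0] * bins
    for r, g, b in patch:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hist[min(int(h * bins), bins - 1)] += 1  # hue h is in [0, 1)
    total = sum(hist)
    return [c / total for c in hist]

red_patch = [(255, 0, 0)] * 8      # a uniformly red patch
feat = hsv_histogram(red_patch)    # all mass lands in the first hue bin
```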

  1. Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials.

    PubMed

    Amsel, Ben D

    2011-04-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the timecourse and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex are a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. EMG-based visual-haptic biofeedback: a tool to improve motor control in children with primary dystonia.

    PubMed

    Casellato, Claudia; Pedrocchi, Alessandra; Zorzi, Giovanna; Vernisse, Lea; Ferrigno, Giancarlo; Nardocci, Nardo

    2013-05-01

    New insights suggest that dystonic motor impairments could also involve a deficit of sensory processing. In this framework, biofeedback, making covert physiological processes more overt, could be useful. The present work proposes an innovative integrated setup which provides the user with an electromyogram (EMG)-based visual-haptic biofeedback during upper limb movements (spiral tracking tasks), to test if augmented sensory feedbacks can induce motor control improvement in patients with primary dystonia. The ad hoc developed real-time control algorithm synchronizes the haptic loop with the EMG reading; the brachioradialis EMG values were used to modify visual and haptic features of the interface: the higher was the EMG level, the higher was the virtual table friction and the background color proportionally moved from green to red. From recordings on dystonic and healthy subjects, statistical results showed that biofeedback has a significant impact, correlated with the local impairment, on the dystonic muscular control. These tests pointed out the effectiveness of biofeedback paradigms in gaining a better specific-muscle voluntary motor control. The flexible tool developed here shows promising prospects of clinical applications and sensorimotor rehabilitation.
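
    The EMG-to-feedback mapping described above can be sketched as follows; the linear form and the gain constants are assumptions, since the abstract only states that friction and the green-to-red shift grow proportionally with the EMG level.

```python
# Sketch of the biofeedback mapping (assumed linear forms; the paper's exact
# gains are not given): a normalized EMG envelope value in [0, 1] drives both
# the virtual table friction and a green-to-red background color.

def emg_to_feedback(emg, f_min=0.1, f_max=1.0):
    """Return (friction, (r, g, b)) for a normalized EMG envelope value."""
    emg = max(0.0, min(1.0, emg))                # clamp to [0, 1]
    friction = f_min + (f_max - f_min) * emg     # higher EMG -> more friction
    color = (int(255 * emg), int(255 * (1.0 - emg)), 0)  # green -> red
    return friction, color

friction, color = emg_to_feedback(0.0)  # relaxed muscle: low friction, green
```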

  3. Feedback tracking control for dynamic morphing of piezocomposite actuated flexible wings

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoming; Zhou, Wenya; Wu, Zhigang

    2018-03-01

    Aerodynamic properties of flexible wings can be improved via shape morphing using piezocomposite materials. Dynamic shape control of flexible wings is investigated in this study by considering the interactions between structural dynamics, unsteady aerodynamics and piezo-actuations. A novel antisymmetric angle-ply bimorph configuration of piezocomposite actuators is presented to realize coupled bending-torsional shape control. The active aeroelastic model is derived using finite element method and Theodorsen unsteady aerodynamic loads. A time-varying linear quadratic Gaussian (LQG) tracking control system is designed to enhance aerodynamic lift with pre-defined trajectories. Proof-of-concept simulations of static and dynamic shape control are presented for a scaled high-aspect-ratio wing model. Vibrations of the wing and fluctuations in aerodynamic forces are caused by using the static voltages directly in dynamic shape control. The lift response has tracked the trajectories well with favorable dynamic morphing performance via feedback tracking control.

  4. Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset

    NASA Astrophysics Data System (ADS)

    Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi

    2017-11-01

    Although various visual tracking algorithms have been proposed over the last two to three decades, effective tracking under fast motion, deformation, occlusion, etc. remains a challenging problem. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When combined feature vectors are input to the visual models, redundancy may cause low efficiency and ambiguity may cause poor performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where the visual modeling is treated as an optimization problem from the perspective of evolution. Each feature vector is treated as a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" has been avoided while the accuracy of the visual model has been improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked with a number of state-of-the-art approaches. The comprehensive experiments have demonstrated the efficacy of the proposed methodology.
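
    A minimal member of the classical evolutionary-algorithm family named above is the (1+1) scheme on a binary feature mask; the informative/contaminated index sets and the fitness function below are invented stand-ins for a tracker's actual validation score.

```python
import random

# Sketch: "decontaminating" a feature vector by evolving a binary keep/drop
# mask with a (1+1) evolutionary algorithm. The fitness rewards keeping
# informative features and penalizes contaminated ones (invented sets).

INFORMATIVE = {0, 2, 5}      # assumed indices of useful features
CONTAMINATED = {1, 3}        # assumed indices of harmful features

def fitness(mask):
    kept = {i for i, bit in enumerate(mask) if bit}
    return len(kept & INFORMATIVE) - len(kept & CONTAMINATED)

def evolve_mask(n_features=6, generations=200, seed=0):
    rng = random.Random(seed)
    mask = [1] * n_features              # start from the full feature set
    for _ in range(generations):
        # flip each bit with probability 1/n, keep the child if no worse
        child = [bit ^ (rng.random() < 1.0 / n_features) for bit in mask]
        if fitness(child) >= fitness(mask):
            mask = child
    return mask

mask = evolve_mask()  # fitness never decreases across generations
```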

  5. Refining the modelling of vehicle-track interaction

    NASA Astrophysics Data System (ADS)

    Kaiser, Ingo

    2012-01-01

    An enhanced model of a passenger coach running on a straight track is developed. This model includes wheelsets modelled as rotating flexible bodies, a track consisting of flexible rails supported on discrete sleepers and wheel-rail contact modules, which can describe non-elliptic contact patches based on a boundary element method (BEM). For the scenarios of undisturbed centred running and permanent hunting, the impact of the structural deformations of the wheelsets and the rails on the stress distribution in the wheel-rail contact is investigated.

  6. Self-paced model learning for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin

    2017-01-01

    In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, whose learning process is generally from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
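
    The sample-selection/model-update alternation of SPL can be sketched on a toy appearance model (here just a running mean, seeded from the first observation much as a tracker is seeded from the first frame); the age parameter lambda is held fixed for brevity, whereas SPL normally grows it to admit harder samples over time.

```python
# Sketch of the self-paced idea: alternate between (a) selecting "easy"
# samples whose current loss falls below the age parameter lambda and
# (b) refitting the model on the selected samples. The model here is a mean
# estimator with squared-error loss -- a toy stand-in for an appearance model.

def self_paced_mean(samples, lam, steps=5):
    model = samples[0]                   # seed from the first observation
    for _ in range(steps):
        # binary self-paced weights: keep a sample only if its loss < lambda
        weights = [1 if (x - model) ** 2 < lam else 0 for x in samples]
        if sum(weights) == 0:
            break
        model = sum(x for x, w in zip(samples, weights) if w) / sum(weights)
    return model

# 100.0 is an outlier (a "hard", unreliable sample); SPL never admits it.
clean_model = self_paced_mean([1.0, 2.0, 3.0, 100.0], lam=25.0)
```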

  7. A persistent homology approach to collective behavior in insect swarms

    NASA Astrophysics Data System (ADS)

    Sinhuber, Michael; Ouellette, Nicholas T.

    Various animals from birds and fish to insects tend to form aggregates, displaying self-organized collective swarming behavior. Due to their frequent occurrence in nature and their implications for engineered, collective systems, these systems have been investigated and modeled thoroughly for decades. Common approaches range from modeling them with coupled differential equations on the individual level up to continuum approaches. We present an alternative, topology-based approach for describing swarming behavior at the macroscale rather than the microscale. We study laboratory swarms of Chironomus riparius, a flying, non-biting midge. To obtain the time-resolved three-dimensional trajectories of individual insects, we use a multi-camera stereoimaging and particle-tracking setup. To investigate the swarming behavior in a topological sense, we employ a persistent homology approach to identify persisting structures and features in the insect swarm that elude a direct, ensemble-averaging approach. We are able to identify features of sub-clusters in the swarm that show behavior distinct from that of the remaining swarm members. The coexistence of sub-swarms with different features resembles some non-biological systems such as active colloids or even thermodynamic systems.

  8. Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?

    ERIC Educational Resources Information Center

    Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.

    2005-01-01

    Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…

  9. Technical Report of Successful Deployment of Tandem Visual Tracking During Live Laparoscopic Cholecystectomy Between Novice and Expert Surgeon

    PubMed Central

    Baronia, Benedicto C

    2016-01-01

    With the recent advances in eye tracking technology, it is now possible to track surgeons’ eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movement utilizing simulators and live surgery. None have evaluated simultaneous visual tracking between an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between an expert surgeon and a chief surgical resident. The surgery was carried out uneventfully, and the data was abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is feasible. Further studies with more subjects are needed to confirm these results and support formal data analysis. PMID:27774359

  10. Intelligent Control of Flexible-Joint Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Gallegos, G.

    1997-01-01

    This paper considers the trajectory tracking problem for uncertain rigid-link, flexible-joint manipulators, and presents a new intelligent controller as a solution to this problem. The proposed control strategy is simple and computationally efficient, requires little information concerning either the manipulator or actuator/transmission models, and ensures uniform boundedness of all signals and arbitrarily accurate task-space trajectory tracking.

  11. Deployment/retraction ground testing of a large flexible solar array

    NASA Technical Reports Server (NTRS)

    Chung, D. T.

    1982-01-01

    The simulated zero-gravity ground testing of the flexible fold-up solar array consisting of eighty-four full-size panels (0.368 m × 0.4 m each) is addressed. Automatic, hands-off extension, retraction, and lockup operations are included. Three methods of ground testing were investigated: (1) vertical testing; (2) horizontal testing, using an overhead water trough to support the panels; and (3) horizontal testing, using an overhead track in conjunction with a counterweight system to support the panels. Method 3 was selected as baseline. The wing/assembly vertical support structure, the five-tier overhead track, and the mast-element support track comprise the test structure. The flexible solar array wing assembly was successfully extended and retracted numerous times under simulated zero-gravity conditions.

  12. The Integrated Waste Tracking System - A Flexible Waste Management Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert Stephen

    2001-02-01

    The US Department of Energy (DOE) Idaho National Engineering and Environmental Laboratory (INEEL) has fully embraced a flexible, computer-based tool to help increase waste management efficiency and integrate multiple operational functions from waste generation through waste disposition while reducing cost. The Integrated Waste Tracking System (IWTS) provides comprehensive information management for containerized waste during generation, storage, treatment, transport, and disposal. The IWTS provides all information necessary for facilities to properly manage and demonstrate regulatory compliance. As a platform-independent, client-server and Web-based inventory and compliance system, the IWTS has proven to be a successful tracking, characterization, compliance, and reporting tool that meets the needs of both operations and management while providing a high level of management flexibility.

  13. Adaptive Jacobian Fuzzy Attitude Control for Flexible Spacecraft Combined Attitude and Sun Tracking System

    NASA Astrophysics Data System (ADS)

    Chak, Yew-Chung; Varatharajoo, Renuganth

    2016-07-01

    Many spacecraft attitude control systems today use reaction wheels to deliver precise torques to achieve three-axis attitude stabilization. However, irrecoverable mechanical failure of reaction wheels could potentially lead to mission interruption or total loss. The electrically-powered Solar Array Drive Assemblies (SADA), which are usually installed in the pitch axis and rotate the solar arrays to track the Sun, can produce torques to compensate for the pitch-axis wheel failure. In addition, the attitude control of a flexible spacecraft poses a difficult problem. These difficulties include the strong nonlinear coupled dynamics between the rigid hub and flexible solar arrays, and the imprecisely known system parameters, such as inertia matrix, damping ratios, and flexible mode frequencies. In order to overcome these drawbacks, the adaptive Jacobian tracking fuzzy control is proposed for the combined attitude and sun-tracking control problem of a flexible spacecraft during attitude maneuvers in this work. For the adaptation of kinematic and dynamic uncertainties, the proposed scheme uses an adaptive sliding vector based on estimated attitude velocity via an approximate Jacobian matrix. The unknown nonlinearities are approximated by deriving the fuzzy models with a set of linguistic If-Then rules using the idea of sector nonlinearity and local approximation in fuzzy partition spaces. The uncertain parameters of the estimated nonlinearities and the Jacobian matrix are adjusted online by an adaptive law to realize feedback control. The attitude of the spacecraft can be directly controlled with the Jacobian feedback control when the attitude pointing trajectory is designed with respect to the spacecraft coordinate frame itself.
A significant feature of this work is that the proposed adaptive Jacobian tracking scheme will result in not only the convergence of angular position and angular velocity tracking errors, but also the convergence of estimated angular velocity to the actual angular velocity. Numerical results are presented to demonstrate the effectiveness of the proposed scheme in tracking the desired attitude, as well as suppressing the elastic deflection effects of solar arrays during maneuver.

  14. Complex Versus Simple Ankle Movement Training in Stroke Using Telerehabilitation: A Randomized Controlled Trial

    PubMed Central

    Deng, Huiqiong; Durfee, William K.; Nuckley, David J.; Rheude, Brandon S.; Severson, Amy E.; Skluzacek, Katie M.; Spindler, Kristen K.; Davey, Cynthia S.

    2012-01-01

    Background: Telerehabilitation allows rehabilitative training to continue remotely after discharge from acute care and can include complex tasks known to create rich conditions for neural change. Objectives: The purposes of this study were: (1) to explore the feasibility of using telerehabilitation to improve ankle dorsiflexion during the swing phase of gait in people with stroke and (2) to compare complex versus simple movements of the ankle in promoting behavioral change and brain reorganization. Design: This study was a pilot randomized controlled trial. Setting: Training was done in the participant's home. Testing was done in separate research labs involving functional magnetic resonance imaging (fMRI) and multi-camera gait analysis. Patients: Sixteen participants with chronic stroke and impaired ankle dorsiflexion were assigned randomly to receive 4 weeks of telerehabilitation of the paretic ankle. Intervention: Participants received either computerized complex movement training (track group) or simple movement training (move group). Measurements: Behavioral changes were measured with the 10-m walk test and gait analysis using a motion capture system. Brain reorganization was measured with ankle tracking during fMRI. Results: Dorsiflexion during gait was significantly larger in the track group compared with the move group. For fMRI, although the volume, percent volume, and intensity of cortical activation failed to show significant changes, the frequency count of the number of participants showing an increase versus a decrease in these values from pretest to posttest measurements was significantly different between the 2 groups, with the track group decreasing and the move group increasing. Limitations: Limitations of this study were that no follow-up test was conducted and that a small sample size was used.
Conclusions: The results suggest that telerehabilitation, emphasizing complex task training with the paretic limb, is feasible and can be effective in promoting further dorsiflexion in people with chronic stroke. PMID:22095209

  15. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.
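
    A dual-cache in the spirit of the design above can be sketched as an LRU cache of per-frame tile caches, so playback (temporal) and roam/zoom (spatial) evictions stay independent; the capacities, key types, and LRU policy below are assumptions, not KOLAM's actual implementation.

```python
from collections import OrderedDict

# Sketch: a temporal LRU cache keyed by video frame, each entry holding a
# spatial LRU cache keyed by tile coordinates, so the two levels evict
# independently as the analyst plays back or roams/zooms.

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)           # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict least recently used

frames = LRUCache(capacity=2)                # temporal cache of tile caches
for t in (0, 1, 2):
    tiles = LRUCache(capacity=4)             # spatial cache for frame t
    tiles.put((0, 0), f"tile-bytes@{t}")     # placeholder tile payload
    frames.put(t, tiles)                     # frame 0 gets evicted here
```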

  16. Assessing instrument handling and operative consequences simultaneously: a simple method for creating synced multicamera videos for endosurgical or microsurgical skills assessments.

    PubMed

    Jabbour, Noel; Sidman, James

    2011-10-01

    There has been an increasing interest in assessment of technical skills in most medical and surgical disciplines. Many of these assessments involve microscopy or endoscopy and are thus amenable to video recording for post hoc review. An ideal skills assessment video would provide the reviewer with a simultaneous view of the examinee's instrument handling and the operative field. Ideally, a reviewer should be blinded to the identity of the examinee and whether the assessment was performed as a pretest or posttest examination, when given in conjunction with an educational intervention. We describe a simple method for reliably creating deidentified, multicamera, time-synced videos, which may be used in technical skills assessments. We pilot tested this method in a pediatric airway endoscopy Objective Assessment of Technical Skills (OSATS). Total video length was compared with the OSATS administration time. Thirty-nine OSATS were administered. There were no errors encountered in time-syncing the videos using this method. Mean duration of OSATS videos was 11 minutes and 20 seconds, which was significantly less than the time needed for an expert to be present at the administration of each 30-minute OSATS (P < 0.001). The described method for creating time-synced, multicamera skills assessment videos is reliable and may be used in endosurgical or microsurgical skills assessments. Compared with live review, post hoc video review using this method can save valuable expert reviewer time. Most importantly, this method allows a reviewer to simultaneously evaluate an examinee's instrument handling and the operative field while being blinded to the examinee's identity and timing of examination administration.

  17. Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine

    ERIC Educational Resources Information Center

    Fox, Sharon E.; Faulkner-Jones, Beverly E.

    2017-01-01

    Eye-tracking is the measurement of eye motions and point of gaze of a viewer. Advances in this technology have been essential to our understanding of many forms of visual learning, including the development of visual expertise. In recent years, these studies have been extended to the medical professions, where eye-tracking technology has helped us…

  18. Fast Deep Tracking via Semi-Online Domain Adaptation

    NASA Astrophysics Data System (ADS)

    Li, Xiaoping; Luo, Wenbing; Zhu, Yi; Li, Hanxi; Wang, Mingwen

    2018-04-01

    Deep trackers have demonstrated overwhelming superiority over shallow methods. Unfortunately, they also suffer from low frame rates. To alleviate the problem, a number of real-time deep trackers have been proposed that remove the online updating procedure on the CNN model. However, the absence of online updates leads to a significant drop in tracking accuracy. In this work, we propose to perform domain adaptation for visual tracking in two stages, transferring information from the visual tracking domain and the instance domain respectively. In this way, the proposed visual tracker achieves tracking accuracy comparable to the state-of-the-art trackers and runs at real-time speed on an ordinary consumer GPU.

  19. 49 CFR 214.509 - Required visual illumination and reflective devices for new on-track roadway maintenance machines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... devices for new on-track roadway maintenance machines. 214.509 Section 214.509 Transportation Other... TRANSPORTATION RAILROAD WORKPLACE SAFETY On-Track Roadway Maintenance Machines and Hi-Rail Vehicles § 214.509 Required visual illumination and reflective devices for new on-track roadway maintenance machines. Each new...

  20. VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model.

    PubMed

    Yu, Bowen; Silva, Claudio T

    2017-01-01

    Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limitation of interactivity from past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces the diagram complexity and improves usability. We demonstrate the capability of VisFlow on two case studies with domain experts on real-world datasets showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.
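
    The subset flow model can be illustrated in a few lines: every edge carries a subset of row indices of one original table, so rendering properties attach to rows unambiguously. Node and property names here are illustrative, not VisFlow's actual API.

```python
# Sketch of the subset flow model: all data transmitted between nodes is a
# subset of row indices of the original input table, so a downstream node can
# assign rendering properties to rows without ambiguity.

TABLE = [
    {"city": "NYC", "temp": 31},
    {"city": "LA",  "temp": 72},
    {"city": "SF",  "temp": 58},
]

def filter_node(rows, pred):
    """A filter node: consumes a subset of row indices, emits a smaller one."""
    return {i for i in rows if pred(TABLE[i])}

def render_node(rows, color):
    """A visualization node: attaches a rendering property to the subset."""
    return {i: color for i in rows}

source = set(range(len(TABLE)))               # the full table enters the flow
warm = filter_node(source, lambda r: r["temp"] > 50)
props = render_node(warm, color="red")        # brushing/linking stays row-exact
```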

  1. Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification.

    PubMed

    Su, Chi; Yang, Fan; Zhang, Shiliang; Tian, Qi; Davis, Larry Steven; Gao, Wen

    2018-05-01

    We propose Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) to address the problem of person re-identification across multiple cameras. Re-identification on different cameras is treated as a set of related tasks, which allows information shared among the tasks to be exploited to improve re-identification accuracy. The MTL-LORAE framework integrates low-level features with mid-level attributes as descriptions of persons. To improve the accuracy of such descriptions, we introduce a low-rank attribute embedding that maps the original binary attributes into a continuous space by exploiting the pairwise correlations between attributes. In this way, inaccurate attributes are rectified and missing attributes are recovered. The resulting objective function combines an attribute embedding error with a quadratic loss on class labels and is solved by an alternating optimization strategy. The proposed MTL-LORAE is tested on four datasets and outperforms existing methods by significant margins.
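    As a rough numerical illustration of why a low-rank embedding can rectify noisy binary attributes: when attributes are pairwise correlated, the attribute matrix is approximately low-rank, and projecting it onto its top singular subspace pulls corrupted entries back toward values consistent with the rest. The toy below uses a plain SVD projection with synthetic data; it is not the actual MTL-LORAE objective or optimization.

```python
import numpy as np

# Toy illustration only: 8 attributes generated from 3 underlying ones, so
# the true attribute matrix has rank <= 3 and an SVD projection denoises it.
rng = np.random.default_rng(0)

base = (rng.random((50, 3)) > 0.5).astype(float)   # 3 independent attributes
truth = base[:, [0, 0, 1, 1, 2, 2, 0, 1]]          # 8 correlated attributes

noisy = truth.copy()
flips = rng.random(truth.shape) < 0.1              # corrupt 10% of entries
noisy[flips] = 1.0 - noisy[flips]

# Rank-3 SVD projection: a continuous-valued attribute embedding.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
embedded = (U[:, :3] * s[:3]) @ Vt[:3, :]

rectified = (embedded > 0.5).astype(float)         # back to binary
print("errors before:", int((noisy != truth).sum()))
print("errors after: ", int((rectified != truth).sum()))
```

    The embedding is continuous, so downstream matching can also weight attributes by confidence instead of treating them as hard binary labels.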

  2. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    PubMed

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration of the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is subject to multi-degree-of-freedom vibrations induced by the moving vehicle. Even a small vibration can result in substantial measurement errors. To overcome this problem, a method for rectifying vehicle motion deviation is investigated. Experiments with the inspection system yield satisfactory online measurement results.

  3. A Bilateral Advantage for Storage in Visual Working Memory

    ERIC Educational Resources Information Center

    Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward

    2010-01-01

    Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…

  4. Multi-modal information processing for visual workload relief

    NASA Technical Reports Server (NTRS)

    Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.

    1980-01-01

    The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.

  5. Control of a flexible bracing manipulator: Integration of current research work to realize the bracing manipulator

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo

    1991-01-01

    All research results about flexible manipulator control were integrated to show a control scenario of a bracing manipulator. First, dynamic analysis of a flexible manipulator was done for modeling. Second, from the dynamic model, the inverse dynamic equation was derived, and the time domain inverse dynamic method was proposed for the calculation of the feedforward torque and the desired flexible coordinate trajectories. Third, a tracking controller was designed by combining the inverse dynamic feedforward control with the joint feedback control. The control scheme was applied to the tip position control of a single link flexible manipulator for zero and non-zero initial condition cases. Finally, the contact control scheme was added to the position tracking control. A control scenario of a bracing manipulator is provided and evaluated through simulation and experiment on a single link flexible manipulator.

  6. Teleoperation of steerable flexible needles by combining kinesthetic and vibratory feedback.

    PubMed

    Pacchierotti, Claudio; Abayazid, Momen; Misra, Sarthak; Prattichizzo, Domenico

    2014-01-01

    Needle insertion into soft tissue is a minimally invasive surgical procedure that demands high accuracy. In this respect, robotic systems with autonomous control algorithms have been exploited as the main tool for achieving high accuracy and reliability. However, for reasons of safety and responsibility, autonomous robotic control is often not desirable. It is therefore also necessary to develop techniques that enable clinicians to directly control the motion of the surgical tools. In this work, we address that challenge and present a novel teleoperated robotic system able to steer flexible needles. The proposed system tracks the position of the needle using an ultrasound imaging system and computes the needle's ideal position and orientation to reach a given target. The master haptic interface then provides the clinician with mixed kinesthetic-vibratory navigation cues to guide the needle toward the computed ideal position and orientation. Twenty participants carried out a teleoperated needle-insertion experiment on a soft-tissue phantom under four experimental conditions. Participants were provided with either mixed kinesthetic-vibratory feedback or mixed kinesthetic-visual feedback. Moreover, we considered two different ways of computing the ideal position and orientation of the needle: with or without set-points. Vibratory feedback was found more effective than visual feedback in conveying navigation cues, with a mean targeting error of 0.72 mm when using set-points, and of 1.10 mm without set-points.

  7. Within-Hemifield Competition in Early Visual Areas Limits the Ability to Track Multiple Objects with Attention

    PubMed Central

    Alvarez, George A.; Cavanagh, Patrick

    2014-01-01

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential (SSVEP), an oscillatory response of the visual cortex elicited by flickering stimuli, for moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651
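    The frequency-tagging logic behind the steady-state measurement can be sketched numerically: each stimulus flickers at its own frequency, and the spectral amplitude of the recorded signal at that frequency indexes how strongly the stimulus is represented. The frequencies, amplitudes, and noise level below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Sketch of SSVEP frequency tagging: an attended stimulus modulated at
# 12 Hz contributes a larger spectral peak than an unattended one at 15 Hz.
fs, dur = 250.0, 4.0                       # sample rate (Hz), duration (s)
t = np.arange(0.0, dur, 1.0 / fs)

f_target, f_distractor = 12.0, 15.0        # illustrative tag frequencies
signal = (2.0 * np.sin(2 * np.pi * f_target * t)        # attended: larger
          + 1.0 * np.sin(2 * np.pi * f_distractor * t)  # unattended: smaller
          + np.random.default_rng(1).normal(0.0, 0.5, t.size))  # noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def amp_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(f"12 Hz (target)    : {amp_at(12.0):.2f}")
print(f"15 Hz (distractor): {amp_at(15.0):.2f}")
```

    A 4 s window gives 0.25 Hz bin spacing, so both tag frequencies land exactly on FFT bins, which keeps the peaks from smearing into neighboring bins.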

  8. Structural dynamic interaction with solar tracking control for evolutionary Space Station concepts

    NASA Technical Reports Server (NTRS)

    Lim, Tae W.; Cooper, Paul A.; Ayers, J. Kirk

    1992-01-01

    The sun tracking control system design of the Solar Alpha Rotary Joint (SARJ) and the interaction of the control system with the flexible structure of Space Station Freedom (SSF) evolutionary concepts are addressed. The significant components of the space station pertaining to the SARJ control are described and the tracking control system design is presented. Finite element models representing two evolutionary concepts, enhanced operations capability (EOC) and extended operations capability (XOC), are employed to evaluate the influence of low frequency flexible structure on the control system design and performance. The design variables of the control system are synthesized using a constrained optimization technique to meet design requirements, to provide a given level of control system stability margin, and to achieve the most responsive tracking performance. The resulting SARJ control system design and performance of the EOC and XOC configurations are presented and compared to those of the SSF configuration. Performance limitations caused by the low frequency of the dominant flexible mode are discussed.

  9. Automatically processed alpha-track radon monitor

    DOEpatents

    Langner, Jr., G. Harold

    1993-01-01

    An automatically processed alpha-track radon monitor is provided which includes a housing having an aperture allowing radon entry, and a filter that excludes the entry of radon daughters into the housing. A flexible track registration material is located within the housing that records alpha-particle emissions from the decay of radon and radon daughters inside the housing. The flexible track registration material is capable of being spliced such that the registration material from a plurality of monitors can be spliced into a single strip to facilitate automatic processing of the registration material from the plurality of monitors. A process for the automatic counting of radon registered by a radon monitor is also provided.

  10. Automatically processed alpha-track radon monitor

    DOEpatents

    Langner, G.H. Jr.

    1993-01-12

    An automatically processed alpha-track radon monitor is provided which includes a housing having an aperture allowing radon entry, and a filter that excludes the entry of radon daughters into the housing. A flexible track registration material is located within the housing that records alpha-particle emissions from the decay of radon and radon daughters inside the housing. The flexible track registration material is capable of being spliced such that the registration material from a plurality of monitors can be spliced into a single strip to facilitate automatic processing of the registration material from the plurality of monitors. A process for the automatic counting of radon registered by a radon monitor is also provided.

  11. Emerging fiber optic endomicroscopy technologies towards noninvasive real-time visualization of histology in situ

    NASA Astrophysics Data System (ADS)

    Xi, Jiefeng; Zhang, Yuying; Huo, Li; Chen, Yongping; Jabbour, Toufic; Li, Ming-Jun; Li, Xingde

    2010-09-01

    This paper reviews our recent developments of ultrathin fiber-optic endomicroscopy technologies for transforming high-resolution noninvasive optical imaging techniques to in vivo and clinical applications such as early disease detection and guidance of interventions. Specifically we describe an all-fiber-optic scanning endomicroscopy technology, which miniaturizes a conventional bench-top scanning laser microscope down to a flexible fiber-optic probe of a small footprint (i.e. ~2-2.5 mm in diameter), capable of performing two-photon fluorescence and second harmonic generation microscopy in real time. This technology aims to enable real-time visualization of histology in situ without the need for tissue removal. We will also present a balloon OCT endoscopy technology which permits high-resolution 3D imaging of the entire esophagus for detection of neoplasia, guidance of biopsy and assessment of therapeutic outcome. In addition we will discuss the development of functional polymeric fluorescent nanocapsules, which use only FDA-approved materials and potentially enable fast-track clinical translation of optical molecular imaging and targeted therapy.

  12. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    PubMed Central

    2011-01-01

Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. 
This effect was greater for the hemiparetic arm, suggesting that the increased demands associated with controlling an affected arm make the motor system more prone to slack when distracted. Providing an alternate sensory channel for feedback, i.e., auditory feedback of tracking error, enabled the participants to simultaneously perform the tracking task and distracter task effectively. Thus, incorporating real-time auditory feedback of performance errors might improve clinical outcomes of robotic therapy systems. PMID:21513561
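    The auditory feedback rule described above (a repeating beep whose repetition rate grows with tracking error) can be sketched as a clamped linear map from error to beeps per second. The rate limits and the linear form are assumptions for illustration, not the study's actual parameters.

```python
# Sketch of error-to-sound feedback: beep repetition rate rises linearly
# with tracking error, clamped to an assumed [1, 8] beeps/s range.

def beep_rate_hz(error, max_error=10.0, min_rate=1.0, max_rate=8.0):
    """Map tracking error magnitude to a beep repetition rate in Hz."""
    frac = min(max(error / max_error, 0.0), 1.0)   # clamp to [0, 1]
    return min_rate + frac * (max_rate - min_rate)

for err in (0.0, 2.5, 5.0, 10.0, 15.0):
    print(f"error={err:5.1f} -> {beep_rate_hz(err):.2f} beeps/s")
```

    A monotone, clamped mapping like this keeps the auditory channel informative without saturating listeners at extreme errors, which is why it is a common choice for sonified performance feedback.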

  13. Kinematic performance of fine motor control in attention-deficit/hyperactivity disorder: the effects of comorbid developmental coordination disorder and core symptoms.

    PubMed

    Lee, I-Ching; Chen, Yung-Jung; Tsai, Chin-Liang

    2013-02-01

    The aims of this study were: (i) to determine whether differences exist in the fine motor fluency and flexibility of three groups (children with attention-deficit/hyperactivity disorder [ADHD], children in whom ADHD is comorbid with developmental coordination disorder [DCD] [denoted as ADHD+DCD], and a typically developing control group); and (ii) to clarify whether the degree of severity of core symptoms affects performance. The Peabody Picture Vocabulary Test-Revised, the Beery-Buktenica Development Test of Visual-Motor Integration and the Movement Assessment Battery for Children were used as prescreening tests. The Integrated Visual and Auditory+Plus test was utilized to assess subjects' attention. The redesigned fine motor tracking and pursuit tasks were administered to evaluate subjects' fine motor performance. No significant difference was found when comparing the performance of the children with ADHD and the typically developing group. Significant differences existed between children in whom ADHD is comorbid with DCD and typically developing children. Children with ADHD demonstrated proper fine motor fluency and flexibility, and deficient performance occurred when ADHD was comorbid with developmental coordination disorder. Children with ADHD had more difficulty implementing closed-loop movements that required higher levels of cognitive processing than their typically developing peers. Also, deficits in fine motor control were more pronounced when ADHD was combined with movement coordination problems. The severity of core symptoms affected the fine motor flexibility of children with ADHD more than their fluency. In children with pure ADHD, unsmooth movement performance was highly related to the severity of core symptoms. © 2012 The Authors. Pediatrics International © 2012 Japan Pediatric Society.

  14. High-bandwidth and flexible tracking control for precision motion with application to a piezo nanopositioner.

    PubMed

    Feng, Zhao; Ling, Jie; Ming, Min; Xiao, Xiao-Hui

    2017-08-01

    For precision motion, high bandwidth and flexible tracking are the two important issues for significant performance improvement. Iterative learning control (ILC) is an effective feedforward control method, but only for systems that operate strictly repetitively. Although projection ILC can track varying references, its performance is still limited by the fixed-bandwidth Q-filter, especially for the triangular waves commonly tracked by a piezo nanopositioner. In this paper, a wavelet transform-based linear time-varying (LTV) Q-filter design for projection ILC is proposed to compensate high-frequency errors and improve the ability to track varying references simultaneously. The LTV Q-filter is designed from the modulus maxima of the wavelet detail coefficients, which determine the high-frequency locations in each iteration, with the advantages of avoiding cross-terms and manual segmentation. The proposed approach was verified on a piezo nanopositioner. Experimental results indicate that the proposed approach can locate the high-frequency regions accurately and achieves the best performance under varying references compared with traditional frequency-domain ILC and projection ILC with a fixed-bandwidth Q-filter, which validates that implementing the LTV Q-filter in projection ILC achieves high-bandwidth and flexible tracking simultaneously.
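    The core idea of locating high-frequency error regions from wavelet detail coefficients can be sketched with a level-1 Haar transform: large-modulus detail coefficients mark where the tracking error concentrates high-frequency content, and a time-varying Q-filter would open its bandwidth only there. The Haar choice and the simple threshold-on-modulus rule are simplifying assumptions, not the paper's exact modulus-maximum design.

```python
import numpy as np

def haar_detail(x):
    """Level-1 Haar detail coefficients: scaled differences of sample pairs."""
    x = np.asarray(x, dtype=float)
    n = len(x) // 2
    return (x[0:2 * n:2] - x[1:2 * n:2]) / np.sqrt(2.0)

def high_freq_regions(error, thresh_ratio=0.5):
    """Indices of detail coefficients with large modulus, i.e. locations
    where the error signal concentrates high-frequency content."""
    d = np.abs(haar_detail(error))
    return np.flatnonzero(d > thresh_ratio * d.max())

# Synthetic tracking error: a slow component plus one high-frequency burst
# (e.g. around a triangular reference's turnaround point).
t = np.arange(200) / 200.0
error = 0.1 * np.sin(2 * np.pi * 2 * t)
error[90:110] += 0.5 * np.sin(2 * np.pi * 40 * t[90:110])

idx = high_freq_regions(error)
# Each flagged coefficient covers one sample pair; all flagged pairs fall
# inside the burst (pairs 45-54), so a time-varying Q-filter would widen
# its bandwidth only over those samples.
print(idx)
```

    Because the localization is done per iteration on the measured error, the filter adapts when the reference changes, which is what distinguishes this from a fixed-bandwidth Q-filter.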

  15. Phytotracker, an information management system for easy recording and tracking of plants, seeds and plasmids

    PubMed Central

    2012-01-01

    Background A large number of different plant lines are produced and maintained in a typical plant research laboratory, both as seed stocks and in active growth. These collections need careful and consistent management to track and maintain them properly, and this is a particularly pressing issue in laboratories undertaking research involving genetic manipulation due to regulatory requirements. Researchers and PIs need to access these data and collections, and therefore an easy-to-use plant-oriented laboratory information management system that implements, maintains and displays the information in a simple and visual format would be of great help in both the daily work in the lab and in ensuring regulatory compliance. Results Here, we introduce ‘Phytotracker’, a laboratory management system designed specifically to organise and track plasmids, seeds and growing plants that can be used in mixed platform environments. Phytotracker is designed with simplicity of user operation and ease of installation and management as the major factor, whilst providing tracking tools that cover the full range of activities in molecular genetics labs. It utilises the cross-platform Filemaker relational database, which allows it to be run as a stand-alone or as a server-based networked solution available across all workstations in a lab that can be internet accessible if desired. It can also be readily modified or customised further. Phytotracker provides cataloguing and search functions for plasmids, seed batches, seed stocks and plants growing in pots or trays, and allows tracking of each plant from seed sowing, through harvest to the new seed batch and can print appropriate labels at each stage. The system enters seed information as it is transferred from the previous harvest data, and allows both selfing and hybridization (crossing) to be defined and tracked. Transgenic lines can be linked to their plasmid DNA source. 
This ease of use and flexibility helps users reduce the time needed to organise their plants, seeds and plasmids and to maintain laboratory continuity when multiple workers are involved. Conclusion We have developed and used Phytotracker for over five years and have found it to be an intuitive, powerful and flexible research tool for organising our plasmid, seed and plant collections, requiring minimal maintenance and training for users. It has been developed in an Arabidopsis molecular genetics environment, but can be readily adapted for almost any plant research laboratory. PMID:23062011

  16. Phytotracker, an information management system for easy recording and tracking of plants, seeds and plasmids.

    PubMed

    Nieuwland, Jeroen; Sornay, Emily; Marchbank, Angela; de Graaf, Barend Hj; Murray, James Ah

    2012-10-13

    A large number of different plant lines are produced and maintained in a typical plant research laboratory, both as seed stocks and in active growth. These collections need careful and consistent management to track and maintain them properly, and this is a particularly pressing issue in laboratories undertaking research involving genetic manipulation due to regulatory requirements. Researchers and PIs need to access these data and collections, and therefore an easy-to-use plant-oriented laboratory information management system that implements, maintains and displays the information in a simple and visual format would be of great help in both the daily work in the lab and in ensuring regulatory compliance. Here, we introduce 'Phytotracker', a laboratory management system designed specifically to organise and track plasmids, seeds and growing plants that can be used in mixed platform environments. Phytotracker is designed with simplicity of user operation and ease of installation and management as the major factor, whilst providing tracking tools that cover the full range of activities in molecular genetics labs. It utilises the cross-platform Filemaker relational database, which allows it to be run as a stand-alone or as a server-based networked solution available across all workstations in a lab that can be internet accessible if desired. It can also be readily modified or customised further. Phytotracker provides cataloguing and search functions for plasmids, seed batches, seed stocks and plants growing in pots or trays, and allows tracking of each plant from seed sowing, through harvest to the new seed batch and can print appropriate labels at each stage. The system enters seed information as it is transferred from the previous harvest data, and allows both selfing and hybridization (crossing) to be defined and tracked. Transgenic lines can be linked to their plasmid DNA source. 
This ease of use and flexibility helps users reduce the time needed to organise their plants, seeds and plasmids and to maintain laboratory continuity when multiple workers are involved. We have developed and used Phytotracker for over five years and have found it to be an intuitive, powerful and flexible research tool for organising our plasmid, seed and plant collections, requiring minimal maintenance and training for users. It has been developed in an Arabidopsis molecular genetics environment, but can be readily adapted for almost any plant research laboratory.

  17. Feature-based interference from unattended visual field during attentional tracking in younger and older adults.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2011-02-01

    The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing older adults showed poor tracking performance overall.

  18. Development of Flexible Visual Recognition Memory in Human Infants

    ERIC Educational Resources Information Center

    Robinson, Astri J.; Pascalis, Olivier

    2004-01-01

    Research using the visual paired comparison task has shown that visual recognition memory across changing contexts is dependent on the integrity of the hippocampal formation in human adults and in monkeys. The acquisition of contextual flexibility may contribute to the change in memory performance that occurs late in the first year of life. To…

  19. Human image tracking technique applied to remote collaborative environments

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Suzuki, Gen

    1993-10-01

    To support various kinds of collaborations over long distances by using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the necessity for automatic human tracking increases. Based on the movement area of the person and the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area closely follows the movement of the human head.

  20. A magnetic tether system to investigate visual and olfactory mediated flight control in Drosophila.

    PubMed

    Duistermars, Brian J; Frye, Mark

    2008-11-21

    It has been clear for many years that insects use visual cues to stabilize their heading in a wind stream. Many animals track odors carried in the wind. As such, visual stabilization of upwind tracking directly aids in odor tracking. But do olfactory signals directly influence visual tracking behavior independently from wind cues? Also, the recent deluge of research on the neurophysiology and neurobehavioral genetics of olfaction in Drosophila has motivated ever more technically sophisticated and quantitative behavioral assays. Here, we modified a magnetic tether system originally devised for vision experiments by equipping the arena with narrow laminar flow odor plumes. A fly is glued to a small steel pin and suspended in a magnetic field that enables it to yaw freely. Small diameter food odor plumes are directed downward over the fly's head, eliciting stable tracking by a hungry fly. Here we focus on the critical mechanics of tethering, aligning the magnets, devising the odor plume, and confirming stable odor tracking.

  1. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  2. Alternative images for perpendicular parking : a usability test of a multi-camera parking assistance system.

    DOT National Transportation Integrated Search

    2004-10-01

    The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...

  3. Figure–ground discrimination behavior in Drosophila. I. Spatial organization of wing-steering responses

    PubMed Central

    Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.

    2014-01-01

    The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267

  4. Within-hemifield competition in early visual areas limits the ability to track multiple objects with attention.

    PubMed

    Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick

    2014-08-27

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex.
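The frequency-tagging logic described above — comparing spectral amplitude at the target versus distractor flicker frequencies — can be sketched as follows. The sampling rate, tagging frequencies, and synthetic signal are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def tagged_amplitudes(signal, fs, freqs):
    """Spectral amplitude at each tagging frequency.

    signal: 1-D trace (e.g., one occipital EEG channel), fs: sampling rate
    in Hz, freqs: flicker frequencies assigned to targets/distractors.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n * 2  # single-sided amplitude
    bins = np.fft.rfftfreq(n, d=1.0 / fs)
    # pick the spectral bin closest to each tagging frequency
    return {f: spectrum[np.argmin(np.abs(bins - f))] for f in freqs}

# Synthetic example: a "target" tagged at 12 Hz with higher amplitude than
# a "distractor" tagged at 15 Hz, mimicking an attentional modulation.
fs = 500
t = np.arange(0, 4, 1 / fs)
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + 1.0 * np.sin(2 * np.pi * 15 * t)
amps = tagged_amplitudes(eeg, fs, [12, 15])
```

With a 4 s window the frequency resolution is 0.25 Hz, so both tagging frequencies fall on exact FFT bins and the recovered amplitudes match the sinusoid amplitudes.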

  5. Bonsai: an event-based framework for processing and controlling data streams

    PubMed Central

    Lopes, Gonçalo; Bonacchi, Niccolò; Frazão, João; Neto, Joana P.; Atallah, Bassam V.; Soares, Sofia; Moreira, Luís; Matias, Sara; Itskov, Pavel M.; Correia, Patrícia A.; Medina, Roberto E.; Calcaterra, Lorenza; Dreosti, Elena; Paton, Joseph J.; Kampff, Adam R.

    2015-01-01

    The design of modern scientific experiments requires the control and monitoring of many different data streams. However, the serial execution of programming instructions in a computer makes it a challenge to develop software that can deal with the asynchronous, parallel nature of scientific data. Here we present Bonsai, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams. We describe Bonsai's core principles and architecture and demonstrate how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience. We specifically highlight some applications that require the combination of many different hardware and software components, including video tracking of behavior, electrophysiology and closed-loop control of stimulation. PMID:25904861
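Bonsai itself composes observable sequences in a visual programming language; purely as an illustration of the event-based dataflow idea (not Bonsai's actual API), a chain of generator "nodes" in Python behaves analogously:

```python
def source(events):
    """Source node: emits incoming events (e.g., frames, spikes) in order."""
    for e in events:
        yield e

def threshold(stream, level):
    """Processing node: forwards only events whose value exceeds a level."""
    for e in stream:
        if e > level:
            yield e

def moving_sum(stream, window):
    """Processing node: emits a running sum over the last `window` events."""
    buf = []
    for e in stream:
        buf.append(e)
        if len(buf) > window:
            buf.pop(0)
        yield sum(buf)

# chain nodes like operators on a data stream
pipeline = moving_sum(threshold(source([1, 5, 2, 7, 9, 3]), 2), 2)
out = list(pipeline)  # events flow lazily through the chain
```

Each node consumes events as they arrive and pushes results downstream, which is the property that lets such frameworks mix video, electrophysiology, and control streams without blocking.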

  6. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

    Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also to track specific behaviors when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, with particular attention to the data reduction algorithms and logic that transform raw eye tracking data into quantified visual behavior metrics, and to the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation
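One common reduction from raw gaze samples to a visual behavior metric is dwell proportion per area of interest (AOI). A minimal sketch, assuming rectangular AOIs and a fixed sampling rate; the function and AOI names are hypothetical, not NASA's actual pipeline:

```python
def dwell_proportions(samples, aois):
    """Fraction of gaze samples falling in each rectangular AOI.

    samples: (x, y) gaze points collected at a fixed rate;
    aois: name -> (x0, y0, x1, y1) bounding box.
    """
    counts = {name: 0 for name in aois}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return {name: c / len(samples) for name, c in counts.items()}

# hypothetical example: 6 samples on a head-up display area, 4 head-down
gaze = [(10, 10)] * 6 + [(100, 100)] * 4
props = dwell_proportions(gaze, {"HUD": (0, 0, 50, 50),
                                 "HDD": (60, 60, 150, 150)})
```

Because the sampling rate is fixed, sample counts are proportional to dwell time, so these fractions compare attention allocation across display concepts directly.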

  7. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
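The combined tracker described above fuses drift-prone incremental motion estimates with occasional absolute model registration. A one-dimensional sketch of that complementary idea follows; the blending weight and data are illustrative, not the authors' actual estimator:

```python
def fuse(increments, absolute_fixes, alpha=0.8):
    """Fuse dead-reckoned motion with occasional absolute pose fixes.

    increments: per-step motion estimates from the feature tracker
    (accumulating drift); absolute_fixes: step index -> absolute pose
    from shape-based model registration, available only occasionally.
    """
    pose, trajectory = 0.0, []
    for k, d in enumerate(increments):
        pose += d                      # dead-reckon from feature matches
        if k in absolute_fixes:        # blend in registration when available
            pose = alpha * pose + (1 - alpha) * absolute_fixes[k]
        trajectory.append(pose)
    return trajectory

# true motion is 1.0/step; the feature tracker is biased by +0.1/step,
# and model registration supplies drift-free fixes at steps 4 and 9
traj = fuse([1.1] * 10, {4: 5.0, 9: 10.0})
```

Even with a small blending weight on the absolute fixes, the fused estimate stays closer to the true final pose (10.0) than pure dead reckoning (11.0) would.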

  8. Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments

    PubMed Central

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2013-01-01

    To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robot perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when there are not maps available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal. PMID:23271604

  9. Self-organized multi-camera network for a fast and easy deployment of ubiquitous robots in unknown environments.

    PubMed

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2012-12-27

    To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robot perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when there are not maps available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal.

  10. Flexible ex vivo phantoms for validation of diffusion tensor tractography on a clinical scanner.

    PubMed

    Watanabe, Makoto; Aoki, Shigeki; Masutani, Yoshitaka; Abe, Osamu; Hayashi, Naoto; Masumoto, Tomohiko; Mori, Harushi; Kabasawa, Hiroyuki; Ohtomo, Kuni

    2006-11-01

    The aim of this study was to develop flexible ex vivo diffusion tensor (DT) phantoms. Materials were bundles of textile threads of cotton, monofilament nylon, rayon, and polyester bunched with spiral wrapping bands and immersed in water. DT images were acquired on a 1.5-Tesla clinical magnetic resonance scanner using echo planar imaging sequences with 15 motion-probing gradient directions. DT tractography with seeding and a line-tracking method was carried out with software originally developed on a PC-based workstation. We observed relatively high fractional anisotropy in the polyester phantom and were able to reconstruct tractography. Straight tracts along the bundle were displayed when it was arranged linearly. The bundle was easily bent into an arc or bifurcated at one end, and the tracts followed its course, whether curved or branched, in good agreement with direct visual observation. Tractography with the other fibers was unsuccessful. The polyester phantom revealed a diffusion-anisotropic structure according to its shape and, unlike the living central nervous system, can be used repeatedly under the same conditions. It would be useful for validating DT sequences and for optimizing the algorithms and parameters of DT tractography software. Additionally, the flexibility of the phantom would enable us to model human axonal projections.
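The seed-and-step line-tracking principle behind such tractography software can be sketched in a simplified 2-D form. The analytic direction field below stands in for the tensor's principal eigenvector and is purely illustrative:

```python
import numpy as np

def track_line(seed, direction_field, step=0.5, n_steps=100):
    """Seed-and-step line tracking: from a seed point, repeatedly step
    along the local principal diffusion direction.

    direction_field(p) returns a unit vector at point p (here an analytic
    stand-in for the tensor's principal eigenvector at a voxel).
    """
    p = np.asarray(seed, dtype=float)
    path = [p.copy()]
    d_prev = None
    for _ in range(n_steps):
        d = np.asarray(direction_field(p), dtype=float)
        if d_prev is not None and np.dot(d, d_prev) < 0:
            d = -d          # keep a consistent orientation along the tract
        p = p + step * d
        path.append(p.copy())
        d_prev = d
    return np.array(path)

# straight "bundle" along x: the tract should advance n_steps * step in x
field = lambda p: np.array([1.0, 0.0])
path = track_line([0.0, 0.0], field)
```

Real implementations add stopping criteria (low fractional anisotropy, sharp turning angles), which this sketch omits.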

  11. The research and application of visual saliency and adaptive support vector machine in target tracking field.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    Efficient target tracking algorithms have become a focus of intelligent robotics research. The main difficulty in target tracking for mobile robots is environmental uncertainty: target states are hard to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all degrade tracking robustness. To improve tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on a mixture saliency of image features, including color, brightness, and motion, whose shared characteristics are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination.
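A minimal sketch of the mixture-saliency idea — normalizing per-pixel color, brightness, and motion feature maps and summing them — with hypothetical equal weights; the paper's actual feature extraction and fusion are more elaborate:

```python
import numpy as np

def mixture_saliency(color, brightness, motion, weights=(1.0, 1.0, 1.0)):
    """Fuse normalized feature maps into one saliency map in [0, 1]."""
    out = np.zeros_like(np.asarray(color, dtype=float))
    for w, m in zip(weights, (color, brightness, motion)):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        if rng > 0:
            out += w * (m - m.min()) / rng  # min-max normalize each feature
    return out / out.max() if out.max() > 0 else out

# toy frame: a patch at (2, 5) that is both bright and moving stands out
color = np.zeros((8, 8))
brightness = np.zeros((8, 8))
motion = np.zeros((8, 8))
brightness[2, 5] = 1.0
motion[2, 5] = 1.0
sal = mixture_saliency(color, brightness, motion)
peak = np.unravel_index(np.argmax(sal), sal.shape)  # most salient pixel
```

The peak of the fused map then seeds the tracker (or, in the paper's scheme, supplies candidate regions for ASVM classification).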

  12. Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator

    NASA Astrophysics Data System (ADS)

    Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi

    Human motor control is achieved by appropriate motor commands generated by the central nervous system. Visual target tracking tests are an effective method for analyzing human motor function. In a previous simulation study, we examined the possibility of improving hand movement in visual target tracking with an additional assistant force. Here, we propose a method for compensating human hand movement in visual target tracking by adding an assistant force, and investigate its effectiveness in an experiment with four healthy adults. The proposed compensator improved the reaction time, position error, and velocity variability of the hand. The model-based compensator is constructed from each subject's visual target tracking measurements, so the structure of the compensator reflects each subject's hand movement properties. The method therefore has the potential to accommodate the individual characteristics of patients with movement disorders caused by brain dysfunction.

  13. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level consistent with the theory of human cognition for visually identifying single and multiple attentive targets, owing especially to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.

  14. Inferring difficulty: Flexibility in the real-time processing of disfluency

    PubMed Central

    Heller, Daphna; Arnold, Jennifer E.; Klein, Natalie M.; Tanenhaus, Michael K.

    2015-01-01

    Upon hearing a disfluent referring expression, listeners expect the speaker to refer to an object that is previously-unmentioned, an object that does not have a straightforward label, or an object that requires a longer description. Two visual-world eye-tracking experiments examined whether listeners directly associate disfluency with these properties of objects, or whether disfluency attribution is more flexible and involves situation-specific inferences. Since in natural situations reference to objects that do not have a straightforward label or that require a longer description is correlated with both production difficulty and with disfluency, we used a mini artificial lexicon to dissociate difficulty from these properties, building on the fact that recently-learned names take longer to produce than existing words in one’s mental lexicon. The results demonstrate that disfluency attribution involves situation-specific inferences; we propose that in new situations listeners spontaneously infer what may cause production difficulty. However, the results show that these situation-specific inferences are limited in scope: listeners assessed difficulty relative to their own experience with the artificial names, and did not adapt to the assumed knowledge of the speaker. PMID:26677642

  15. Parallel computation of level set method for 500 Hz visual servo control

    NASA Astrophysics Data System (ADS)

    Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi

    2008-11-01

    We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). This system keeps a single microorganism in the middle of the visual field under a microscope by visually servoing an automated stage. We propose a new energy function for the level set method that constrains the amount of light intensity inside the detected object contour, so as to control the number of detected objects. The algorithm is implemented on the CPV system, with a computation time of approximately 2 ms per frame. We demonstrate a tracking experiment of about 25 s and show that a single paramecium remains tracked even when other paramecia enter the visual field and contact the tracked paramecium.
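The visual-servo loop itself — measure the tracked organism's centroid, command the stage to re-center it — reduces to a proportional controller. This sketch uses an assumed gain and image geometry, not the CPV system's actual parameters:

```python
def servo_step(centroid, field_center, gain=0.5):
    """One proportional visual-servo update: return a stage command (dx, dy)
    that drives the tracked object's centroid toward the field center."""
    ex = field_center[0] - centroid[0]
    ey = field_center[1] - centroid[1]
    return gain * ex, gain * ey

# simulate: the object is fixed in the world, so each stage move shifts
# its apparent centroid in the image by the commanded amount
cx, cy = 80.0, 30.0            # object centroid in image coordinates
center = (64.0, 64.0)          # middle of a 128x128 visual field
for _ in range(20):
    dx, dy = servo_step((cx, cy), center)
    cx, cy = cx + dx, cy + dy  # stage moves; image centroid follows
```

With gain 0.5 the centering error halves each frame; at 500 Hz the residual error is negligible within tens of milliseconds, which is why such systems can hold a swimming cell in frame.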

  16. Attitude tracking control of flexible spacecraft with large amplitude slosh

    NASA Astrophysics Data System (ADS)

    Deng, Mingle; Yue, Baozeng

    2017-12-01

    This paper is focused on attitude tracking control of a spacecraft that is equipped with a flexible appendage and a partially filled liquid propellant tank. Large-amplitude liquid slosh is included by using a moving pulsating ball model, further improved to estimate the settling location of liquid in a microgravity or zero-g environment. The flexible appendage is modelled as a three-dimensional Bernoulli-Euler beam, and the assumed modes method is employed. A hybrid controller that combines sliding mode control with an adaptive algorithm is designed for the spacecraft to perform attitude tracking, and is proven asymptotically stable. A nonlinear model for the overall coupled system, including spacecraft attitude dynamics, liquid slosh, structural vibration and control action, is established. Numerical simulation results are presented to show the dynamic behavior of the coupled system and to verify the effectiveness of the control approach when the spacecraft undergoes the disturbance produced by large-amplitude slosh and appendage vibration. Lastly, the designed adaptive algorithm is found to be effective in improving the precision of attitude tracking.
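As a sketch of the sliding-mode idea — a single-axis, unit-inertia toy model with a bounded disturbance, not the paper's coupled slosh/flexible-appendage dynamics — a saturated switching law drives the tracking error onto the surface s = ė + λe and keeps it there despite the disturbance:

```python
import math

def smc_torque(theta, omega, theta_ref, omega_ref, lam=2.0, k=5.0, eps=0.05):
    """Sliding-mode control for a single-axis attitude error.

    s = de + lam*e defines the sliding surface; the switching gain k must
    exceed the disturbance bound, and the boundary layer eps replaces the
    hard sign() to limit chattering. Unit inertia is assumed.
    """
    e, de = theta - theta_ref, omega - omega_ref
    s = de + lam * e
    sat = max(-1.0, min(1.0, s / eps))   # saturated switching term
    return -k * sat - lam * de

# simulate a unit-inertia plant with a slowly varying bounded disturbance
theta, omega, dt = 0.5, 0.0, 0.001
for i in range(20000):
    u = smc_torque(theta, omega, 0.0, 0.0)
    accel = u + 0.3 * math.sin(0.01 * i)   # disturbance, |d| <= 0.3 < k
    omega += accel * dt
    theta += omega * dt
```

On the surface the error obeys ė = −λe, so it decays exponentially; the residual error is confined to a small set determined by the boundary layer width and disturbance bound.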

  17. Ocular dynamics and visual tracking performance after Q-switched laser exposure

    NASA Astrophysics Data System (ADS)

    Zwick, Harry; Stuck, Bruce E.; Lund, David J.; Nawim, Maqsood

    2001-05-01

    In previous investigations of Q-switched laser retinal exposure in awake, task-oriented non-human primates (NHPs), the threshold for retinal damage occurred well below the threshold for permanent visual function loss. The visual function measures used in those studies were visual acuity and contrast sensitivity. In the present study, we examine the same relationship for Q-switched laser exposure using a visual performance task that depends more on parafoveal than foveal retina. NHPs were trained on a visual pursuit motor tracking task that required keeping a small HeNe laser spot (0.3 degrees) centered in a slowly moving (0.5 deg/sec) annulus. When NHPs reliably produced visual target tracking efficiencies > 80%, single Q-switched laser exposures (7 nsec) were made coaxially with the line of sight of the moving target. An infrared camera imaged the pupil during exposure to obtain the pupillary response to the laser flash. Retinal images were obtained with a scanning laser ophthalmoscope 3 days post exposure under ketamine and Nembutal anesthesia. Q-switched visible laser exposures at twice the damage threshold produced small (about 50 µm) retinal lesions temporal to the fovea; deficits in NHP visual pursuit tracking were transient, demonstrating full recovery to baseline within a single tracking session. Post-exposure analysis of the pupillary response demonstrated that the exposure flash entered the pupil, followed by a 90 msec refractory period and then a 12% pupillary contraction within 1.5 sec of the onset of laser exposure. At 6 times the morphological damage threshold for 532 nm Q-switched exposure, longer-term losses in NHP pursuit tracking performance were observed. In summary, Q-switched laser exposure appears to have a higher threshold for permanent visual performance loss than for threshold retinal injury. Mechanisms of neural plasticity within the retina and at higher visual brain centers may mediate this recovery.

  18. Studying visual attention using the multiple object tracking paradigm: A tutorial review.

    PubMed

    Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus

    2017-07-01

    Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. One reason is that the attentional processes studied with the multiple object tracking paradigm appear to match the attentional processing required during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed in the almost 30 years since its first report, and a large body of research has been conducted to test them. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field with an overview of the multiple object tracking paradigm, its basic manipulations, and links to other paradigms investigating visual attention and working memory. Further, we review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.
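A typical multiple object tracking trial — identical items, a cued subset of targets, continuous motion — can be generated in a few lines. Display size, speed, and object counts below are arbitrary illustrative choices:

```python
import random

def mot_trial(n_objects=8, n_targets=4, steps=200, size=400.0, speed=3.0,
              seed=1):
    """Per-frame positions of identical dots bouncing in a square display;
    the first n_targets indices are the cued targets to be tracked."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, size), rng.uniform(0, size)]
           for _ in range(n_objects)]
    vel = [[rng.choice((-1, 1)) * speed, rng.choice((-1, 1)) * speed]
           for _ in range(n_objects)]
    frames = []
    for _ in range(steps):
        for p, v in zip(pos, vel):
            for a in (0, 1):                      # x then y axis
                p[a] += v[a]
                if not 0.0 <= p[a] <= size:       # bounce off the edge
                    v[a] = -v[a]
                    p[a] = min(max(p[a], 0.0), size)
        frames.append([tuple(p) for p in pos])
    return frames, list(range(n_targets))

frames, targets = mot_trial()
```

Manipulating speed, object count, or trajectory crossings in such a generator is how the basic variables of the paradigm (load, spacing, crowding) are operationalized.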

  19. Before your very eyes: the value and limitations of eye tracking in medical education.

    PubMed

    Kok, Ellen M; Jarodzka, Halszka

    2017-01-01

    Medicine is a highly visual discipline. Physicians from many specialties constantly use visual information in diagnosis and treatment. However, they are often unable to explain how they use this information. Consequently, it is unclear how to train medical students in this visual processing. Eye tracking is a research technique that may offer answers to these open questions, as it enables researchers to investigate such visual processes directly by measuring eye movements. This may help researchers understand the processes that support or hinder a particular learning outcome. In this article, we clarify the value and limitations of eye tracking for medical education researchers. For example, eye tracking can clarify how experience with medical images mediates diagnostic performance and how students engage with learning materials. Furthermore, eye tracking can also be used directly for training purposes by displaying eye movements of experts in medical images. Eye movements reflect cognitive processes, but cognitive processes cannot be directly inferred from eye-tracking data. In order to interpret eye-tracking data properly, theoretical models must always be the basis for designing experiments as well as for analysing and interpreting eye-tracking data. The interpretation of eye-tracking data is further supported by sound experimental design and methodological triangulation.

  20. Visual Contrast Sensitivity Functions Obtained from Untrained Observers Using Tracking and Staircase Procedures. Final Report.

    ERIC Educational Resources Information Center

    Geri, George A.; Hubbard, David C.

    Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…

  1. Dynamics of vehicles in variable velocity runs over non-homogeneous flexible track and foundation with two point input models

    NASA Astrophysics Data System (ADS)

    Yadav, D.; Upadhyay, H. C.

    1992-07-01

    Vehicles obtain track-induced input through the wheels, which commonly number more than one. Analysis available for the vehicle response in a variable velocity run on a non-homogeneously profiled flexible track supported by compliant inertial foundation is for a linear heave model having a single ground input. This analysis is being extended to two point input models with heave-pitch and heave-roll degrees of freedom. Closed form expressions have been developed for the system response statistics. Results are presented for a railway coach and track/foundation problem, and the performances of heave, heave-pitch and heave-roll models have been compared. The three models are found to agree in describing the track response. However, the vehicle sprung mass behaviour is predicted to be different by these models, indicating the strong effect of coupling on the vehicle vibration.

  2. Reconstructing the behavior of walking fruit flies

    NASA Astrophysics Data System (ADS)

    Berman, Gordon; Bialek, William; Shaevitz, Joshua

    2010-03-01

    Over the past century, the fruit fly Drosophila melanogaster has arisen as almost a lingua franca in the study of animal behavior, having been utilized to study questions in fields as diverse as sleep deprivation, aging, and drug abuse, amongst many others. Accordingly, much is known about what can be done to manipulate these organisms genetically, behaviorally, and physiologically. Most of the behavioral work on this system to this point has been experiments where the flies in question have been given a choice between some discrete set of pre-defined behaviors. Our aim, however, is simply to spend some time with a cadre of flies, using techniques from nonlinear dynamics, statistical physics, and machine learning in an attempt to reconstruct and gain understanding into their behavior. More specifically, we use a multi-camera set-up combined with a motion tracking stage in order to obtain long time-series of walking fruit flies moving about a glass plate. This experimental system serves as a test-bed for analytical, statistical, and computational techniques for studying animal behavior. In particular, we attempt to reconstruct the natural modes of behavior for a fruit fly through a data-driven approach in a manner inspired by recent work in C. elegans and cockroaches.

  3. Electrodeposited Ni nanowires-track etched P.E.T. composites as selective solar absorbers

    NASA Astrophysics Data System (ADS)

    Lukhwa, R.; Sone, B.; Kotsedi, L.; Madjoe, R.; Maaza, M.

    2018-05-01

    This contribution reports on the structural, optical and morphological properties of nanostructured flexible solar-thermal selective absorber composites for low temperature applications. The candidate material consists of electrodeposited nickel nano-cylinders embedded in a track-etched polyethylene terephthalate (PET) host membrane with pore sizes ranging between 0.3-0.8 µm, supported by a conductive nickel thin film of about 0.5 µm. The PET membranes were irradiated with 11 MeV/u highly charged xenon (Xe) ions at normal incidence. The tubular, metallic structure of the nickel nano-cylinders within the insulating polymeric host forms a typical ceramic-metal nano-composite, or "cermet". The material was characterized by X-ray diffraction (XRD) to determine the preferred crystallographic structure and grain size; scanning electron microscopy (SEM) to determine surface morphology, particle size, and the distribution of structures on the substrate surface; atomic force microscopy (AFM) to characterize surface roughness, morphology, and film thickness; and UV-Vis-NIR spectrophotometry to measure reflectance and determine solar absorption.

  4. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. 
Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.

  5. Nested Tracking Graphs

    DOE PAGES

    Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...

    2017-06-01

    Tracking graphs are a well-established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.
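    The overlap-based linking that underlies a tracking graph can be sketched in a few lines. The following illustrative Python (the 1-D field representation and all names are our own simplification, not the authors' code) extracts superlevel-set components per timestep and links components in consecutive timesteps whose index ranges overlap; appearances, disappearances, merges, and splits fall out of the link structure.

```python
def superlevel_components(field, threshold):
    """Maximal runs of indices where the 1-D field meets the threshold."""
    comps, run = [], None
    for i, v in enumerate(field):
        if v >= threshold:
            run = [i, i] if run is None else [run[0], i]
        elif run is not None:
            comps.append(tuple(run))
            run = None
    if run is not None:
        comps.append(tuple(run))
    return comps

def track(fields, threshold):
    """Tracking-graph edges: spatial overlap between consecutive timesteps."""
    graph = []
    prev = superlevel_components(fields[0], threshold)
    for t in range(1, len(fields)):
        cur = superlevel_components(fields[t], threshold)
        for a, (s0, e0) in enumerate(prev):
            for b, (s1, e1) in enumerate(cur):
                if s0 <= e1 and s1 <= e0:           # index ranges overlap
                    graph.append((t - 1, a, t, b))  # component a -> component b
        prev = cur
    return graph
```

    Running this for several thresholds and relating the per-level components through their nesting (a higher-level component always lies inside a lower-level one) yields the nested structure the paper visualizes.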

  6. Emerging applications of eye-tracking technology in dermatology.

    PubMed

    John, Kevin K; Jensen, Jakob D; King, Andy J; Pokharel, Manusheela; Grossman, Douglas

    2018-04-06

    Eye-tracking technology has been used within a multitude of disciplines to provide data linking eye movements to visual processing of various stimuli (i.e., x-rays, situational positioning, printed information, and warnings). Despite the benefits provided by eye-tracking in allowing for the identification and quantification of visual attention, the discipline of dermatology has yet to see broad application of the technology. Notwithstanding dermatologists' heavy reliance upon visual patterns and cues to discriminate between benign and atypical nevi, literature that applies eye-tracking to the study of dermatology is sparse; and literature specific to patient-initiated behaviors, such as skin self-examination (SSE), is largely non-existent. The current article provides a review of eye-tracking research in various medical fields, culminating in a discussion of current applications and advantages of eye-tracking for dermatology research. Copyright © 2018 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.

  7. Anti-disturbance rapid vibration suppression of the flexible aerial refueling hose

    NASA Astrophysics Data System (ADS)

    Su, Zikang; Wang, Honglun; Li, Na

    2018-05-01

    As an extremely dangerous phenomenon in autonomous aerial refueling (AAR), the flexible refueling hose vibration caused by the receiver aircraft's excessive closure speed should be suppressed once it appears. This paper proposes a permanent magnet synchronous motor (PMSM) based refueling hose servo take-up system for vibration suppression of the flexible refueling hose. A rapid back-stepping based anti-disturbance nonsingular fast terminal sliding mode (NFTSM) control scheme with a specially established finite-time convergence NFTSM observer is proposed for the PMSM-based hose servo take-up system under uncertainties and disturbances. The unmeasured load torque and other disturbances in the PMSM system are reconstructed by the NFTSM observer and compensated for during controller design. Then, with the back-stepping technique, a rapid anti-disturbance NFTSM controller is proposed for PMSM angular tracking to improve the tracking error convergence speed and tracking precision. The proposed vibration suppression scheme is then applied to the PMSM-based hose servo take-up system for refueling hose vibration suppression in AAR. Simulation results show that the proposed scheme can suppress the hose vibration rapidly and accurately even when the system is exposed to strong uncertainties and probe position disturbances, and that it is more competitive in tracking accuracy, tracking error convergence speed and robustness.

  8. Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis

    ERIC Educational Resources Information Center

    Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying

    2012-01-01

    This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…

  9. Frequency encoded auditory display of the critical tracking task

    NASA Technical Reports Server (NTRS)

    Stevenson, J.

    1984-01-01

    The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. At asymptote, performance on the critical tracking task with the combined display was slightly, but significantly, better than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increased with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.
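    The display mapping described in the abstract (vertical error encoded as log frequency over a six-octave range centered at 1 kHz) can be made concrete. A minimal sketch, assuming a normalized error in [-1, 1] spread symmetrically across the range; the exact scaling used in the study is an assumption here:

```python
CENTER_HZ = 1000.0  # center point, marked in the display by a 20-dB amplitude notch
OCTAVES = 6.0       # total range: +/- 3 octaves around the center (assumption)

def error_to_frequency(error):
    """Map a normalized vertical error in [-1, 1] to a tone frequency in Hz.

    The error is encoded as log frequency: equal error steps move the
    tone by equal numbers of octaves, so zero error sits at 1 kHz.
    """
    error = max(-1.0, min(1.0, error))        # clamp out-of-range errors
    return CENTER_HZ * 2.0 ** (error * OCTAVES / 2.0)
```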

  10. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
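    The two-stage idea (register consecutive frames to estimate observer motion, then smooth the estimates with a Kalman filter before subtracting them) can be sketched as follows. This is an illustrative 1-D simplification with hypothetical names, not the authors' implementation, which works on full images and a richer motion model:

```python
def estimate_shift(prev, cur, max_shift=5):
    """Integer shift that best aligns cur to prev (1-D cross-correlation)."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(prev[i] * cur[i + s]
                    for i in range(len(prev))
                    if 0 <= i + s < len(cur))
        if score > best_score:
            best, best_score = s, score
    return best

class ScalarKalman:
    """Random-walk Kalman filter smoothing the noisy shift estimates."""
    def __init__(self, q=0.1, r=1.0):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r
    def update(self, z):
        self.p += self.q                 # predict: process noise grows variance
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with the measurement
        self.p *= (1.0 - k)
        return self.x
```

    Subtracting the smoothed observer motion from the raw image displacement leaves the residual motion attributable to the target.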

  11. Storyline Visualizations of Eye Tracking of Movie Viewing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balint, John T.; Arendt, Dustin L.; Blaha, Leslie M.

    Storyline visualizations offer an approach that promises to capture the spatio-temporal characteristics of individual observers and simultaneously illustrate emerging group behaviors. We develop a visual analytics approach to parsing, aligning, and clustering fixation sequences from eye tracking data. Visualization of the results captures the similarities and differences across a group of observers performing a common task. We apply our storyline approach to visualize gaze patterns of people watching dynamic movie clips. Storylines mitigate some of the shortcomings of existent spatio-temporal visualization techniques and, importantly, continue to highlight individual observer behavioral dynamics.
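    The grouping step behind such storylines can be illustrated simply: assign each observer a fixated region per time bin, then group observers that share a region in each bin; the storyline layout draws observers as lines that converge and diverge with these groups. A sketch with hypothetical data shapes, not the authors' parsing/alignment pipeline:

```python
from collections import defaultdict

def storyline_clusters(sequences):
    """Group observers by fixated region in each time bin.

    sequences: {observer: [region label per time bin]}, all the same length.
    Returns one {region: [observers]} grouping per bin -- the input a
    storyline layout renders as converging/diverging lines.
    """
    n_bins = len(next(iter(sequences.values())))
    timeline = []
    for t in range(n_bins):
        groups = defaultdict(list)
        for obs, seq in sorted(sequences.items()):
            groups[seq[t]].append(obs)
        timeline.append(dict(groups))
    return timeline
```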

  12. Secondary visual workload capability with primary visual and kinesthetic-tactual displays

    NASA Technical Reports Server (NTRS)

    Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.

    1978-01-01

    Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.

  13. CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.

    PubMed

    Bray, Mark-Anthony; Carpenter, Anne E

    2015-11-04

    Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
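    A "simple graph-based measure" for highlighting possible tracking artifacts might, for example, flag links in a track whose frame-to-frame displacement is implausibly large. A hedged sketch of that kind of check, not CellProfiler Tracer's actual implementation:

```python
def flag_suspect_links(track, max_step):
    """Indices of frames whose displacement from the previous frame
    exceeds max_step, given a track as a list of (x, y) centroids.
    Large jumps often indicate identity swaps or segmentation errors."""
    flags = []
    for i in range(1, len(track)):
        (x0, y0), (x1, y1) = track[i - 1], track[i]
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > max_step:
            flags.append(i)
    return flags
```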

  14. Optimal Output Trajectory Redesign for Invertible Systems

    NASA Technical Reports Server (NTRS)

    Devasia, S.

    1996-01-01

    Given a desired output trajectory, inversion-based techniques find input-state trajectories required to exactly track the output. These inversion-based techniques have been successfully applied to the endpoint tracking control of multijoint flexible manipulators and to aircraft control. The specified output trajectory uniquely determines the required input and state trajectories that are found through inversion. These input-state trajectories exactly track the desired output; however, they might not meet acceptable performance requirements. For example, during slewing maneuvers of flexible structures, the structural deformations, which depend on the required state trajectories, may be unacceptably large. Further, the required inputs might cause actuator saturation during an exact tracking maneuver, for example, in the flight control of conventional takeoff and landing aircraft. In such situations, a compromise is desired between the tracking requirement and other goals such as reduction of internal vibrations and prevention of actuator saturation; the desired output trajectory needs to be redesigned. Here, we pose the trajectory redesign problem as an optimization of a general quadratic cost function and solve it in the context of linear systems. The solution is obtained as an off-line prefilter of the desired output trajectory. An advantage of our technique is that the prefilter is independent of the particular trajectory. The prefilter can therefore be precomputed, which is a major advantage over other optimization approaches. Previous works have addressed the issue of preshaping inputs to minimize residual and in-maneuver vibrations for flexible structures; the command preshaping is computed off-line. Minimization of optimal quadratic cost functions has also been previously used to preshape command inputs for disturbance rejection. All of these approaches are applicable when the inputs to the system are known a priori. 
Typically, outputs (not inputs) are specified in tracking problems, and hence the input trajectories have to be computed. The inputs to the system are, however, difficult to determine for non-minimum phase systems like flexible structures. One approach to solve this problem is to (1) choose a tracking controller (the desired output trajectory is now an input to the closed-loop system) and (2) redesign this input to the closed-loop system. Thus we effectively perform output redesign. These redesigns are, however, dependent on the choice of the tracking controller. Thus the controller optimization and trajectory redesign problems become coupled; this coupled optimization is still an open problem. In contrast, we decouple the trajectory redesign problem from the choice of feedback-based tracking controller. It is noted that our approach remains valid when a particular tracking controller is chosen. In addition, the formulation of our problem not only allows for the minimization of residual vibration as in available techniques but also allows for the optimal reduction of vibrations during the maneuver, e.g., in the attitude control of flexible spacecraft. We begin by formulating the optimal output trajectory redesign problem and then solve it in the context of general linear systems. This theory is then applied to an example flexible structure, and simulation results are provided.

  15. MARVIN: a medical research application framework based on open source software.

    PubMed

    Rudolph, Tobias; Puls, Marc; Anderegg, Christoph; Ebert, Lars; Broehan, Martina; Rudin, Adrian; Kowal, Jens

    2008-08-01

    This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together in order to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since it uses only highly portable libraries, MARVIN applications run on Unix/Linux, Mac OS X and Microsoft Windows.

  16. Introduction to Vector Field Visualization

    NASA Technical Reports Server (NTRS)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as the study of flows around an aircraft, the blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC that support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces; civil engineering and geomechanics of roads and bridges; and the study of neural pathways via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph-based methods, deformation-based methods, and streamline-based methods. Practical examples are used when presenting the methods, and applications from some case studies are used as part of the motivation.
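    Particle integration, the first algorithm mentioned, is typically done with a fourth-order Runge-Kutta step. A minimal sketch for a steady 2-D field (names are illustrative; the tutorial covers time-dependent fields and seeding strategies beyond this):

```python
def advect(velocity, seed, dt, steps):
    """Trace a particle through a steady 2-D vector field with RK4.

    velocity: function (x, y) -> (u, v); seed: (x, y) starting point.
    Returns the particle path as a list of (x, y) positions.
    """
    x, y = seed
    path = [(x, y)]
    for _ in range(steps):
        k1 = velocity(x, y)
        k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
        k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
        k4 = velocity(x + dt * k3[0], y + dt * k3[1])
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        path.append((x, y))
    return path
```

    In a pure rotation field a particle seeded on the unit circle should stay on it, which makes a convenient sanity check of the integrator.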

  17. Muscle Strength and Flexibility of Judokas with and without Visual Impairments

    ERIC Educational Resources Information Center

    Karakoc, Onder

    2016-01-01

    The aim of this study was to examine muscle strength and flexibility of judoka with and without visual impairments. A total of 32 male national judoka volunteered to participate in this study. There were 20 male judoka without visual impairments (mean ± SD; age: 19.20 ± 5.76 years, body weight: 66.45 ± 11.09 kg, height: 169.60 ± 7.98 cm, sport…

  18. Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.

    PubMed

    Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong

    2016-08-01

    The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.

  19. Rover-based visual target tracking validation and mission infusion

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Steele, Robert D.; Ansar, Adnan I.; Ali, Khaled; Nesnas, Issa

    2005-01-01

    The Mars Exploration Rovers (MER'03), Spirit and Opportunity, represent the state of the art in rover operations on Mars. This paper presents validation experiments of different visual tracking algorithms using the rover's navigation camera.

  20. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements for difficult task dynamics.

  1. Modelling of structural flexibility in multibody railroad vehicle systems

    NASA Astrophysics Data System (ADS)

    Escalona, José L.; Sugiyama, Hiroyuki; Shabana, Ahmed A.

    2013-07-01

    This paper presents a review of recent research investigations on the computer modelling of flexible bodies in railroad vehicle systems. The paper also discusses the influence of the structural flexibility of various components, including the wheelset, the truck frames, tracks, pantograph/catenary systems, and car bodies, on the dynamics of railroad vehicles. While several formulations and computer techniques for modelling structural flexibility are discussed in this paper, special attention is paid to the floating frame of reference formulation, which is widely used and leads to reduced-order finite-element models for flexible bodies by employing component mode synthesis techniques. Other formulations and numerical methods, such as semi-analytical approaches, the absolute nodal coordinate formulation, the finite-segment method, the boundary element method, and the discrete element method, are also discussed. This investigation is motivated by the fact that structural flexibility can have a significant effect on the overall dynamics of railroad vehicles, ride comfort, vibration suppression and noise level reduction, lateral stability, track response to vehicle forces, stress analysis, wheel-rail contact forces, wear and crashworthiness.

  2. A Framework for People Re-Identification in Multi-Camera Surveillance Systems

    ERIC Educational Resources Information Center

    Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud

    2017-01-01

    People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…

  3. Learning from Lessons: Teachers' Insights and Intended Actions Arising from Their Learning about Student Thinking

    ERIC Educational Resources Information Center

    Roche, Anne; Clarke, Doug; Clarke, David; Chan, Man Ching Esther

    2016-01-01

    A central premise of this project is that teachers learn from the act of teaching a lesson and that this learning is evident in the planning and teaching of a subsequent lesson. We are studying the knowledge construction of mathematics teachers utilising multi-camera research techniques during lesson planning, classroom interactions and…

  4. The California All-sky Meteor Surveillance (CAMS) System

    NASA Astrophysics Data System (ADS)

    Gural, P. S.

    2011-01-01

    A unique next generation multi-camera, multi-site video meteor system is being developed and deployed in California to provide high accuracy orbits of simultaneously captured meteors. Included herein is a description of the goals, concept of operations, hardware, and software development progress. An appendix contains a meteor camera performance trade study made for video systems circa 2010.

  5. Evaluation of Kapton pyrolysis, arc tracking, and flashover on SiO(x)-coated polyimide insulated samples of flat flexible current carriers for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Mundson, Chris

    1993-01-01

    Kapton polyimide wiring insulation was found to be vulnerable to pyrolization, arc tracking, and flashover when momentary short-circuit arcs occurred on aircraft power systems. Short-circuit arcs between wire pairs can pyrolize the polyimide, resulting in a conductive char between conductors that may sustain the arc (arc tracking). Furthermore, the arc tracking may spread (flashover) to other wire pairs within a wire bundle. Kapton polyimide will also be used as the insulating material for the flexible current carrier (FCC) of Space Station Freedom (SSF). The FCC, with conductors in a planar geometric layout as opposed to bundles, is known to sustain arc tracking at proposed SSF power levels. Tests were conducted in a vacuum bell jar designed for polyimide pyrolysis, arc tracking, and flashover studies on samples of SSF's FCC. Results are reported concerning the minimum power level needed to sustain arc tracking and the FCC's susceptibility to flashover. The FCC arc tracking tests indicate that only 22 volt-amperes were necessary to sustain arc tracking (the proposed SSF power level is 400 watts). The FCC flashover studies indicate that a flashover event is highly unlikely.

  6. The contributions of visual and central attention to visual working memory.

    PubMed

    Souza, Alessandra S; Oberauer, Klaus

    2017-10-01

    We investigated the role of two kinds of attention-visual and central attention-for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention-visual or central-was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

  7. Disturbance observer-based fuzzy control for flexible spacecraft combined attitude & sun tracking system

    NASA Astrophysics Data System (ADS)

    Chak, Yew-Chung; Varatharajoo, Renuganth; Razoumny, Yury

    2017-04-01

    This paper investigates the combined attitude and sun-tracking control problem in the presence of external disturbances and internal disturbances caused by flexible appendages. A new method based on the Pythagorean trigonometric identity is proposed to drive the solar arrays. Using the control input and attitude output, a disturbance observer is developed to estimate the lumped disturbances consisting of the external and internal disturbances, which are then compensated by the disturbance observer-based controller via feed-forward control. The stability analysis demonstrates that the desired attitude trajectories are followed even in the presence of external disturbance and internal flexible modes. The main features of the proposed control scheme are that it can be designed separately and incorporated into the baseline controller to form the observer-based control system, and that the combined attitude and sun-tracking control is achieved without the conventional attitude actuators. The attitude and sun-tracking performance of the proposed strategy is evaluated and validated through numerical simulations. The proposed control solution can serve as a fail-safe measure in case of failure of the conventional attitude actuator, and is triggered by automatic reconfiguration of the attitude control components.

  8. Normal aging delays and compromises early multifocal visual attention during object tracking.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2013-02-01

    Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.

  9. Predictive and tempo-flexible synchronization to a visual metronome in monkeys.

    PubMed

    Takeya, Ryuji; Kameda, Masashi; Patel, Aniruddh D; Tanaka, Masaki

    2017-07-21

    Predictive and tempo-flexible synchronization to an auditory beat is a fundamental component of human music. To date, only certain vocal learning species show this behaviour spontaneously. Prior research training macaques (vocal non-learners) to tap to an auditory or visual metronome found their movements to be largely reactive, not predictive. Does this reflect the lack of capacity for predictive synchronization in monkeys, or lack of motivation to exhibit this behaviour? To discriminate these possibilities, we trained monkeys to make synchronized eye movements to a visual metronome. We found that monkeys could generate predictive saccades synchronized to periodic visual stimuli when an immediate reward was given for every predictive movement. This behaviour generalized to novel tempi, and the monkeys could maintain the tempo internally. Furthermore, monkeys could flexibly switch from predictive to reactive saccades when a reward was given for each reactive response. In contrast, when humans were asked to make a sequence of reactive saccades to a visual metronome, they often unintentionally generated predictive movements. These results suggest that even vocal non-learners may have the capacity for predictive and tempo-flexible synchronization to a beat, but that only certain vocal learning species are intrinsically motivated to do it.

  10. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    ERIC Educational Resources Information Center

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schoner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-word visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that…

  11. Advanced shape tracking to improve flexible endoscopic diagnostics

    NASA Astrophysics Data System (ADS)

    Cao, Caroline G. L.; Wong, Peter Y.; Lilge, Lothar; Gavalis, Robb M.; Xing, Hua; Zamarripa, Nate

    2008-03-01

    Colonoscopy is the gold standard for screening for inflammatory bowel disease and colorectal cancer. Flexible endoscopes are difficult to manipulate, especially in the distensible and tortuous colon, sometimes leading to disorientation during the procedure and missed diagnosis of lesions. Our goal is to design a navigational aid to guide colonoscopies, presenting a three-dimensional representation of the endoscope in real time. Therefore, a flexible sensor that can track the position and shape of the entire length of the endoscope is needed. We describe a novel shape-tracking technology utilizing a single modified optical fiber. By embedding fluorophores in the buffer of the fiber, we demonstrated a relationship between fluorescence intensity and fiber curvature. As much as a 40% increase in fluorescence intensity was achieved when the fiber's local bend radius decreased from 58 mm to 11 mm. This approach allows for the construction of a three-dimensional shape tracker that is small enough to be easily inserted into the biopsy channel of current endoscopes.
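
The two reported calibration points (a 40% rise in fluorescence intensity as the local bend radius tightens from 58 mm to 11 mm) suggest how an intensity reading could be mapped back to local fiber shape. A minimal sketch, assuming a response that is linear in curvature between those two points (an illustrative assumption, not a claim of the paper):

```python
# Illustrative calibration for the fluorescence-based shape sensor:
# map relative intensity to bend curvature by linear interpolation
# between the two reported operating points.
R1, R2 = 0.058, 0.011          # bend radii [m] (58 mm and 11 mm)
K1, K2 = 1.0 / R1, 1.0 / R2    # curvatures [1/m]
I1, I2 = 1.00, 1.40            # relative fluorescence intensities

def curvature_from_intensity(i_rel):
    """Curvature [1/m] from normalized intensity (linear-in-curvature model)."""
    return K1 + (i_rel - I1) * (K2 - K1) / (I2 - I1)

def bend_radius_from_intensity(i_rel):
    """Local bend radius [m] from normalized intensity."""
    return 1.0 / curvature_from_intensity(i_rel)
```

Applying such a mapping per fiber segment would yield a curvature profile along the endoscope, from which a 3D shape can be integrated.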

  12. Modeling and controller design of a 6-DOF precision positioning system

    NASA Astrophysics Data System (ADS)

    Cai, Kunhai; Tian, Yanling; Liu, Xianping; Fatikow, Sergej; Wang, Fujun; Cui, Liangyu; Zhang, Dawei; Shirinzadeh, Bijan

    2018-05-01

    A key hurdle in meeting the needs of micro/nano manipulation in complex cases is the inadequate workspace and flexibility of the operation ends. This paper presents a 6-degree-of-freedom (DOF) serial-parallel precision positioning system, which consists of two compact 3-DOF parallel mechanisms. Each parallel mechanism is driven by three piezoelectric actuators (PEAs) and guided by three symmetric T-shape hinges and three elliptical flexible hinges, respectively. This arrangement extends the workspace and improves the flexibility of the operation ends. The proposed system can be assembled easily, which greatly reduces assembly errors and improves positioning accuracy. In addition, the kinematic and dynamic models of the 6-DOF system are established. Furthermore, in order to reduce the tracking error and improve the positioning accuracy, a Discrete-time Model Predictive Controller (DMPC) is applied as the control method, and its effectiveness is verified. Finally, a tracking experiment is performed to verify the tracking performance of the 6-DOF stage.
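
The abstract does not specify the DMPC formulation; as a generic illustration of discrete-time model predictive control on a positioning axis, here is a minimal unconstrained sketch for a single axis modeled as a double integrator. The model, horizon, and weights are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Single-axis stage modeled (illustratively) as a double integrator:
# state x = [position, velocity], input u = commanded acceleration.
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])

N = 40            # prediction horizon (steps)
r_weight = 1e-3   # penalty on control effort

# Stacked prediction matrices: X_future = F @ x0 + G @ U.
n, m = 2, 1
F = np.zeros((N * n, n))
G = np.zeros((N * n, N * m))
Ak = np.eye(n)
for i in range(N):
    Ak = Ak @ A                       # A^(i+1)
    F[i * n:(i + 1) * n, :] = Ak
    for j in range(i + 1):
        G[i * n:(i + 1) * n, j * m:(j + 1) * m] = (
            np.linalg.matrix_power(A, i - j) @ B)

# Select the position component of every predicted state.
C = np.kron(np.eye(N), np.array([[1.0, 0.0]]))

def mpc_step(x0, ref):
    """Solve the unconstrained horizon problem; return the first input."""
    H = C @ G
    e = np.full(N, ref) - C @ F @ x0
    # Ridge-regularized least squares over the whole input sequence.
    U = np.linalg.solve(H.T @ H + r_weight * np.eye(N * m), H.T @ e)
    return U[0]

# Closed-loop simulation of a unit step in the position reference:
# only the first planned input is applied, then the plan is recomputed.
x = np.zeros(2)
for _ in range(200):
    u = mpc_step(x, 1.0)
    x = A @ x + B.ravel() * u
```

A practical DMPC would add input/state constraints and solve a QP each step; the receding-horizon structure is the same.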

  13. Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.

    PubMed

    Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W

    2016-12-14

    The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. 
Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex.

  14. Electromagnetic tracking of motion in the proximity of computer generated graphical stimuli: a tutorial.

    PubMed

    Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael

    2013-09-01

    Electromagnetic motion-tracking systems have the advantage of capturing the tempo-spatial kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.

  15. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    PubMed

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
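
The role of the elastic net constraint can be illustrated on a toy version of the filter-learning problem: minimizing a least-squares correlation objective plus combined L1/L2 penalties with proximal gradient descent (ISTA) drives the weights of uninformative features to exactly zero. A minimal sketch with synthetic data, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy filter-learning problem: rows of X are vectorized features of
# training samples; y is the desired correlation response.  Only the
# first 8 features are informative; the rest stand in for distractive
# features (e.g. from occlusion or local deformation).
n_samples, n_feats = 200, 64
X = rng.normal(size=(n_samples, n_feats))
w_true = np.zeros(n_feats)
w_true[:8] = 1.0
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

lam1, lam2 = 0.1, 0.1  # elastic net: L1 and L2 penalty weights

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA (proximal gradient) for
#   min_w ||Xw - y||^2 / (2n) + lam2 * ||w||^2 / 2 + lam1 * ||w||_1
L = np.linalg.norm(X, 2) ** 2 / n_samples + lam2  # Lipschitz constant
step = 1.0 / L
w = np.zeros(n_feats)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n_samples + lam2 * w
    w = soft_threshold(w - step * grad, step * lam1)

# The L1 part zeroes out most of the uninformative feature weights,
# while the L2 part keeps the informative ones jointly shrunk and stable.
```

In the paper's setting, the same sparsity mechanism suppresses distractive features inside the correlation filter, which is what sharpens the response peak.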

  16. Hue distinctiveness overrides category in determining performance in multiple object tracking.

    PubMed

    Sun, Mengdan; Zhang, Xuemin; Fan, Lingxia; Hu, Luming

    2018-02-01

    The visual distinctiveness between targets and distractors can significantly facilitate performance in multiple object tracking (MOT), in which color is a feature that has been commonly used. However, the processing of color can be more than "visual." Color is continuous in chromaticity, while it is commonly grouped into discrete categories (e.g., red, green). Evidence from color perception suggests that color categories may have a unique role in visual tasks independent of their chromatic appearance. Previous MOT studies have not examined the effects of chromatic and categorical distinctiveness on tracking separately. The current study aimed to reveal how the chromatic (hue) and categorical distinctiveness of color between the targets and distractors affects tracking performance. With four experiments, we showed that tracking performance was largely facilitated by increasing hue distance between the target set and the distractor set, suggesting that perceptual grouping was formed based on hue distinctiveness to aid tracking. However, we found no categorical color effect: tracking performance did not differ significantly when the targets and distractors were drawn from the same or from different categories. It was concluded that the chromatic distinctiveness of color overrides category in determining tracking performance, suggesting a dominant role of perceptual features in MOT.

  17. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    PubMed

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

    As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images, and their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.

  18. A comprehensive model of the railway wheelset-track interaction in curves

    NASA Astrophysics Data System (ADS)

    Martínez-Casas, José; Di Gialleonardo, Egidio; Bruni, Stefano; Baeza, Luis

    2014-09-01

    Train-track interaction has been extensively studied for at least the last 40 years, leading to modelling approaches that can deal satisfactorily with many dynamic problems arising at the wheel/rail interface. However, the available models usually do not specifically consider the running dynamics of the vehicle in a curve, whereas a number of train-track interaction phenomena are specific to curve negotiation. The aim of this paper is to define a model for a flexible wheelset running on a flexible curved track. The main novelty of this work is to combine a trajectory coordinate set with Eulerian modal coordinates; the former permits curved tracks to be considered, and the latter models the small relative displacements between the trajectory frame and the solid. In order to reduce the computational complexity of the problem, one single flexible wheelset is considered instead of one complete bogie, and suitable forces are prescribed at the primary suspension seats so that the mean values of the creepages and contact forces are consistent with the low-frequency curving dynamics of the complete vehicle. The wheelset model is coupled to a cyclic track model having constant curvature by means of a wheel/rail contact model which accounts for the actual geometry of the contacting profiles and for the nonlinear relationship between creepages and creep forces. The proposed model can be used to analyse a variety of dynamic problems for railway vehicles, including rail corrugation and wheel polygonalisation, squeal noise, and numerical estimation of wheelset service loads. In this paper, simulation results are presented for some selected running conditions to exemplify the application of the model to the study of realistic train-track interaction cases and to point out the importance of the curve negotiation effects specifically addressed in the work.

  19. Psyplot: Visualizing rectangular and triangular Climate Model Data with Python

    NASA Astrophysics Data System (ADS)

    Sommer, Philipp

    2016-04-01

    The development and use of climate models often requires the visualization of geo-referenced data. Creating visualizations should be fast, attractive, flexible, easily applicable and easily reproducible. There is a wide range of software tools available for visualizing raster data, but they often are inaccessible to many users (e.g. because they are difficult to use in a script or have low flexibility). In order to facilitate easy visualization of geo-referenced data, we developed a new framework called "psyplot," which can aid earth system scientists with their daily work. It is purely written in the programming language Python and primarily built upon the python packages matplotlib, cartopy and xray. The package can visualize data stored on the hard disk (e.g. NetCDF, GeoTIFF, or any other file format supported by the xray package), or directly from the memory or Climate Data Operators (CDOs). Furthermore, data can be visualized on a rectangular grid (following or not following the CF Conventions) and on a triangular grid (following the CF or UGRID Conventions). Psyplot visualizes 2D scalar and vector fields, enabling the user to easily manage and format multiple plots at the same time, and to export the plots into all common picture formats and movies covered by the matplotlib package. The package can currently be used in an interactive python session or in python scripts, and will soon be developed for use with a graphical user interface (GUI). Finally, the psyplot framework enables flexible configuration, allows easy integration into other scripts that use matplotlib, and provides a flexible foundation for further development.
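
Psyplot's own API is not shown in the abstract; the kind of workflow it streamlines — rendering a 2D geo-referenced scalar field with matplotlib, one of its underlying packages — can be sketched as follows, with a synthetic temperature field standing in for NetCDF data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")              # render off-screen, no display needed
import matplotlib.pyplot as plt

# Synthetic geo-referenced scalar field on a rectangular lat/lon grid
# (a stand-in for data that would normally come from a NetCDF file).
lon = np.linspace(-180, 180, 73)
lat = np.linspace(-90, 90, 37)
LON, LAT = np.meshgrid(lon, lat)
t2m = 288.0 - 30.0 * np.sin(np.deg2rad(LAT)) ** 2   # toy temperature [K]

fig, ax = plt.subplots(figsize=(6, 3))
mesh = ax.pcolormesh(LON, LAT, t2m, shading="auto", cmap="coolwarm")
fig.colorbar(mesh, ax=ax, label="2 m temperature [K]")
ax.set_xlabel("longitude [deg]")
ax.set_ylabel("latitude [deg]")
fig.savefig("t2m_map.png", dpi=100)
```

A framework like psyplot wraps steps of this kind (file loading, grid decoding, map projection via cartopy, formatting of many plots at once) behind a much shorter interface.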

  20. Experimental study of adaptive pointing and tracking for large flexible space structures

    NASA Technical Reports Server (NTRS)

    Boussalis, D.; Bayard, D. S.; Ih, C.; Wang, S. J.; Ahmed, A.

    1991-01-01

    This paper describes an experimental study of adaptive pointing and tracking control for flexible spacecraft conducted on a complex ground experiment facility. The algorithm used in this study is based on a multivariable direct model reference adaptive control law. Several experimental validation studies were performed earlier using this algorithm for vibration damping and robust regulation, with excellent results. The current work extends previous studies by addressing the pointing and tracking problem. As is consistent with an adaptive control framework, the plant is assumed to be poorly known to the extent that only system level knowledge of its dynamics is available. Explicit bounds on the steady-state pointing error are derived as functions of the adaptive controller design parameters. It is shown that good tracking performance can be achieved in an experimental setting by adjusting adaptive controller design weightings according to the guidelines indicated by the analytical expressions for the error.

  1. Flexible Contrast Gain Control in the Right Hemisphere

    ERIC Educational Resources Information Center

    Okubo, Matia; Nicholls, Michael E. R.

    2005-01-01

    This study investigates whether the right hemisphere has more flexible contrast gain control settings for the identification of spatial frequency. Right-handed participants identified 1 and 9 cycles per degree sinusoidal gratings presented either to the left visual field-right hemisphere (LVF-RH) or the right visual field-left hemisphere (RVF-LH).…

  2. A Three Dimensional Kinematic and Kinetic Study of the Golf Swing

    PubMed Central

    Nesbit, Steven M.

    2005-01-01

    This paper discusses the three-dimensional kinematics and kinetics of a golf swing as performed by 85 amateur subjects (84 male, one female) of various skill levels. The analysis was performed using a variable full-body computer model of a human coupled with a flexible model of a golf club. Data to drive the model was obtained from subject swings recorded using a multi-camera motion analysis system. Model output included club trajectories, golfer/club interaction forces and torques, work and power, and club deflections. These data formed the basis for a statistical analysis of all subjects, and a detailed analysis and comparison of the swing characteristics of four of the subjects. The analysis generated much new data concerning the mechanics of the golf swing. It revealed that a golf swing is a highly coordinated and individual motion and subject-to-subject variations were significant. The study highlighted the importance of the wrists in generating club head velocity and orienting the club face. The trajectory of the hands and the ability to do work were the factors most closely related to skill level. Key Points: Full-body model of the golf swing. Mechanical description of the golf swing. Statistical analysis of golf swing mechanics. Comparisons of subject swing mechanics. PMID:24627665

  3. A three dimensional kinematic and kinetic study of the golf swing.

    PubMed

    Nesbit, Steven M

    2005-12-01

    This paper discusses the three-dimensional kinematics and kinetics of a golf swing as performed by 85 amateur subjects (84 male, one female) of various skill levels. The analysis was performed using a variable full-body computer model of a human coupled with a flexible model of a golf club. Data to drive the model was obtained from subject swings recorded using a multi-camera motion analysis system. Model output included club trajectories, golfer/club interaction forces and torques, work and power, and club deflections. These data formed the basis for a statistical analysis of all subjects, and a detailed analysis and comparison of the swing characteristics of four of the subjects. The analysis generated much new data concerning the mechanics of the golf swing. It revealed that a golf swing is a highly coordinated and individual motion and subject-to-subject variations were significant. The study highlighted the importance of the wrists in generating club head velocity and orienting the club face. The trajectory of the hands and the ability to do work were the factors most closely related to skill level. Key Points: Full-body model of the golf swing. Mechanical description of the golf swing. Statistical analysis of golf swing mechanics. Comparisons of subject swing mechanics.

  4. Superconducting Multilayer High-Density Flexible Printed Circuit Board for Very High Thermal Resistance Interconnections

    NASA Astrophysics Data System (ADS)

    de la Broïse, Xavier; Le Coguie, Alain; Sauvageot, Jean-Luc; Pigot, Claude; Coppolani, Xavier; Moreau, Vincent; d'Hollosy, Samuel; Knarosovski, Timur; Engel, Andreas

    2018-05-01

    We have successively developed two superconducting flexible PCBs for cryogenic applications. The first one is monolayer, includes 552 tracks (10 µm wide, 20 µm spacing), and receives 24 wire-bonded integrated circuits. The second one is multilayer, with one track layer between two shielding layers interconnected by microvias, includes 37 tracks, and can be interconnected at both ends by wire bonding or by connectors. The first cold measurements have been performed and show good performance. The novelty of these products is, for the first one, the association of superconducting materials with very narrow pitch and bonded integrated circuits and, for the second one, the introduction of a superconducting multilayer structure interconnected by vias which is, to our knowledge, a world-first.

  5. A visual tracking method based on deep learning without online model updating

    NASA Astrophysics Data System (ADS)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot MultiBox Detector) is used as the object extractor in the tracking model. Simultaneously, a color histogram feature and a HOG (Histogram of Oriented Gradients) feature are combined to select the tracked object. During tracking, a multi-scale object search map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging factors such as deformation, scale variation, rotation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
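
The combination of a color histogram with a HOG feature for selecting the tracked object can be illustrated with a small numpy-only sketch; the coarse descriptors and Bhattacharyya-based blending below are simplified stand-ins for the paper's features, not its implementation:

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Normalized joint histogram over the three color channels."""
    h, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    h = h.ravel()
    return h / (h.sum() + 1e-12)

def hog_like(patch, bins=9):
    """Coarse HOG-style descriptor: orientation histogram of image
    gradients over the whole patch, weighted by gradient magnitude."""
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return h / (h.sum() + 1e-12)

def bhattacharyya(a, b):
    """Overlap between two normalized histograms (1 = identical)."""
    return float(np.sum(np.sqrt(a * b)))

def score_candidate(template, candidate, w=0.5):
    """Blend color and gradient cues into one similarity score."""
    s_color = bhattacharyya(color_histogram(template),
                            color_histogram(candidate))
    s_grad = bhattacharyya(hog_like(template), hog_like(candidate))
    return w * s_color + (1 - w) * s_grad

# Toy data: a red, horizontally striped target; a noisy copy of it;
# and a blue, vertically striped distractor.
rng = np.random.default_rng(1)
template = np.zeros((32, 32, 3))
template[..., 0] = 200.0
template[::4, :, :] += 40.0
same = np.clip(template + rng.normal(0.0, 5.0, template.shape), 0, 255)
other = np.zeros((32, 32, 3))
other[..., 2] = 200.0
other[:, ::4, :] += 40.0
```

Scoring each detector proposal this way and keeping the best-scoring one is the general idea behind combining appearance cues with a class-level detector such as SSD.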

  6. Self-motivated visual scanning predicts flexible navigation in a virtual environment.

    PubMed

    Ploran, Elisabeth J; Bevitt, Jacob; Oshiro, Jaris; Parasuraman, Raja; Thompson, James C

    2014-01-01

    The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined if visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position as long as the landmarks within the environment remain consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.

  7. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145

  8. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.
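
The adaptive visual-inertial blending described above can be illustrated in one dimension: high-rate accelerometer integration supplies low-latency pose predictions, and each low-rate visual pose is folded in with a gain that adapts to the current motion magnitude. A minimal sketch (the gain schedule, rates, and 1-D state are illustrative assumptions, not the paper's filter):

```python
import numpy as np

def fuse(accel, visual_pose, dt_imu=0.005, imu_per_visual=6):
    """1-D sketch of adaptive visual-inertial pose fusion.

    High-rate accelerometer samples are integrated for low-latency pose
    prediction; each low-rate visual pose is blended in with a gain that
    adapts to motion magnitude (trust the smooth visual pose more when
    motion is slow, the responsive inertial prediction more when fast,
    trading jitter against latency).
    """
    pos, vel = 0.0, 0.0
    fused = []
    for k, a in enumerate(accel):
        vel += a * dt_imu                # inertial propagation
        pos += vel * dt_imu
        if (k + 1) % imu_per_visual == 0:
            z = visual_pose[k // imu_per_visual]
            alpha = float(np.clip(0.8 - 2.0 * abs(vel), 0.2, 0.8))
            pos = alpha * z + (1.0 - alpha) * pos   # complementary blend
        fused.append(pos)
    return np.array(fused)

# Stationary target at pose 1.0: zero acceleration, visual poses at 1.0.
fused = fuse(np.zeros(600), np.ones(100))
```

A production system would track full 6-DoF pose and typically use a Kalman-style filter; the adaptive-gain idea carries over directly.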

  9. Learning from Lessons: Studying the Construction of Teacher Knowledge Catalysed by Purposefully-Designed Experimental Mathematics Lessons

    ERIC Educational Resources Information Center

    Clarke, Doug; Clarke, David; Roche, Anne; Chan, Man Ching Esther

    2015-01-01

    A central premise of this project is that teachers learn from the act of teaching a lesson and that this learning is evident in the planning and teaching of a subsequent lesson. In this project, the knowledge construction of mathematics teachers was examined utilising multi-camera research techniques during lesson planning, classroom interactions…

  10. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits it in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  11. Oblique Aerial Photography Tool for Building Inspection and Damage Assessment

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.

    2014-11-01

    Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology across a growing geospatial market, complementing the traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from four cardinal directions by using monoplotting functionalities. The developed application allows the user to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating approximate building heights and ground distances and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, quality of the available parameters (DEM, calibration and orientation values), user expertise and measuring capability.

  12. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    PubMed

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For many elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If the rescue of a fallen elder is delayed, more serious injury may result. Traditional security or video surveillance systems require caregivers to monitor a centralized screen continuously, or require the elder to wear sensors to detect falling incidents, which wastes considerable human effort or causes inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored. It then uses a falling-pattern recognition algorithm to determine whether a falling incident has occurred. If so, the system sends short messages to designated recipients. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times.
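
The falling-pattern recognition step is not detailed in the abstract; a common baseline cue in camera-based fall detection is the change of a tracked person's bounding-box aspect ratio from tall (standing) to wide (lying). A minimal sketch of that idea, with illustrative thresholds rather than the paper's algorithm:

```python
import numpy as np

def detect_fall(boxes, ratio_thresh=1.2, drop_frames=10):
    """Flag a fall when a tracked person's bounding box switches from
    tall (standing) to wide (lying) within a short frame window.

    boxes: sequence of (width, height) bounding-box sizes per frame.
    Returns the frame index where the fall is detected, or -1.
    """
    ratios = np.array([w / h for w, h in boxes])
    for i in range(len(ratios) - drop_frames):
        # Tall now, wide a few frames later -> sudden posture change.
        if ratios[i] < 1.0 / ratio_thresh and ratios[i + drop_frames] > ratio_thresh:
            return i + drop_frames
    return -1

# Standing (40x120) for 20 frames, then lying (120x40).
standing = [(40, 120)] * 20
lying = [(120, 40)] * 20
print(detect_fall(standing + lying))   # fall flagged at frame 20
```

A deployed system would combine such geometric cues with motion-speed checks per camera to keep the false-alarm rate low.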

  13. Apparatus for obstacle traversion

    DOEpatents

    Borenstein, Johann

    2004-08-10

    An apparatus for traversing obstacles having an elongated, round, flexible body that includes a plurality of drive track assemblies. The plurality of drive track assemblies cooperate to provide forward propulsion wherever a propulsion member is in contact with any feature of the environment, regardless of how many or which ones of the plurality of drive track assemblies make contact with such environmental feature.

  14. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  15. Tracking with the mind's eye

    NASA Technical Reports Server (NTRS)

    Krauzlis, R. J.; Stone, L. S.

    1999-01-01

    The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.

  16. Force Sensitive Handles and Capacitive Touch Sensor for Driving a Flexible Haptic-Based Immersive System

    PubMed Central

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-01-01

    In this article, we present an approach that uses both two force sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680

  17. Force sensitive handles and capacitive touch sensor for driving a flexible haptic-based immersive system.

    PubMed

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-10-09

    In this article, we present an approach that uses both two force sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape.

  18. Alcohol and disorientation-related responses. IV, Effects of different alcohol dosages and display illumination on tracking performance during vestibular stimulation.

    DOT National Transportation Integrated Search

    1971-07-01

    A previous CAMI laboratory investigation showed that alcohol impairs the ability of men to suppress vestibular nystagmus while visually fixating on a cockpit instrument, thus degrading visual tracking performance (eye-hand coordination) during angula...

  19. Uncovering Everyday Rhythms and Patterns: Food tracking and new forms of visibility and temporality in health care.

    PubMed

    Ruckenstein, Minna

    2015-01-01

    This chapter demonstrates how ethnographically-oriented research on emergent technologies, in this case self-tracking technologies, adds to Techno-Anthropology's aims of understanding techno-engagements and solving problems that deal with human-technology relations within and beyond health informatics. Everyday techno-relations have been a long-standing research interest in anthropology, underlining the necessity of empirical engagement with the ways in which people and technologies co-construct their daily conditions. By focusing on the uses of a food tracking application, MealLogger, designed for photographing meals and visualizing eating rhythms to share with health care professionals, the chapter details how personal data streams support and challenge health care practices. The interviewed professionals, from doctors to nutritionists, have used food tracking for treating patients with eating disorders, weight problems, and mental health issues. In general terms, self-tracking advances the practices of visually and temporally documenting, retrieving, communicating, and understanding physical and mental processes and, by doing so, it offers a new kind of visual mediation. The professionals point out how a visual food journal opens a window onto everyday life, bypassing customary ways of seeing and treating patients, thereby highlighting how self-tracking practices can aid in escaping the clinical gaze by promoting a new kind of communication through visualization and narration. Health care professionals are also, however, acutely aware of the barriers to adopting self-tracking practices as part of existing patient care. The health care system is neither used to, nor comfortable with, personal data that originates outside the system; it is not seen as evidence and its institutional position remains insecure.

  20. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking require a tedious calibration process in which subjects must fixate on one or several specific points in space. Such cooperation is hard to obtain, however, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflections (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust and sufficient for the measurement of visual acuity in human infants.
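
    The classification step can be sketched as follows. This is a minimal stand-in for the paper's GMM classifier, not its actual model: each gaze behavior is represented by a single 1D Gaussian over a PCCR-derived feature, and the means, variances, and weights below are hypothetical "offline trained" values.

```python
import math

# Minimal stand-in for GMM-based gaze-behavior classification: each class is
# modeled by one 1D Gaussian over a PCCR-derived feature. All parameters are
# hypothetical placeholders for values that would be trained offline.

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

CLASSES = {
    "fixating":     {"weight": 0.6, "mean": 0.0, "var": 0.05},
    "not_fixating": {"weight": 0.4, "mean": 1.0, "var": 0.30},
}

def classify(feature):
    # Pick the class with the highest weighted likelihood.
    return max(CLASSES, key=lambda c: CLASSES[c]["weight"]
               * gaussian_pdf(feature, CLASSES[c]["mean"], CLASSES[c]["var"]))

print(classify(0.1))  # near the "fixating" mean -> fixating
print(classify(0.9))  # near the "not_fixating" mean -> not_fixating
```

    A full GMM would use several components per class and a multidimensional feature, but the decision rule (maximum weighted likelihood) is the same.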

  1. Dynamix: dynamic visualization by automatic selection of informative tracks from hundreds of genomic datasets.

    PubMed

    Monfort, Matthias; Furlong, Eileen E M; Girardot, Charles

    2017-07-15

    Visualization of genomic data is fundamental for gaining insights into genome function. Yet, co-visualization of a large number of datasets remains a challenge in all popular genome browsers and the development of new visualization methods is needed to improve the usability and user experience of genome browsers. We present Dynamix, a JBrowse plugin that enables the parallel inspection of hundreds of genomic datasets. Dynamix takes advantage of a priori knowledge to automatically display data tracks with signal within a genomic region of interest. As the user navigates through the genome, Dynamix automatically updates data tracks and limits all manual operations otherwise needed to adjust the data visible on screen. Dynamix also introduces a new carousel view that optimizes screen utilization by enabling users to independently scroll through groups of tracks. Dynamix is hosted at http://furlonglab.embl.de/Dynamix. Contact: charles.girardot@embl.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  2. LEA Detection and Tracking Method for Color-Independent Visual-MIMO

    PubMed Central

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-01-01

    Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563

  3. LEA Detection and Tracking Method for Color-Independent Visual-MIMO.

    PubMed

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-07-02

    Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
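
    The Kalman prediction step used for LEA tracking can be illustrated with a minimal 1D constant-velocity filter. The paper tracks the LEA's 2D position in the image; running one such filter per image axis is an assumed simplification here, not a detail taken from the paper.

```python
# A minimal 1D constant-velocity Kalman filter, sketching the tracking step
# described above. State is [position, velocity]; measurements are noisy
# positions. Running one filter per image axis is an assumed simplification.

def kalman_track(zs, dt=1.0, q=0.01, r=1.0):
    """Filter 1D position measurements zs; returns filtered positions."""
    x, v = zs[0], 0.0                        # state: position and velocity
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0  # state covariance P
    out = []
    for z in zs:
        # predict with F = [[1, dt], [0, 1]]: x <- x + v*dt, P <- F P F^T + Q
        x += v * dt
        p00, p01, p10, p11 = (p00 + dt * (p01 + p10) + dt * dt * p11 + q,
                              p01 + dt * p11,
                              p10 + dt * p11,
                              p11 + q)
        # update with measurement z (H = [1, 0], measurement variance r)
        y = z - x                            # innovation
        s = p00 + r                          # innovation variance
        k0, k1 = p00 / s, p10 / s            # Kalman gain
        x, v = x + k0 * y, v + k1 * y
        p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                              p10 - k1 * p00, p11 - k1 * p01)
        out.append(x)
    return out

# A target moving at constant velocity: the filter locks on within a few frames.
track = kalman_track([2.0 * i for i in range(20)])
print(round(track[-1], 1))  # close to the true final position, 38.0
```

    The predicted position (before the update) is what a tracker would use to place the search window for the LEA in the next frame.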

  4. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    PubMed

    King, Adam C; Newell, Karl M

    2015-10-01

    The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed-gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with error lowest in the intermediate-scaling condition followed by the high-scaling and fixed-gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate-scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.

  5. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance between cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparison of events between the cameras and the LLS. Each RAMMER sensor consists of a computer, a Phantom version 9.1 high-speed camera and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
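
    The geometric core of such a triangulation can be sketched as follows. This is an assumed formulation, not the paper's exact procedure: each camera contributes a ground-plane bearing (azimuth) toward the flash, and the position estimate is the intersection of the two sight rays.

```python
import math

# Sketch of two-camera visual triangulation (assumed formulation): intersect
# two ground-plane sight rays, one per camera, to locate the event.

def triangulate(p1, az1, p2, az2):
    """p1, p2: camera (x, y) positions; az1, az2: bearings in radians,
    measured from the +x axis. Returns the ray intersection (x, y)."""
    d1 = (math.cos(az1), math.sin(az1))
    d2 = (math.cos(az2), math.sin(az2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2D cross product; 0 => parallel
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; cannot triangulate")
    # Solve p1 + t*d1 = p2 + s*d2 for t (Cramer's rule on the 2x2 system).
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Cameras 13 km apart, both sighting a flash at (5, 9) km:
x, y = triangulate((0.0, 0.0), math.atan2(9, 5), (13.0, 0.0), math.atan2(9, -8))
print(round(x, 6), round(y, 6))  # recovers (5.0, 9.0)
```

    With more than two cameras, or with elevation angles included, the same idea generalizes to a least-squares intersection of the sight rays.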

  6. Introducing GV : The Spacecraft Geometry Visualizer

    NASA Astrophysics Data System (ADS)

    Throop, Henry B.; Stern, S. A.; Parker, J. W.; Gladstone, G. R.; Weaver, H. A.

    2009-12-01

    GV (Geometry Visualizer) is a web-based program for planning spacecraft observations. GV is the primary planning tool used by the New Horizons science team to plan the encounter with Pluto. GV creates accurate 3D images and movies showing the position of planets, satellites, and stars as seen from an observer on a spacecraft or other body. NAIF SPICE routines are used throughout for accurate calculations of all geometry. GV includes 3D geometry rendering of all planetary bodies, lon/lat grids, ground tracks, albedo maps, stellar magnitudes, types and positions from HD and Tycho-2 catalogs, and spacecraft FOVs. It generates still images, animations, and geometric data tables. GV is accessed through an easy-to-use and flexible web interface. The web-based interface allows for uniform use from any computer and assures that all users are accessing up-to-date versions of the code and kernel libraries. Compared with existing planning tools, GV is often simpler, faster, lower-cost, and more flexible. GV was developed at SwRI to support the New Horizons mission to Pluto. It has been subsequently expanded to support multiple other missions in flight or under development, including Cassini, Messenger, Rosetta, LRO, and Juno. The system can be used to plan Earth-based observations such as occultations to high precision, and was used by the public to help plan 'Kodak Moment' observations of the Pluto system from New Horizons. Potential users of GV may contact the author for more information. Development of GV has been funded by the New Horizons, Rosetta, and LRO missions.

  7. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

    Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
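
    The "remaining flexibility" computation reduces to conditioning a Gaussian shape model on the fixed part. The scalar sketch below illustrates only that underlying step, not the authors' full PPCA machinery: given two jointly Gaussian shape coordinates, fixing the first yields a narrower (conditional) distribution for the second.

```python
# Gaussian conditioning, the mathematical core of computing the remaining
# flexibility of a shape model once part of the shape is fixed. Scalar case
# for illustration; the full method conditions a high-dimensional PPCA model.

def condition_gaussian(mu1, mu2, s11, s12, s22, a):
    """Condition a 2D Gaussian (means mu1, mu2; covariance [[s11, s12],
    [s12, s22]]) on the first coordinate being fixed at a.
    Returns the conditional mean and variance of the second coordinate."""
    mean = mu2 + (s12 / s11) * (a - mu1)
    var = s22 - s12 * s12 / s11   # always <= s22: fixing a part can only
    return mean, var              # reduce the remaining flexibility

# Strongly correlated coordinates: fixing one pins down most of the other.
mean, var = condition_gaussian(0.0, 0.0, 1.0, 0.9, 1.0, a=2.0)
print(mean, round(var, 2))  # conditional mean 1.8, variance shrinks to 0.19
```

    In the matrix form used for shape models, the conditional covariance plays the role of the visualized "remaining flexibility", and the conditional mean gives the reconstruction of the free part from the fixed part.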

  8. Delayed visual feedback affects both manual tracking and grip force control when transporting a handheld object.

    PubMed

    Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric

    2010-08-01

    When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dextrous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was not due to task complexity nor to manual tracking. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.
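
    The grip-load coupling described above can be caricatured numerically. This is an illustrative sketch, not the authors' mass-spring model: for a handheld object oscillated vertically, load force is m*(g + a(t)), anticipatory grip is kept proportional to the predicted load plus a safety margin, and a feedback delay shifts the grip profile in time, mirroring the reported ~50 ms timing change.

```python
import math

# Illustrative sketch (not the authors' model): vertical oscillation of a
# handheld object. Load force is m*(g + a(t)); grip is planned from predicted
# load plus a safety margin, but acts on information that is `delay` s old.

def load_force(t, m=0.3, amp=0.1, f=1.0, g=9.81):
    accel = -amp * (2 * math.pi * f) ** 2 * math.sin(2 * math.pi * f * t)
    return m * (g + accel)

def grip_force(t, delay=0.0, gain=1.5, margin=1.0):
    return gain * load_force(t - delay) + margin

ts = [i / 100 for i in range(100)]           # one 1-Hz movement cycle
peak_load = max(ts, key=load_force)          # time of maximum load
peak_grip = max(ts, key=lambda t: grip_force(t, delay=0.05))
print(round(peak_grip - peak_load, 2))       # grip peak lags by ~ the delay
```

    The point of the sketch is only the timing relation: any lag injected between sensed object motion and the grip controller shows up as a shift of the grip-force peak relative to the load-force peak.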

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Tao; Smallwood, Chuck R.; Bredeweg, Erin L.

    Modern live-cell imaging approaches permit real-time visualization of biological processes, yet limitations exist for unicellular organism isolation, culturing and long-term imaging that preclude fully understanding how cells sense and respond to environmental perturbations and the link between single-cell variability and whole-population dynamics. Here we present a microfluidic platform that provides fine control over the local environment with the capacity to replace media components at any experimental time point, and provides both perfused and compartmentalized cultivation conditions depending on the valve configuration. The functionality and flexibility of the platform were validated using both bacteria and yeast having different sizes, motility and growth media. The demonstrated ability to track the growth and dynamics of both motile and non-motile prokaryotic and eukaryotic organisms emphasizes the versatility of the devices, which with further scale-up should enable studies in bioenergy and environmental research.

  10. Autonomous Rock Tracking and Acquisition from a Mars Rover

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.; Nesnas, Issa A.; Das, Hari

    1999-01-01

    Future Mars exploration missions will perform two types of experiments: science instrument placement for close-up measurement, and sample acquisition for return to Earth. In this paper we describe algorithms we developed for these tasks, and demonstrate them in field experiments using a self-contained Mars Rover prototype, the Rocky 7 rover. Our algorithms perform visual servoing on an elevation map instead of image features, because the latter are subject to abrupt scale changes during the approach. This allows us to compensate for the poor odometry that results from motion on loose terrain. We demonstrate the successful grasp of a 5 cm long rock over 1 m away using 103-degree field-of-view stereo cameras, and placement of a flexible mast on a rock outcropping over 5 m away using 43-degree FOV stereo cameras.

  11. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  12. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma.

    PubMed

    Kasneci, Enkelejda; Black, Alex A; Wood, Joanne M

    2017-01-01

    To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.

  13. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma

    PubMed Central

    Black, Alex A.

    2017-01-01

    To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior. PMID:28293433

  14. Template construction grammar: from visual scene description to language comprehension and agrammatism.

    PubMed

    Barrès, Victor; Lee, Jinyong

    2014-01-01

    How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We present first its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performances of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performances of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.

  15. The semantic category-based grouping in the Multiple Identity Tracking task.

    PubMed

    Wei, Liuqing; Zhang, Xuemin; Li, Zhen; Liu, Jingyao

    2018-01-01

    In the Multiple Identity Tracking (MIT) task, categorical distinctions between targets and distractors have been found to facilitate tracking (Wei, Zhang, Lyu, & Li in Frontiers in Psychology, 7, 589, 2016). The purpose of this study was to further investigate the reasons for the facilitation effect, through six experiments. The results of Experiments 1-3 excluded the potential explanations of visual distinctiveness, attentional distribution strategy, and a working memory mechanism, respectively. When objects' visual information was preserved and categorical information was removed, the facilitation effect disappeared, suggesting that the visual distinctiveness between targets and distractors was not the main reason for the facilitation effect. Moreover, the facilitation effect was not the result of strategically shifting the attentional distribution, because the targets received more attention than the distractors in all conditions. Additionally, the facilitation effect did not come about because the identities of targets were encoded and stored in visual working memory to assist in the recovery from tracking errors; when working memory was disturbed by the object identities changing during tracking, the facilitation effect still existed. Experiments 4 and 5 showed that observers grouped targets together and segregated them from distractors on the basis of their categorical information. By doing this, observers could largely avoid distractor interference with tracking and improve tracking performance. Finally, Experiment 6 indicated that category-based grouping is not an automatic, but a goal-directed and effortful, strategy. In summary, the present findings show that a semantic category-based target-grouping mechanism exists in the MIT task, which is likely to be the major reason for the tracking facilitation effect.

  16. Discrete Resource Allocation in Visual Working Memory

    ERIC Educational Resources Information Center

    Barton, Brian; Ester, Edward F.; Awh, Edward

    2009-01-01

    Are resources in visual working memory allocated in a continuous or a discrete fashion? On one hand, flexible resource models suggest that capacity is determined by a central resource pool that can be flexibly divided such that items of greater complexity receive a larger share of resources. On the other hand, if capacity in working memory is…

  17. Flexibility and running economy in female collegiate track athletes.

    PubMed

    Beaudoin, C M; Whatley Blum, J

    2005-09-01

    Limited information exists regarding the association between flexibility and running economy in female athletes. Using a correlational design, this study examined relationships between lower limb and trunk flexibility and running economy in 17 female collegiate track athletes (20.12+/-1.80 y), who completed 4 testing sessions over a 2-week period. The 1st session assessed maximal oxygen uptake (VO2max = 55.39+/-6.96 ml.kg-1.min-1). The 2nd session assessed trunk and lower limb flexibility: two sets of 6 trunk and lower limb flexibility measures were performed after a 10-min treadmill warm-up at 2.68 m.s-1. The 3rd session consisted of 3 10-min accommodation runs at 2.68 m.s-1, approximately 60% VO2max, with each accommodation bout separated by a 10-min rest. The 4th session assessed running economy: subjects completed a 5-min warm-up at 2.68 m.s-1 followed by a 10-min economy run at 2.68 m.s-1. Pearson product-moment correlations revealed no significant correlations between running economy and the flexibility measures. These results contrast with studies demonstrating an inverse relationship between trunk and/or lower limb flexibility and running economy in males, and with studies reporting positive relationships between flexibility and running economy.
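
    The statistic used in the study is a Pearson product-moment correlation between a flexibility measure and running economy. A plain-Python version is below; the sit-and-reach and VO2 values are made up for illustration, not taken from the study.

```python
import math

# Pearson product-moment correlation, the statistic used in the study.
# The sample values below are invented for illustration only.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sit_and_reach_cm = [25.0, 30.0, 28.0, 35.0, 22.0]   # flexibility measure
vo2_at_pace =      [44.1, 43.0, 43.5, 42.2, 45.0]   # running economy proxy
print(round(pearson_r(sit_and_reach_cm, vo2_at_pace), 3))  # strong negative r
```

    In the actual study, r values like this would then be tested for significance against a critical value for n = 17; none of the flexibility measures reached it.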

  18. A simple and rapid method for high-resolution visualization of single-ion tracks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omichi, Masaaki; Center for Collaborative Research, Anan National College of Technology, Anan, Tokushima 774-0017; Choi, Wookjin

    2014-11-15

Prompt determination of the spatial points of single-ion tracks plays a key role in high-energy-particle cancer therapy and gene/plant mutation induction. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed using poly(acrylic acid) (PAA)-N,N'-methylenebisacrylamide (MBAAm) blend films. One step of the proposed method is exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at the nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and that the amount of radicals generated along an ion track can be estimated by measuring the height of its scar, even for highly dense ion tracks. The method is suitable for visualizing the penumbra region of a single-ion track with a spatial resolution of 50 nm, which is sufficiently fine to confirm that a single ion has hit a cell nucleus with a size ranging between 5 and 20 μm.

  19. The role of vestibular and support-tactile-proprioceptive inputs in visual-manual tracking

    NASA Astrophysics Data System (ADS)

    Kornilova, Ludmila; Naumov, Ivan; Glukhikh, Dmitriy; Khabarova, Ekaterina; Pavlova, Aleksandra; Ekimovskiy, Georgiy; Sagalovitch, Viktor; Smirnov, Yuriy; Kozlovskaya, Inesa

Sensorimotor disorders in weightlessness are caused by altered functioning of the gravity-dependent systems, above all the vestibular and support systems. The question is what role the loss of support afferentation plays in the development of the observed disorders. To determine the effects of vestibular, support, tactile and proprioceptive afferentation on visual-manual tracking (VMT), we conducted a comparative analysis of data obtained after prolonged spaceflight and in a ground-based model of weightlessness, horizontal "dry" immersion. Altogether we examined 16 Russian cosmonauts before and after prolonged spaceflights (129-215 days) and 30 subjects who stayed in an immersion bath for 5-7 days. The state of the vestibular function (VF) was evaluated using videooculography, and VMT characteristics were recorded using electrooculography and a joystick with biological visual feedback. Evaluation of the VF showed that both after immersion and after prolonged spaceflight there was a significant decrease in the static torsional otolith-cervical-ocular reflex (OCOR) and a simultaneous significant increase in the dynamic vestibular-cervical-ocular reactions (VCOR), with a negative correlation between the parameters of the otolith and canal reactions, as well as significant changes in the accuracy of perception of the subjective visual vertical that correlated with the changes in OCOR. Analysis of the VMT showed significant disorders of visual tracking (VT) from the beginning of immersion up to days 3-4, while in cosmonauts similar but much more pronounced oculomotor disorders and significant deviations from baseline were observed up to day R+9 postflight. Significant changes in manual tracking (MT) were revealed only for gain, occurring on days 1 and 3 in immersion, while after spaceflight such changes were observed up to day R+5 postflight.
We found correlations between VT and MT characteristics and between VF and VT characteristics, but none between VF and MT. Removal of support and minimization of proprioceptive afferentation had a greater impact on the accuracy of VT than on the accuracy of MT. Hand-tracking accuracy was higher than eye-tracking accuracy in all subjects, and the hand's motor coordination was more resistant to changes in support-proprioceptive afferentation than visual tracking was. The changes observed during and after immersion are similar to, but less pronounced than, those observed in cosmonauts after prolonged spaceflight. Keywords: visual-manual tracking, vestibular function, weightlessness, immersion.

  20. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

We propose a new multi-target tracking approach able to reliably track multiple objects even with poor segmentation results caused by noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. To obtain the 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model and visual reliability measures for its attributes. These reliability measures allow the contribution of noisy, erroneous or false data to be properly weighted, so as to better maintain the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. The tracker can manage many-to-many visual target correspondences: it uses the 3D models to merge dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The approach has been validated on publicly accessible video-surveillance benchmarks. It runs in real time, and its results are competitive with other tracking algorithms while requiring minimal (or no) reconfiguration between different videos.
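The reliability-weighting idea described above can be sketched as follows; the function and numbers are illustrative assumptions, not the authors' implementation:

```python
def fuse_estimates(estimates):
    """Reliability-weighted mean of attribute estimates.

    estimates: list of (value, reliability) pairs, reliability in [0, 1].
    Low-reliability (noisy or erroneous) evidence is down-weighted so that
    it perturbs the object's dynamics model less.
    """
    total_w = sum(w for _, w in estimates)
    if total_w == 0:
        return None  # no usable evidence this frame
    return sum(v * w for v, w in estimates) / total_w

# e.g. three frames of 3D-width evidence (metres) for one tracked person;
# the middle frame is a noisy segmentation and carries low reliability
width = fuse_estimates([(0.55, 0.9), (0.80, 0.2), (0.52, 0.8)])
```

The fused width stays close to the two reliable observations despite the 0.80 m outlier.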

  1. 49 CFR 173.37 - Hazardous Materials in Flexible Bulk Containers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... an external visual inspection by the person filling the Flexible Bulk Container to ensure: (1) The... transported in cargo transport units when offered for transportation by vessel. (7) Flexible Bulk Containers... 49 Transportation 2 2013-10-01 2013-10-01 false Hazardous Materials in Flexible Bulk Containers...

  2. 49 CFR 173.37 - Hazardous Materials in Flexible Bulk Containers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... an external visual inspection by the person filling the Flexible Bulk Container to ensure: (1) The... transported in cargo transport units when offered for transportation by vessel. (7) Flexible Bulk Containers... 49 Transportation 2 2014-10-01 2014-10-01 false Hazardous Materials in Flexible Bulk Containers...

  3. TrackMate: An open and extensible platform for single-particle tracking.

    PubMed

    Tinevez, Jean-Yves; Perry, Nick; Schindelin, Johannes; Hoopes, Genevieve M; Reynolds, Gregory D; Laplantine, Emmanuel; Bednarek, Sebastian Y; Shorte, Spencer L; Eliceiri, Kevin W

    2017-02-15

We present TrackMate, an open-source Fiji plugin for the automated, semi-automated, and manual tracking of single particles. It offers a versatile and modular solution that works out of the box for end users through a simple and intuitive user interface. It is also easily scriptable and adaptable, operating equally well on 1D-over-time, 2D-over-time, 3D-over-time, and other single- and multi-channel image variants. TrackMate provides several visualization and analysis tools that aid in assessing the relevance of results, and its utility is further enhanced by its ability to be readily customized to specific tracking problems. TrackMate is an extensible platform in which developers can easily write their own detection, particle-linking, visualization or analysis algorithms. This evolving framework gives researchers the opportunity to quickly develop and optimize new algorithms based on existing TrackMate modules without having to write user interfaces, visualization, analysis and exporting tools de novo. The current capabilities of TrackMate are presented in the context of three different biological problems. First, we perform Caenorhabditis elegans lineage analysis to assess how light-induced damage during imaging impairs early development; our TrackMate-based lineage analysis indicates the lack of a cell-specific light-sensitive mechanism. Second, we investigate the recruitment of NEMO (NF-κB essential modulator) clusters in fibroblasts after stimulation by the cytokine IL-1 and show that photodamage can generate artifacts, in the shape of TrackMate-characterized movements, that confuse motility analysis. Finally, we validate the use of TrackMate for quantitative lifetime analysis of clathrin-mediated endocytosis in plant cells. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
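The particle-linking step mentioned above can be illustrated with a deliberately simplified sketch: greedy nearest-neighbour linking with a gating distance. This is not TrackMate's actual algorithm (its linkers are LAP-based) or its API, just the general idea of frame-to-frame linking:

```python
import math

def link_frames(prev, curr, max_dist=5.0):
    """Greedily link detections in consecutive frames by nearest neighbour.

    prev, curr: lists of (x, y) detections. Returns a list of (i, j) links,
    i indexing prev and j indexing curr. A gating distance rejects
    implausible jumps so distant detections start new tracks instead.
    """
    links, used = [], set()
    for i, p in enumerate(prev):
        best_j, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links.append((i, best_j))
            used.add(best_j)
    return links

links = link_frames([(0.0, 0.0), (10.0, 10.0)], [(1.0, 0.5), (10.5, 9.5)])
```

Running the linker over every consecutive frame pair, then chaining the links, yields the per-particle tracks.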

  4. Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.

    PubMed

    Chen, Jian; Jia, Bingxi; Zhang, Kaixiang

    2017-11-01

In this paper, a trifocal-tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track it using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and the method works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images were required to share enough visual information to estimate the trifocal tensor, a requirement easily violated for perspective cameras with a limited field of view. In this paper, a key-frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (the installation position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory-tracking and pose-regulation tasks. Simulations based on the virtual experimentation platform (V-REP) evaluate the effectiveness of the proposed approach.

  5. Real-time high-level video understanding using data warehouse

    NASA Astrophysics Data System (ADS)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

High-level video content analysis such as video surveillance is often limited by the computational demands of automatic image understanding: reasoning processes like categorization require huge computing resources, and huge amounts of data are needed to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near-real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can optimise data-warehouse concepts for video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data is sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, the data are processed and the in-memory data model is updated. After some processing, possible interpretations of the data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally, we show how this system becomes a high-semantic data container for external data mining.
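The RDF-style metadata storage described above can be sketched as a toy in-memory triple store; the predicate names and query function below are illustrative assumptions, not the article's actual schema or warehouse:

```python
def match(triples, pattern):
    """Return the triples matching a (subject, predicate, object) pattern.

    None acts as a wildcard. Vision metadata stored RDF-style as triples
    lets a decisional layer ask e.g. for every object seen by one camera.
    """
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Hypothetical metadata emitted by tracking/classification modules
store = [
    ("track:17", "rdf:type", "Person"),
    ("track:17", "seenBy", "camera:2"),
    ("track:42", "rdf:type", "Vehicle"),
    ("track:42", "seenBy", "camera:2"),
]
on_cam2 = match(store, (None, "seenBy", "camera:2"))
```

A real datamart would index the triples and expose the queries through RPC rather than scanning a list.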

  6. The mechanics and behavior of cliff swallows during tandem flights.

    PubMed

    Shelton, Ryan M; Jackson, Brandon E; Hedrick, Tyson L

    2014-08-01

    Cliff swallows (Petrochelidon pyrrhonota) are highly maneuverable social birds that often forage and fly in large open spaces. Here we used multi-camera videography to measure the three-dimensional kinematics of their natural flight maneuvers in the field. Specifically, we collected data on tandem flights, defined as two birds maneuvering together. These data permit us to evaluate several hypotheses on the high-speed maneuvering flight performance of birds. We found that high-speed turns are roll-based, but that the magnitude of the centripetal force created in typical maneuvers varied only slightly with flight speed, typically reaching a peak of ~2 body weights. Turning maneuvers typically involved active flapping rather than gliding. In tandem flights the following bird copied the flight path and wingbeat frequency (~12.3 Hz) of the lead bird while maintaining position slightly above the leader. The lead bird turned in a direction away from the lateral position of the following bird 65% of the time on average. Tandem flights vary widely in instantaneous speed (1.0 to 15.6 m s(-1)) and duration (0.72 to 4.71 s), and no single tracking strategy appeared to explain the course taken by the following bird. © 2014. Published by The Company of Biologists Ltd.

  7. Individual variability in behavioral flexibility predicts sign-tracking tendency

    PubMed Central

    Nasser, Helen M.; Chen, Yu-Wei; Fiscella, Kimberly; Calu, Donna J.

    2015-01-01

    Sign-tracking rats show heightened sensitivity to food- and drug-associated cues, which serve as strong incentives for driving reward seeking. We hypothesized that this enhanced incentive drive is accompanied by an inflexibility when incentive value changes. To examine this we tested rats in Pavlovian outcome devaluation or second-order conditioning prior to the assessment of sign-tracking tendency. To assess behavioral flexibility we trained rats to associate a light with a food outcome. After the food was devalued by pairing with illness, we measured conditioned responding (CR) to the light during an outcome devaluation probe test. The level of CR during outcome devaluation probe test correlated with the rats' subsequent tracking tendency, with sign-tracking rats failing to suppress CR to the light after outcome devaluation. To assess Pavlovian incentive learning, we trained rats on first-order (CS+, CS−) and second-order (SOCS+, SOCS−) discriminations. After second-order conditioning, we measured CR to the second-order cues during a probe test. Second-order conditioning was observed across all rats regardless of tracking tendency. The behavioral inflexibility of sign-trackers has potential relevance for understanding individual variation in vulnerability to drug addiction. PMID:26578917

  8. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  9. Learned filters for object detection in multi-object visual tracking

    NASA Astrophysics Data System (ADS)

    Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David

    2016-05-01

We investigate the application of learned convolutional filters to multi-object visual tracking. The filters were learned from image data in both a supervised and an unsupervised manner using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, in which tracking guides the detection process. The object detector provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters on a real-world data set that examines their performance as generic object detectors.

  10. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    NASA Astrophysics Data System (ADS)

    Madani, M.

    2012-07-01

Airborne oblique cameras such as the Fairchild T-3A were first used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) generates lifelike three-dimensional imagery for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local-government tax base and property valuation, and supports more timely decisions in the buying and selling of residential and commercial property. Oblique imagery is also used for infrastructure monitoring, helping ensure the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired a MIDAS from TrackAir in 2011. The system consists of four cameras tilted at 45 degrees and one vertical camera connected to a dedicated data-acquisition computer. The five digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses; the CCD is 5,616 by 3,744 pixels (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique focal lengths of 28 mm/50 mm and 50 mm/50 mm) were flown over downtown Colorado Springs, Colorado. Boresight flights were flown at 600 m and 1,200 m for the 28 mm nadir camera, and at 750 m and 1,500 m for the 50 mm nadir camera. The cameras were calibrated using a 3D cage and multiple convergent images with the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their flight configurations; the data-processing workflow, system calibration and quality-control workflows are highlighted; and the achievable accuracy is presented in some detail. The study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) in planimetry and about 2 to 2.5 GSD in height can be achieved.
Remaining systematic errors were modeled by analyzing residuals using a correction grid. The results of the final bundle adjustments are sufficient for Sanborn to produce DEMs/DTMs and orthophotos from the nadir imagery and to create 3D models using the georeferenced oblique imagery.
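The reported accuracies are expressed in units of ground sample distance; with the standard pinhole relation GSD = flying height × pixel pitch / focal length, the configurations in the abstract can be checked directly:

```python
def gsd(height_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance of a nadir frame camera (pinhole model)."""
    return height_m * pixel_pitch_m / focal_length_m

pitch = 6.4e-6  # 6.4-micron CCD pixels (Canon EOS 1DS Mark3)
gsd_28mm_600m = gsd(600, pitch, 0.028)  # 28 mm nadir camera at 600 m
gsd_50mm_750m = gsd(750, pitch, 0.050)  # 50 mm nadir camera at 750 m
```

This gives roughly 0.14 m GSD for the 28 mm camera at 600 m and 0.10 m for the 50 mm camera at 750 m, so the quoted 1 to 1.5 GSD planimetric accuracy corresponds to the decimetre level.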

  11. WEB-IS2: Next Generation Web Services Using Amira Visualization Package

    NASA Astrophysics Data System (ADS)

    Yang, X.; Wang, Y.; Bollig, E. F.; Kadlec, B. J.; Garbow, Z. A.; Yuen, D. A.; Erlebacher, G.

    2003-12-01

Amira (www.amiravis.com) is a powerful 3-D visualization package that has recently been employed by the science and engineering communities to gain insight into their data. We present a new web-based interface to Amira, packaged in a Java applet. We have developed a module called WEB-IS/Amira (WEB-IS2), which provides web-based access to Amira. This tool allows earth scientists to manipulate Amira controls remotely and to analyze, render and view large datasets over the internet, without regard to time or location; this could have important ramifications for GRID computing. The design of our implementation will soon allow multiple users to collaborate visually by manipulating a single dataset through a variety of client devices, which will require only a browser capable of displaying Java applets. As the deluge of data continues, innovative solutions that maximize ease of use without sacrificing efficiency or flexibility will continue to gain in importance, particularly in the Earth sciences. Major initiatives such as Earthscope (http://www.earthscope.org), which will generate at least a terabyte of data daily, stand to profit enormously from a system such as WEB-IS/Amira (WEB-IS2). We discuss our use of SOAP (Livingston, D., Advanced SOAP for Web development, Prentice Hall, 2002), a two-way communication protocol, as a means of providing remote commands and efficient point-to-point transfer of binary image data. We also present our initial experiences with Naradabrokering (www.naradabrokering.org) as a means of decoupling clients and servers: information is submitted to the system as a published item and retrieved through a subscription mechanism, via what are known as "topics". These topic headers, their contents, and the list of subscribers are tracked automatically by Naradabrokering.
This approach promises a high degree of fault tolerance, flexibility with respect to client diversity, and language independence for the services (Erlebacher, G., Yuen, D.A., and F. Dubuffet, Current trends and demands in visualization in the geosciences, Electron. Geosciences, 4, 2001).

  12. Visual tracking of da Vinci instruments for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

Intraoperative tracking of laparoscopic instruments is a prerequisite for realizing further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present image-based markerless 3D tracking of different da Vinci instruments in near real time without an explicit model. The method uses different visual cues to segment the instrument tip, calculates a tip point, and uses a multiple-object particle filter for tracking. Accuracy and robustness are evaluated on in vivo data.
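The particle filter at the core of such a method can be illustrated with a generic single-object sketch: a random-walk motion model and a Gaussian measurement likelihood (in the paper the likelihood comes from visual cues, and every parameter below is an illustrative assumption):

```python
import math
import random

def particle_filter_step(particles, measurement,
                         motion_noise=2.0, meas_noise=3.0):
    """One predict-weight-resample cycle of a 2D particle filter.

    particles: list of (x, y) hypotheses for the instrument tip.
    measurement: the observed tip point for this frame.
    """
    # Predict: random-walk motion model
    moved = [(x + random.gauss(0, motion_noise),
              y + random.gauss(0, motion_noise)) for x, y in particles]
    # Weight: Gaussian likelihood of the measurement given each particle
    mx, my = measurement
    w = [math.exp(-((x - mx) ** 2 + (y - my) ** 2) / (2 * meas_noise ** 2))
         for x, y in moved]
    total = sum(w) or 1.0
    # Resample: draw particles in proportion to their weights
    return random.choices(moved, weights=[wi / total for wi in w],
                          k=len(moved))

random.seed(0)
particles = [(random.uniform(0, 100), random.uniform(0, 100))
             for _ in range(500)]
for frame_measurement in [(50, 50), (52, 51), (54, 52)]:
    particles = particle_filter_step(particles, frame_measurement)
estimate = (sum(x for x, _ in particles) / 500,
            sum(y for _, y in particles) / 500)
```

After a few frames the particle cloud concentrates around the measured tip trajectory, and the mean of the cloud serves as the tracked tip position.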

  13. An UAV scheduling and planning method for post-disaster survey

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhou, X. G.; Yin, J.; Xiao, Q. Y.

    2014-11-01

Extreme climate and special geological environments lead to frequent natural disasters, e.g., earthquakes and floods, which often bring serious casualties and enormous economic losses. Post-disaster surveying is therefore very important for disaster relief and assessment. Because Unmanned Aerial Vehicle (UAV) remote sensing offers high efficiency, high precision, high flexibility, and low cost, it has been widely used for emergency surveying in recent years. Since the UAVs used for emergency surveying cannot stand by waiting for a disaster to happen, they are usually scattered across many locations when one occurs. To improve emergency surveying efficiency, the UAVs must be tracked and the emergency surveying tasks assigned to selected UAVs. A UAV tracking and scheduling method for post-disaster survey is therefore presented in this paper. In this method, the Global Positioning System (GPS) and the GSM network are used to track the UAVs, and an emergency-tracking UAV information database is built in advance by registration; the database includes at least the ID and communication number of each UAV. When a catastrophe happens, the real-time locations of all UAVs in the database are first obtained using the emergency tracking method; the traffic cost time from each UAV to the disaster region is then calculated from the UAV's real-time location and the road network using a nearest-services analysis algorithm. The disaster region is subdivided into several emergency surveying regions based on the DEM, area, and population-distribution map, and each surveying region is assigned to the appropriate UAV according to a shortest-cost-time rule. A UAV tracking and scheduling prototype was implemented using SQL Server 2008, the ArcEngine 10.1 SDK, Visual Studio 2010 C#, Android, an SMS modem, and the Google Maps API.
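The shortest-cost-time assignment rule can be sketched as a greedy loop over regions; the matrix values and the exact greedy order are illustrative assumptions, not the paper's procedure:

```python
def assign_regions(travel_time):
    """Assign each emergency surveying region to the free UAV that can
    reach it fastest (greedy shortest-cost-time rule).

    travel_time[region][uav]: traffic cost time in minutes, as computed
    from the real-time UAV positions and the road network.
    Assumes at least as many free UAVs as regions.
    Returns {region_index: uav_index}.
    """
    assignment, busy = {}, set()
    for region, times in enumerate(travel_time):
        best_time, best_uav = min((t, u) for u, t in enumerate(times)
                                  if u not in busy)
        assignment[region] = best_uav
        busy.add(best_uav)
    return assignment

# 3 regions x 3 UAVs travel-time matrix (illustrative numbers)
times = [[30, 12, 45],
         [25, 20, 10],
         [18, 40, 35]]
plan = assign_regions(times)
```

Each region takes the fastest UAV still unassigned, so no UAV is given two regions.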

  14. Non-iterative volumetric particle reconstruction near moving bodies

    NASA Astrophysics Data System (ADS)

    Mendelson, Leah; Techet, Alexandra

    2017-11-01

When multi-camera 3D PIV experiments are performed around a moving body, the body often obscures regions of interest in the flow field in a subset of cameras. We evaluate the performance of the non-iterative particle reconstruction algorithms used for synthetic aperture PIV (SAPIV) in these partially occluded regions and show that, when partial occlusions are present, the quality and availability of 3D tracer-particle information depend on the number of cameras and the reconstruction procedure used. Based on these findings, we introduce an improved non-iterative reconstruction routine for SAPIV around bodies. The reconstruction procedure combines binary masks, already required for reconstructing the body's 3D visual hull, with a minimum line-of-sight algorithm; this accounts for partial occlusions without separate processing for each possible subset of cameras. We combine this reconstruction procedure with three-dimensional imaging on both sides of the free surface to reveal the multi-fin wake interactions generated by a jumping archer fish. Sufficient particle reconstruction in near-body regions is crucial to resolving the wake structures of upstream fins (i.e., the dorsal and anal fins) before and during their interaction with the caudal tail.
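The minimum line-of-sight idea can be sketched per voxel: the reconstructed intensity is the minimum over only those cameras whose ray to the voxel is not blocked by the body's binary mask. This is a simplified scalar version under our own naming, not the authors' code:

```python
def voxel_intensity(samples, occluded):
    """Minimum line-of-sight reconstruction for one voxel.

    samples[c]: reprojected image intensity from camera c.
    occluded[c]: True if the body's binary mask blocks camera c's ray.
    A true tracer particle is bright in every unoccluded view, so the
    minimum stays high; a ghost particle is dark in at least one view,
    so the minimum suppresses it. Occluded cameras are simply left out
    rather than dragging the minimum down to the masked body.
    """
    visible = [s for s, occ in zip(samples, occluded) if not occ]
    if not visible:
        return 0.0  # voxel hidden from every camera
    return min(visible)

# 4-camera example: a real tracer near the body, with camera 2 occluded
i = voxel_intensity([0.90, 0.85, 0.10, 0.80], [False, False, True, False])
```

Without the mask, camera 2's dark (body-blocked) sample would wrongly zero out this near-body particle.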

  15. Tracker Toolkit

    NASA Technical Reports Server (NTRS)

    Lewis, Steven J.; Palacios, David M.

    2013-01-01

This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows it to plug into larger image-chain modeling suites. It extracts unique visual features to aid tracking and later analysis, with sub-functionality for extracting visual features of an object identified within an image frame: Tracker Toolkit uses a feature-extraction algorithm to tag each object with metadata describing its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made about the tracked objects is that they move; there are no constraints on size within the scene, shape, or type of movement. Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand or discovered dynamically within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: initializing the data structures for tracked target objects, including any targets preselected for tracking, and initializing the tripwire region (skipped if no tripwire region is desired). The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region are tracked until lost (by leaving the frame, stopping, or blending into the background).
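The tripwire initialization described above can be sketched as follows; the matching rule, thresholds and data layout are illustrative assumptions, not the Toolkit's actual interface:

```python
def update_tracks(tracks, detections, tripwire, match_dist=10.0):
    """Propagate tracks and initiate new ones inside the tripwire region.

    tracks: {track_id: (x, y)}; detections: list of (x, y);
    tripwire: (xmin, ymin, xmax, ymax). A detection close to an existing
    track updates it; an unmatched detection inside the tripwire starts a
    new track; unmatched detections elsewhere are ignored.
    """
    xmin, ymin, xmax, ymax = tripwire
    next_id = max(tracks, default=-1) + 1
    for dx, dy in detections:
        nearest = min(tracks.items(), default=None,
                      key=lambda kv: (kv[1][0] - dx) ** 2
                                     + (kv[1][1] - dy) ** 2)
        if nearest is not None and ((nearest[1][0] - dx) ** 2
                                    + (nearest[1][1] - dy) ** 2
                                    <= match_dist ** 2):
            tracks[nearest[0]] = (dx, dy)   # propagate an existing track
        elif xmin <= dx <= xmax and ymin <= dy <= ymax:
            tracks[next_id] = (dx, dy)      # new object found in tripwire
            next_id += 1
    return tracks

# One track at (5, 5); a tripwire covering x,y in [40, 60]
tracks = update_tracks({0: (5.0, 5.0)},
                       [(6.0, 5.5), (50.0, 50.0), (200.0, 200.0)],
                       tripwire=(40, 40, 60, 60))
```

The first detection updates track 0, the second starts a new track inside the tripwire, and the third is ignored because it matches nothing and lies outside the region.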

  16. Animation of multi-flexible body systems and its use in control system design

    NASA Technical Reports Server (NTRS)

    Juengst, Carl; Stahlberg, Ron

    1993-01-01

    Animation can greatly assist the structural dynamicist and control system analyst with better understanding of how multi-flexible body systems behave. For multi-flexible body systems, the structural characteristics (mode frequencies, mode shapes, and damping) change, sometimes dramatically with large angles of rotation between bodies. With computer animation, the analyst can visualize these changes and how the system responds to active control forces and torques. A characterization of the type of system we wish to animate is presented. The lack of clear understanding of the above effects was a key element leading to the development of a multi-flexible body animation software package. The resulting animation software is described in some detail here, followed by its application to the control system analyst. Other applications of this software can be determined on an individual need basis. A number of software products are currently available that make the high-speed rendering of rigid body mechanical system simulation possible. However, such options are not available for use in rendering flexible body mechanical system simulations. The desire for a high-speed flexible body visualization tool led to the development of the Flexible Or Rigid Mechanical System (FORMS) software. This software was developed at the Center for Simulation and Design Optimization of Mechanical Systems at the University of Iowa. FORMS provides interactive high-speed rendering of flexible and/or rigid body mechanical system simulations, and combines geometry and motion information to produce animated output. FORMS is designed to be both portable and flexible, and supports a number of different user interfaces and graphical display devices. Additional features have been added to FORMS that allow special visualization results related to the nature of the flexible body geometric representations.

  17. Flexibility and Coordination among Acts of Visualization and Analysis in a Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Nilsson, Per; Juter, Kristina

    2011-01-01

    This study aims at exploring processes of flexibility and coordination among acts of visualization and analysis in students' attempt to reach a general formula for a three-dimensional pattern generalizing task. The investigation draws on a case-study analysis of two 15-year-old girls working together on a task in which they are asked to calculate…

  18. Maternal play behaviors, child negativity, and preterm or low birthweight toddlers' visual-spatial outcomes: testing a differential susceptibility hypothesis.

    PubMed

    Dilworth-Bart, Janean E; Miller, Kyle E; Hane, Amanda

    2012-04-01

    We examined the joint roles of child negative emotionality and parenting in the visual-spatial development of toddlers born preterm or with low birthweights (PTLBW). Neonatal risk data were collected at hospital discharge, observer- and parent-rated child negative emotionality was assessed at 9-months postterm, and mother-initiated task changes and flexibility during play were observed during a dyadic play interaction at 16-months postterm. Abbreviated IQ scores, and verbal/nonverbal and visual-spatial processing data were collected at 24-months postterm. Hierarchical regression analyses did not support our hypothesis that the visual-spatial processing of PTLBW toddlers with higher negative emotionality would be differentially susceptible to parenting behaviors during play. Instead, observer-rated distress and a negativity composite score were associated with less optimal visual-spatial processing when mothers were more flexible during the 16-month play interaction. Mother-initiated task changes did not interact with any of the negative emotionality variables to predict any of the 24-month neurocognitive outcomes, nor did maternal flexibility interact with mother-rated difficult temperament to predict the visual-spatial processing outcomes. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Comparison of Predictable Smooth Ocular and Combined Eye-Head Tracking Behaviour in Patients with Lesions Affecting the Brainstem and Cerebellum

    NASA Technical Reports Server (NTRS)

    Grant, Michael P.; Leigh, R. John; Seidman, Scott H.; Riley, David E.; Hanna, Joseph P.

    1992-01-01

We compared the ability of eight normal subjects and 15 patients with brainstem or cerebellar disease to follow a moving visual stimulus smoothly with either the eyes alone or with combined eye-head tracking. The visual stimulus was either a laser spot (horizontal and vertical planes) or a large rotating disc (torsional plane), which moved at one sinusoidal frequency for each subject. The visually enhanced Vestibulo-Ocular Reflex (VOR) was also measured in each plane. In the horizontal and vertical planes, we found that if tracking gain (gaze velocity/target velocity) for smooth pursuit was close to 1, the gain of combined eye-head tracking was similar. If the tracking gain during smooth pursuit was less than about 0.7, combined eye-head tracking was usually superior. Most patients, irrespective of diagnosis, showed combined eye-head tracking that was superior to smooth pursuit; only two patients showed the converse. In the torsional plane, in which optokinetic responses were weak, combined eye-head tracking was much superior, and this was the case in both normal subjects and patients. We found that a linear model, in which an internal ocular tracking signal cancelled the VOR, could account for our findings in most normal subjects in the horizontal and vertical planes, but not in the torsional plane. The model failed to account for tracking behaviour in most patients in any plane, and suggested that the brain may use additional mechanisms to reduce the internal gain of the VOR during combined eye-head tracking. Our results confirm that certain patients who show impairment of smooth-pursuit eye movements preserve their ability to smoothly track a moving target with combined eye-head tracking.
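
The tracking gain defined above (gaze velocity divided by target velocity) can be estimated directly from recorded velocity traces. A minimal sketch with hypothetical sampled sinusoidal data (the sampling rate, amplitudes, and 0.5 Hz frequency are illustrative, not the study's protocol):

```python
import math

def tracking_gain(gaze_velocity, target_velocity):
    """Estimate tracking gain as the ratio of RMS gaze velocity to RMS
    target velocity -- a simple amplitude-ratio estimate for sinusoidal
    tracking, assuming zero-mean signals and negligible phase lag."""
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    return rms(gaze_velocity) / rms(target_velocity)

# Hypothetical traces: target oscillating at 0.5 Hz, gaze at 80% of target speed.
ts = [i * 0.01 for i in range(1000)]                              # 10 s at 100 Hz
target = [20.0 * math.cos(2 * math.pi * 0.5 * t) for t in ts]     # deg/s
gaze = [0.8 * v for v in target]
```

A gain near 1 would correspond to near-perfect pursuit; the 0.7 threshold discussed above would fall out of the same ratio.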

  20. Measurement of Flexed Posture for Flexible Mono-Tread Mobile Track

    NASA Astrophysics Data System (ADS)

    Kinugasa, Tetsuya; Akagi, Tetsuya; Ishii, Kuniaki; Haji, Takafumi; Yoshida, Koji; Amano, Hisanori; Hayashi, Ryota; Tokuda, Kenichi; Iribe, Masatsugu; Osuka, Koichi

We have proposed the Flexible Mono-tread mobile Track (FMT) as a mobile mechanism for rough terrain, intended for rescue activity, environmental investigation, planetary exploration, etc. Generally speaking, such robots must be teleoperated under conditions of limited visibility. To operate them skillfully, it is necessary to know not only the conditions around the robots and their position but also their posture at all times. Since the flexed posture of FMT determines its turning radius and direction, it is important to know that posture. FMT has a vertebral structure composed of vertebrae, which act as rigid bodies, and intervertebral disks made of flexible elements such as rubber cylinders and springs. Since the intervertebral disks flex in three dimensions, traditional sensors such as potentiometers, rotary encoders and range finders can hardly be used to measure their deformation. The purpose of this paper, therefore, is to measure the flexed posture of FMT using a novel flexible displacement sensor. We show through experiments that the flexed posture of an FMT with five intervertebral disks can be detected.

  1. INDIRECT INTELLIGENT SLIDING MODE CONTROL OF A SHAPE MEMORY ALLOY ACTUATED FLEXIBLE BEAM USING HYSTERETIC RECURRENT NEURAL NETWORKS.

    PubMed

    Hannen, Jennifer C; Crews, John H; Buckner, Gregory D

    2012-08-01

    This paper introduces an indirect intelligent sliding mode controller (IISMC) for shape memory alloy (SMA) actuators, specifically a flexible beam deflected by a single offset SMA tendon. The controller manipulates applied voltage, which alters SMA tendon temperature to track reference bending angles. A hysteretic recurrent neural network (HRNN) captures the nonlinear, hysteretic relationship between SMA temperature and bending angle. The variable structure control strategy provides robustness to model uncertainties and parameter variations, while effectively compensating for system nonlinearities, achieving superior tracking compared to an optimized PI controller.
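
The variable-structure idea behind such controllers can be illustrated with a generic sliding mode sketch. This is not the paper's IISMC or HRNN; it is a hypothetical double-integrator error system with a bounded matched disturbance, and a boundary-layer saturation in place of a pure sign function to limit chattering (all gains assumed):

```python
import math

def sat(x):
    """Boundary-layer replacement for sign(x), clipped to [-1, 1]."""
    return max(-1.0, min(1.0, x))

# Hypothetical plant: e'' = d(t) + u, with disturbance |d| <= 0.5.
lam, k, phi, dt = 2.0, 2.0, 0.05, 0.001   # surface slope, gain, layer width, step
e, de = 1.0, 0.0                          # initial tracking error and error rate
for i in range(10000):                    # 10 s of simulated time
    s = de + lam * e                      # sliding surface s = e' + lambda*e
    u = -k * sat(s / phi)                 # variable-structure control law
    d = 0.5 * math.sin(i * dt)            # bounded disturbance (model uncertainty)
    de += (d + u) * dt
    e += de * dt
```

Because the gain k exceeds the disturbance bound, the state reaches the surface and the error decays to a small neighbourhood of zero despite the unmodeled disturbance, which is the robustness property the abstract refers to.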

  2. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.

    PubMed

    Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. 
Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".

  3. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians

    PubMed Central

    Clayton, Kameron K.; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D.; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, “cocktail-party” like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. 
Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the “cocktail party problem”. PMID:27384330

  4. Flexible integration of robotics, ultrasonics and metrology for the inspection of aerospace components

    NASA Astrophysics Data System (ADS)

    Mineo, Carmelo; MacLeod, Charles; Morozov, Maxim; Pierce, S. Gareth; Summan, Rahul; Rodden, Tony; Kahani, Danial; Powell, Jonathan; McCubbin, Paul; McCubbin, Coreen; Munro, Gavin; Paton, Scott; Watson, David

    2017-02-01

Improvements in the performance of modern robotic manipulators have in recent years enabled research aimed at the development of fast automated non-destructive testing (NDT) of complex geometries. Contemporary robots adapt well to new tasks, and several robotic inspection prototype systems and a number of commercial products have been developed worldwide. This paper describes the latest progress in research focused on large composite aerospace components. A multi-robot flexible inspection cell is used to take the fundamental research and feasibility studies to higher technology readiness levels, ready for future industrial exploitation. The robot cell is equipped with high-accuracy, high-payload robots mounted on 7-metre tracks, and an external rotary axis. A robotically delivered photogrammetry technique is first used to assess the position of the components placed within the robot working envelope and their deviation from CAD. Offline programming is used to generate a scan path for phased array ultrasonic testing (PAUT). PAUT is performed using a conformable wheel probe, with high-data-rate acquisition from the PAUT controller. Real-time robot path correction, based on force-torque control (FTC), is deployed to achieve optimum ultrasonic coupling and repeatable data quality. New communication software was developed that enables simultaneous control of the multiple robots performing different tasks and the acquisition of accurate positional data. All aspects of the system are controlled through a purpose-built graphical user interface that enables flexible use of the unique set of hardware resources, as well as data acquisition, visualization and analysis.
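
The force-based path correction can be sketched as a proportional adjustment of the probe's normal offset toward a target contact force. This is a toy illustration, not the authors' FTC implementation: the linear spring-contact model, stiffness, target force, and gain are all assumptions:

```python
# Hypothetical spring-contact model: force rises linearly with penetration.
K_CONTACT = 1000.0   # contact stiffness, N/m (assumed)
F_TARGET = 5.0       # desired ultrasonic coupling force, N (assumed)
GAIN = 0.0005        # proportional correction gain, m/N (assumed)

def contact_force(z):
    """Surface at z = 0; pushing below it (z < 0) generates reaction force."""
    return K_CONTACT * max(0.0, -z)

z = 0.002  # probe starts 2 mm above the surface
for _ in range(100):
    f = contact_force(z)
    z -= GAIN * (F_TARGET - f)   # correct the path along the surface normal
```

The offset settles where the measured force equals the target, which is the mechanism that keeps coupling consistent as the real surface deviates from the programmed scan path.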

  5. Attentional Resources in Visual Tracking through Occlusion: The High-Beams Effect

    ERIC Educational Resources Information Center

    Flombaum, Jonathan I.; Scholl, Brian J.; Pylyshyn, Zenon W.

    2008-01-01

    A considerable amount of research has uncovered heuristics that the visual system employs to keep track of objects through periods of occlusion. Relatively little work, by comparison, has investigated the online resources that support this processing. We explored how attention is distributed when featurally identical objects become occluded during…

  6. Combined Atropine and 2-PAM Cl Effects on Tracking Performance and Visual, Physiological, and Psychological Functions

    DTIC Science & Technology

    1988-12-01

Recoverable fragments from the garbled record: "…tracking task reveals the magnitude … and duration of the…" (Aviation, Space, and Environmental Medicine, December 1988; Antidote Effects, Penetar et al.); "…marihuana on dynamic visual acuity: I. Threshold measurements. Perception & Psychophysics, 1975…"; "…blood pressure following the combination of 2-PAM Cl…"

  7. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    NASA Astrophysics Data System (ADS)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
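
The neighbour rule underlying such distributed tracking laws can be illustrated with a minimal first-order consensus sketch: each follower steers toward the states it can see. This hypothetical example uses a static virtual leader, a directed chain topology in which only follower 1 sees the leader, and simple asymptotic (not finite-time) dynamics, with assumed gains:

```python
# Directed topology (assumed): leader -> follower 1 -> follower 2.
x_leader = 5.0            # static virtual-leader state (hypothetical)
x = [0.0, 2.0]            # follower states
dt, k = 0.01, 1.0         # integration step and coupling gain (assumed)
for _ in range(2000):     # 20 s of simulated time
    u1 = -k * (x[0] - x_leader)   # follower 1 tracks the leader directly
    u2 = -k * (x[1] - x[0])       # follower 2 only sees follower 1
    x[0] += u1 * dt
    x[1] += u2 * dt
```

Even though follower 2 never sees the leader, information propagates through the directed graph and both followers converge to the leader's state; the paper's contribution is achieving this convergence in finite time with unknown camera parameters.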

  8. Human Mobility Monitoring in Very Low Resolution Visual Sensor Network

    PubMed Central

    Bo Bo, Nyan; Deboeverie, Francis; Eldib, Mohamed; Guan, Junzhi; Xie, Xingzhe; Niño, Jorge; Van Haerenborgh, Dirk; Slembrouck, Maarten; Van de Velde, Samuel; Steendam, Heidi; Veelaert, Peter; Kleihorst, Richard; Aghajan, Hamid; Philips, Wilfried

    2014-01-01

This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computation requirements and power consumption. The core of our proposed system is a robust people tracker that uses low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, the mobility statistics of tracks, such as total distance traveled and average speed derived from trajectories, are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics. PMID:25375754
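
The mobility statistics mentioned (total distance traveled and average speed) follow directly from a tracked trajectory. A minimal sketch, assuming hypothetical timestamped 2-D positions in metres:

```python
import math

def mobility_stats(track):
    """track: list of (t, x, y) samples in seconds and metres.
    Returns (total_distance, average_speed)."""
    dist = sum(
        math.hypot(x1 - x0, y1 - y0)
        for (_, x0, y0), (_, x1, y1) in zip(track, track[1:])
    )
    duration = track[-1][0] - track[0][0]
    return dist, dist / duration

# Hypothetical trajectory: 4 m along x, then 3 m along y, over 10 s.
track = [(0.0, 0.0, 0.0), (5.0, 4.0, 0.0), (10.0, 4.0, 3.0)]
```

Comparing such statistics from the visual tracker against the same statistics from Ultra-Wide Band ground truth is the validation strategy described above.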

  9. Automation trust and attention allocation in multitasking workspace.

    PubMed

    Karpinsky, Nicole D; Chancey, Eric T; Palmer, Dakota B; Yamani, Yusuke

    2018-07-01

    Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. Copyright © 2018. Published by Elsevier Ltd.

  10. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.
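
The per-event update idea can be sketched for the simplest case, a pure translation: each incoming event pulls the current estimate toward its nearest transformed model point. This is a toy illustration with hypothetical data and a fixed update rate, not the letter's full isometry/similarity/affine estimator:

```python
import numpy as np

def track_translation(model, events, eta=0.1):
    """Iteratively estimate a 2-D translation, updating once per event."""
    t = np.zeros(2)
    for e in events:
        transformed = model + t                            # model under current estimate
        j = np.argmin(np.sum((transformed - e) ** 2, axis=1))
        t += eta * (e - transformed[j])                    # per-event correction
    return t

# Hypothetical model (corners of a square) observed shifted by (3, -2).
model = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
true_shift = np.array([3.0, -2.0])
events = [model[i % 4] + true_shift for i in range(200)]   # simulated event stream
```

Because every event triggers a tiny correction, the estimate stays continuously up to date rather than being refreshed once per frame, which is what enables kilohertz-equivalent tracking rates.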

  11. Visualization of medical data based on EHR standards.

    PubMed

    Kopanitsa, G; Hildebrand, C; Stausberg, J; Englmeier, K H

    2013-01-01

To organize efficient interaction between a doctor and an EHR, the data has to be presented in a convenient way. Medical data presentation methods and models must be flexible in order to cover the needs of users with different backgrounds and requirements. Most visualization methods are doctor-oriented; however, there are indications that the involvement of patients can optimize healthcare. The research aims at specifying the state of the art of medical data visualization. The paper analyzes a number of projects and defines requirements for a generic ISO 13606-based data visualization method. To do so, it starts with a systematic search for studies on EHR user interfaces. To identify best practices, visualization methods were evaluated according to the following criteria: limits of application, customizability and re-usability. The visualization methods were compared using the specified criteria. The review showed that the analyzed projects can contribute knowledge to the development of a generic visualization method. However, none of them proposed a model that meets all the necessary criteria for a re-usable, standard-based visualization method. The shortcomings were mostly related to the structure of current medical concept specifications. The analysis showed that medical data visualization methods use hardcoded GUIs, which give little flexibility; medical data visualization therefore has to move from hardcoded user interfaces to generic methods. This requires great effort, because current standards are not suitable for organizing the management of visualization data. This contradiction between a generic method and a flexible, user-friendly data layout has to be overcome.

  12. Measurement of electromagnetic tracking error in a navigated breast surgery setup

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor

    2016-03-01

    PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and anesthesia machine increased the error by up to a few tenths of a millimeter and tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
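
Positional and rotational error against an optical ground truth reduce to a Euclidean distance and the angle of the relative rotation. A minimal sketch using rotation matrices with hypothetical poses (not the authors' 3D Slicer modules):

```python
import numpy as np

def tracking_errors(p_em, R_em, p_opt, R_opt):
    """Positional error (same units as input) and rotational error (degrees)
    of an electromagnetic pose against an optical ground-truth pose."""
    pos_err = np.linalg.norm(p_em - p_opt)
    R_rel = R_opt.T @ R_em                                  # relative rotation
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return pos_err, np.degrees(np.arccos(cos_angle))

# Hypothetical check: a 0.9 mm offset and a 10 degree rotation about z.
theta = np.radians(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
```

Sampling these two error values across the workspace, with and without surgical equipment present, is what delineates the safe working area described in the conclusion.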

  13. Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments

    NASA Technical Reports Server (NTRS)

    Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi

    1994-01-01

    Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
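
Recovering pose from the four extracted centre points is, in the planar case, a rigid Procrustes fit between the known mark geometry and the observed feature points. A minimal 2-D SVD-based sketch with hypothetical coordinates (the paper's method works from the perspective camera image, which this simplification omits):

```python
import numpy as np

def fit_rigid_2d(model_pts, observed_pts):
    """Least-squares rotation R and translation t such that
    observed ~= model @ R.T + t (Kabsch/Procrustes fit)."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = oc - R @ mc
    return R, t

# Hypothetical mark: four feature centres on a 2 x 2 square, seen rotated 30
# degrees and translated by (4, 2).
model = np.array([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
observed = model @ R_true.T + np.array([4.0, 2.0])
```

Feeding such a pose estimate back into the arm controller at every image is the visual feedback loop the abstract describes.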

  14. The role of vision in odor-plume tracking by walking and flying insects.

    PubMed

    Willis, Mark A; Avondet, Jennifer L; Zheng, Elizabeth

    2011-12-15

    The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available.

  15. The role of vision in odor-plume tracking by walking and flying insects

    PubMed Central

    Willis, Mark A.; Avondet, Jennifer L.; Zheng, Elizabeth

    2011-01-01

    SUMMARY The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available. PMID:22116754

  16. Exploring mobility & workplace choice in a flexible office through post-occupancy evaluation.

    PubMed

    Göçer, Özgür; Göçer, Kenan; Ergöz Karahan, Ebru; İlhan Oygür, Işıl

    2018-02-01

Developments in information and communication systems, organisational structure and the nature of work have contributed to the restructuring of work environments. In these new types of work environments, employees do not have assigned workplaces. This arrangement helps organisations to minimise rent costs and increase employee interaction and knowledge exchange through mobility. This post-occupancy evaluation (POE) study focuses on a flexible office in a Gold Leadership in Energy and Environmental Design-certified building in Istanbul. An integrated qualitative and quantitative POE technique with occupancy tracking via barcode scanning and instant surveying has been introduced. Using this unique approach, we examined the directives/drivers in workplace choice and mobility from different perspectives. The aggregated data was used to discern work-related consequences such as flexibility, workplace choice, work and indoor environment satisfaction, place attachment and identity. The results show that employees who have a conventional working culture develop a new working style: 'fixed-flexible working'. Practitioner Summary: This paper introduces a new POE approach for flexible offices based on occupancy tracking through barcode scanning to explore workplace choice and mobility. More than half (52.1%) of the participants tended to choose the same desk every day. However, the satisfaction level of the 'mobile' employees was higher than that of the 'fixed flexible' employees.
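
The 'fixed-flexible' pattern can be quantified directly from the scan logs. A minimal sketch assuming hypothetical barcode records of (employee, day, desk); the record layout is an illustration, not the study's data format:

```python
def fixed_flexible_share(scans):
    """Fraction of employees who scanned into the same desk on every day.
    scans: list of (employee, day, desk) records."""
    desks = {}
    for emp, _, desk in scans:
        desks.setdefault(emp, set()).add(desk)
    return sum(1 for d in desks.values() if len(d) == 1) / len(desks)

# Hypothetical log: employee A always picks the same desk, B moves around.
scans = [
    ("A", 1, "D1"), ("A", 2, "D1"),
    ("B", 1, "D2"), ("B", 2, "D7"),
]
```

Applied to the full log, this kind of tally yields figures like the 52.1% of participants who tended to choose the same desk every day.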

  17. Dynamic modeling and hierarchical compound control of a novel 2-DOF flexible parallel manipulator with multiple actuation modes

    NASA Astrophysics Data System (ADS)

    Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying

    2018-03-01

This paper addresses the problem of rigid-flexible coupling dynamic modeling and active control of a novel flexible parallel manipulator (PM) with multiple actuation modes. Firstly, based on flexible multi-body dynamics theory, the rigid-flexible coupling dynamic model (RFDM) of the system is developed by virtue of the augmented Lagrangian multipliers approach. For completeness, the mathematical models of the permanent magnet synchronous motor (PMSM) and piezoelectric transducer (PZT) are further established and integrated with the RFDM of the mechanical system to formulate the electromechanical coupling dynamic model (ECDM). To achieve trajectory tracking and vibration suppression, a hierarchical compound control strategy is presented. Within this control strategy, a proportional-differential (PD) feedback controller is employed to realize trajectory tracking of the end-effector, while a strain and strain rate feedback (SSRF) controller is developed to restrain the vibration of the flexible links using the PZT. Furthermore, the stability of the control algorithm is demonstrated based on Lyapunov stability theory. Finally, two simulation case studies are performed to illustrate the effectiveness of the proposed approach. The results indicate that, under the redundant actuation mode, the hierarchical compound control strategy can guarantee that the flexible PM achieves singularity-free motion and vibration attenuation within the task workspace simultaneously. The systematic methodology proposed in this study can be conveniently extended to the dynamic modeling and efficient controller design of other flexible PMs, especially the emerging ones with multiple actuation modes.
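
The PD layer of the compound strategy can be sketched on the simplest possible plant, a double integrator driven toward a setpoint; the gains are assumed (chosen for critical damping), and the SSRF vibration-suppression layer is omitted:

```python
kp, kd = 25.0, 10.0          # kd = 2*sqrt(kp): critically damped (assumed gains)
x, v = 0.0, 0.0              # position and velocity of the end-effector proxy
x_ref = 1.0                  # reference trajectory reduced to a setpoint
dt = 0.001
for _ in range(10000):       # 10 s of simulated time
    u = kp * (x_ref - x) - kd * v   # PD feedback on the tracking error
    v += u * dt                     # toy plant dynamics: x'' = u
    x += v * dt
```

In the paper's hierarchy this outer loop handles trajectory tracking, while the strain-feedback controller acts on the flexible links' vibration modes that a rigid-body PD loop like this one cannot see.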

  18. Wired Widgets: Agile Visualization for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Gerschefske, K.; Witmer, J.

    2012-09-01

Continued advancement in sensors and analysis techniques has resulted in a wealth of Space Situational Awareness (SSA) data, made available via tools and Service Oriented Architectures (SOA) such as those in the Joint Space Operations Center Mission Systems (JMS) environment. Current visualization software cannot quickly adapt to rapidly changing missions and data, preventing operators and analysts from performing their jobs effectively. The value of this wealth of SSA data is not fully realized, as the operators' existing software is not built with the flexibility to consume new or changing sources of data or to rapidly customize their visualization as the mission evolves. While tools like the JMS user-defined operational picture (UDOP) have begun to fill this gap, this paper presents a further evolution, leveraging Web 2.0 technologies for maximum agility. We demonstrate a flexible Web widget framework with inter-widget data sharing, publish-subscribe eventing, and an API providing the basis for consumption of new data sources and adaptable visualization. Wired Widgets offers cross-portal widgets along with a widget communication framework and development toolkit for rapid new widget development, giving operators the ability to answer relevant questions as the mission evolves. Wired Widgets has been applied in a number of dynamic mission domains including disaster response, combat operations, and noncombatant evacuation scenarios. The variety of applications demonstrates that Wired Widgets provides a flexible, data-driven solution for visualization in changing environments. In this paper, we show how, deployed in the Ozone Widget Framework portal environment, Wired Widgets can provide an agile, web-based visualization to support the SSA mission. Furthermore, we discuss how the tenets of agile visualization can generally be applied to the SSA problem space to provide operators flexibility, potentially informing future acquisition and system development.
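
Publish-subscribe eventing of the kind used for inter-widget data sharing can be sketched with a minimal topic-based bus. The topic name and payload below are hypothetical, and the real system runs on the Ozone Widget Framework's messaging rather than this sketch:

```python
class EventBus:
    """Minimal topic-based publish-subscribe bus for inter-widget data sharing."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers.get(topic, []):
            handler(payload)

# Hypothetical use: a track-list widget notifies a map widget of a selection.
bus = EventBus()
received = []
bus.subscribe("ssa.track.selected", received.append)
bus.publish("ssa.track.selected", {"object_id": 25544})
```

Because publishers and subscribers only share a topic string, new widgets can be wired into an existing operational picture without modifying the widgets already deployed, which is the agility argument made above.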

  19. Hand-held optoacoustic probe for three-dimensional imaging of human morphology and function

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís; Razansky, Daniel

    2014-03-01

    We report on a hand-held imaging probe for real-time optoacoustic visualization of deep tissues in three dimensions. The proposed solution incorporates a two-dimensional array of ultrasonic sensors densely distributed on a spherical surface, while illumination is delivered coaxially through a cylindrical cavity in the array. Visualization of three-dimensional tomographic data at a frame rate of 10 images per second is enabled by parallel recording of 256 time-resolved signals for each individual laser pulse, along with highly efficient GPU-based real-time reconstruction. A liquid coupling medium (water), enclosed in a transparent membrane, guarantees transmission of the optoacoustically generated waves to the ultrasonic detectors. Excitation at multiple wavelengths further allows imaging of spectrally distinct tissue chromophores, such as oxygenated and deoxygenated haemoglobin. The performance is showcased by video-rate tracking of deep tissue vasculature and three-dimensional measurements of blood oxygenation in a healthy human volunteer. The flexibility provided by the hand-held hardware design, combined with the real-time operation, makes the developed platform highly usable for both small animal research and clinical imaging in multiple indications, including cancer, inflammation, skin and cardiovascular diseases, and diagnostics of the lymphatic system and breast
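
    As a rough illustration of the tomographic reconstruction step, one voxel of a delay-and-sum backprojection can be computed as below. The geometry, sampling rate, and speed of sound are placeholder values, and the actual system uses a far more efficient parallel GPU implementation rather than this per-voxel loop.

```python
import numpy as np

# Illustrative delay-and-sum backprojection of a single voxel from
# time-resolved optoacoustic signals (a sketch, not the paper's algorithm).
def backproject_voxel(voxel, sensor_positions, signals, fs, c=1500.0):
    """Sum each sensor's signal at the sample matching its acoustic delay."""
    value = 0.0
    for pos, sig in zip(sensor_positions, signals):
        delay = np.linalg.norm(voxel - pos) / c    # time of flight in seconds
        idx = int(round(delay * fs))               # corresponding sample index
        if idx < len(sig):
            value += sig[idx]
    return value / len(sensor_positions)
```

    Applied over a full voxel grid and all 256 channels, this kind of operation yields one volumetric frame per laser pulse.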

  20. Delineating the Neural Signatures of Tracking Spatial Position and Working Memory during Attentive Tracking

    PubMed Central

    Drew, Trafton; Horowitz, Todd S.; Wolfe, Jeremy M.; Vogel, Edward K.

    2015-01-01

    In the attentive tracking task, observers track multiple objects as they move independently and unpredictably among visually identical distractors. Although a number of models of attentive tracking implicate visual working memory as the mechanism responsible for representing target locations, no study has directly compared the neural mechanisms of the two tasks. In the current set of experiments, we used electrophysiological recordings to delineate similarities and differences between the neural processing involved in working memory and attentive tracking. We found that the contralateral electrophysiological response was similarly sensitive to the number of items attended in both tasks, but that there was also a unique contralateral negativity related to the process of monitoring target position during tracking. This signal was absent during periods of the tracking task when the objects briefly stopped moving. These results provide evidence that, during attentive tracking, the process of tracking target locations elicits an electrophysiological response that is distinct and dissociable from neural measures of the number of items being attended. PMID:21228175

  1. Flexible Visual Processing in Young Adults with Autism: The Effects of Implicit Learning on a Global-Local Task

    ERIC Educational Resources Information Center

    Hayward, Dana A.; Shore, David I.; Ristic, Jelena; Kovshoff, Hanna; Iarocci, Grace; Mottron, Laurent; Burack, Jacob A.

    2012-01-01

    We utilized a hierarchical figures task to determine the default level of perceptual processing and the flexibility of visual processing in a group of high-functioning young adults with autism (n = 12) and a typically developing young adults, matched by chronological age and IQ (n = 12). In one task, participants attended to one level of the…

  2. Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking.

    PubMed

    Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen

    2017-01-01

    An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a second-order tensor, which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space, into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
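
    The core dimensionality-reduction step can be illustrated as a bilinear projection of the patch tensor by the two transformation matrices. The sizes and random matrices below are placeholders; in the paper these matrices are learned from the graph-embedding objective, not drawn at random.

```python
import numpy as np

# Sketch of the core tensor operation: an image patch kept as a second-order
# tensor X is mapped into a low-dimensional embedding space by two
# transformation matrices, Y = U1^T X U2. Random matrices stand in for the
# learned ones purely for illustration.
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32))     # image patch as a second-order tensor
U1 = rng.standard_normal((32, 5))     # row-mode transformation matrix
U2 = rng.standard_normal((32, 5))     # column-mode transformation matrix
Y = U1.T @ X @ U2                     # embedded representation, 5 x 5
```

    Keeping the patch as a matrix rather than flattening it to a vector is what preserves the spatial structure the abstract refers to.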

  3. Fast Track Option: An Accelerated Associate's Degree Program.

    ERIC Educational Resources Information Center

    Price, J. Randall

    1998-01-01

    Alternative instructional delivery options such as self-paced and flexible enrollment courses are designed to increase enrollment, promote retention, and encourage student success without lowering academic standards. The Fast Track Associate's Degree Program, developed by a team of faculty, staff, and administrators at Richland Community College,…

  4. Attentive Tracking Disrupts Feature Binding in Visual Working Memory

    PubMed Central

    Fougnie, Daryl; Marois, René

    2009-01-01

    One of the most influential theories in visual cognition proposes that attention is necessary to bind different visual features into coherent object percepts (Treisman & Gelade, 1980). While considerable evidence supports a role for attention in perceptual feature binding, whether attention plays a similar function in visual working memory (VWM) remains controversial. To test the attentional requirements of VWM feature binding, here we gave participants an attention-demanding multiple object tracking task during the retention interval of a VWM task. Results show that the tracking task disrupted memory for color-shape conjunctions above and beyond any impairment to working memory for object features, and that this impairment was larger when the VWM stimuli were presented at different spatial locations. These results demonstrate that the role of visuospatial attention in feature binding is not unique to perception, but extends to the working memory of these perceptual representations as well. PMID:19609460

  5. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
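
    The exposure adaptation can be pictured as a simple feedback rule on the image histogram: shorten the exposure when highlights clip, lengthen it when the scene is under-exposed. The thresholds and multipliers below are illustrative guesses, not the controller disclosed in the patent.

```python
import numpy as np

# Hedged sketch of exposure-time adaptation under threshold lighting. If too
# many pixels saturate (feature loss in highlights), back off the exposure;
# if too many are dark, lengthen it. All constants are illustrative.
def adapt_exposure(image: np.ndarray, exposure_ms: float,
                   sat_limit: float = 0.01, dark_limit: float = 0.25) -> float:
    frac_saturated = np.mean(image >= 250)   # fraction of clipped pixels
    frac_dark = np.mean(image <= 10)         # fraction of near-black pixels
    if frac_saturated > sat_limit:
        return exposure_ms * 0.8             # recover clipped feature data
    if frac_dark > dark_limit:
        return exposure_ms * 1.25            # brighten an under-exposed scene
    return exposure_ms
```

    Run once per frame, such a loop keeps feature data within the sensor's dynamic range as lighting changes.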

  6. Flexibility in Statistical Word Segmentation: Finding Words in Foreign Speech

    ERIC Educational Resources Information Center

    Graf Estes, Katharine; Gluck, Stephanie Chen-Wu; Bastos, Carolina

    2015-01-01

    The present experiments investigated the flexibility of statistical word segmentation. There is ample evidence that infants can use statistical cues (e.g., syllable transitional probabilities) to segment fluent speech. However, it is unclear how effectively infants track these patterns in unfamiliar phonological systems. We examined whether…

  7. 77 FR 43740 - Changes to the In-Bond Process; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-26

    ... various changes to the in-bond regulations to enhance CBP's ability to regulate and track in-bond...-bond merchandise is exported. In that document, CBP published a summary of its analysis under the Regulatory Flexibility Act and stated that the complete Initial Regulatory Flexibility Analysis (IRFA) was...

  8. Multimodal microfluidic platform for controlled culture and analysis of unicellular organisms.

    PubMed

    Geng, Tao; Smallwood, Chuck R; Bredeweg, Erin L; Pomraning, Kyle R; Plymale, Andrew E; Baker, Scott E; Evans, James E; Kelly, Ryan T

    2017-09-01

    Modern live-cell imaging approaches permit real-time visualization of biological processes, yet limitations exist for unicellular organism isolation, culturing, and long-term imaging that preclude fully understanding how cells sense and respond to environmental perturbations and the link between single-cell variability and whole-population dynamics. Here, we present a microfluidic platform that provides fine control over the local environment with the capacity to replace media components at any experimental time point, and provides both perfused and compartmentalized cultivation conditions depending on the valve configuration. The functionality and flexibility of the platform were validated using both bacteria and yeast having different sizes, motility, and growth media. The demonstrated ability to track the growth and dynamics of both motile and non-motile prokaryotic and eukaryotic organisms emphasizes the versatility of the devices, which should enable studies in bioenergy and environmental research.

  9. Neuropsychological functioning in older people with type 2 diabetes: the effect of controlling for confounding factors.

    PubMed

    Asimakopoulou, K G; Hampson, S E; Morrish, N J

    2002-04-01

    Neuropsychological functioning was examined in a group of 33 older (mean age 62.40 +/- 9.62 years) people with Type 2 diabetes (Group 1) and 33 non-diabetic participants matched with Group 1 on age, sex, premorbid intelligence, and presence of hypertension and cardio/cerebrovascular conditions (Group 2). Data from the diabetic group, statistically corrected for confounding factors, were compared with those from the matched control group. The results suggested small cognitive deficits in diabetic people's verbal memory and mental flexibility (Logical Memory A and SS7). No differences were seen between the two samples in simple and complex visuomotor attention, sustained complex visual attention, attention efficiency, mental double tracking, implicit memory, or self-reported memory problems. These findings indicate minimal cognitive impairment in relatively uncomplicated Type 2 diabetes and demonstrate the importance of controlling and matching for confounding factors.

  10. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, with root-mean-square errors (RMSE) for leaf width/length and stem height/diameter of 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and the number of reconstructed fine-scale 3D model shape surfaces of leaf and stem was the greatest. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348
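
    Accuracy figures of the kind quoted above follow from the standard definitions of RMSE and the coefficient of determination over paired measurements. The arrays in the usage line are made-up numbers for illustration, not the paper's data.

```python
import numpy as np

def rmse_and_r2(measured, estimated):
    """Root-mean-square error and coefficient of determination (R^2)."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    residuals = estimated - measured
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return rmse, 1.0 - ss_res / ss_tot

# Made-up leaf-width readings (mm): manual measurement vs. 3D-model estimate.
rmse, r2 = rmse_and_r2([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.1, 3.9])
```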

  11. Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes

    NASA Astrophysics Data System (ADS)

    Teppati Losè, L.; Chiabrando, F.; Spanò, A.

    2018-05-01

    The research presented in this paper focuses on a preliminary evaluation of a 360 multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, from both an operative and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth, adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to have a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on the single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported).

  12. Visual Processing of Faces in Individuals with Fragile X Syndrome: An Eye Tracking Study

    ERIC Educational Resources Information Center

    Farzin, Faraz; Rivera, Susan M.; Hessl, David

    2009-01-01

    Gaze avoidance is a hallmark behavioral feature of fragile X syndrome (FXS), but little is known about whether abnormalities in the visual processing of faces, including disrupted autonomic reactivity, may underlie this behavior. Eye tracking was used to record fixations and pupil diameter while adolescents and young adults with FXS and sex- and…

  13. The Influences of Static and Interactive Dynamic Facial Stimuli on Visual Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…

  14. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but how the third dimension interacts with human viewers is not yet clear. It has previously been found that any increased load on the visual system can create visual fatigue, as with prolonged TV watching, computer work, or video gaming. Watching S3D, however, can cause a different kind of visual fatigue, since all S3D technologies create the illusion of the third dimension based on characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue induced by watching 2D and S3D content, showing the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and reported their subjective evaluations. It was found that watching stereoscopic 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye tracking were investigated with regard to their relation to visual fatigue.

  15. Identifying elemental genomic track types and representing them uniformly

    PubMed Central

    2011-01-01

    Background: With the recent advances and availability of various high-throughput sequencing technologies, data on many molecular aspects, such as gene regulation, chromatin dynamics, and the three-dimensional organization of DNA, are rapidly being generated in an increasing number of laboratories. The variation in biological context, and the increasingly dispersed mode of data generation, imply a need for precise, interoperable and flexible representations of genomic features through formats that are easy to parse. A host of alternative formats are currently available and in use, complicating analysis and tool development. The issue of whether and how the multitude of formats reflects varying underlying characteristics of data has to our knowledge not previously been systematically treated. Results: We here identify intrinsic distinctions between genomic features, and argue that the distinctions imply that a certain variation in the representation of features as genomic tracks is warranted. Four core informational properties of tracks are discussed: gaps, lengths, values and interconnections. From this we delineate fifteen generic track types. Based on the track type distinctions, we characterize major existing representational formats and find that the track types are not adequately supported by any single format. We also find, in contrast to the XML formats, that none of the existing tabular formats are conveniently extendable to support all track types. We thus propose two unified formats for track data, an improved XML format, BioXSD 1.1, and a new tabular format, GTrack 1.0. Conclusions: The defined track types are shown to capture relevant distinctions between genomic annotation tracks, resulting in varying representational needs and analysis possibilities. The proposed formats, GTrack 1.0 and BioXSD 1.1, cater to the identified track distinctions and emphasize preciseness, flexibility and parsing convenience. PMID:22208806
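
    The four core properties can be combined into a small classifier sketch. The type names returned below are informal paraphrases invented for illustration; they are not the fifteen track-type names actually defined by the GTrack 1.0 specification.

```python
# Illustrative classification of a genomic track by the four core
# informational properties named in the abstract (gaps, lengths, values,
# interconnections). Type names are informal placeholders, not GTrack's.
def track_type(has_gaps: bool, has_lengths: bool,
               has_values: bool, has_links: bool) -> str:
    if not has_lengths:
        base = "points" if has_gaps else "partition boundaries"
    else:
        base = "segments" if has_gaps else "function/partition"
    if has_values:
        base = "valued " + base
    if has_links:
        base = "linked " + base
    return base

# A BED-like interval track has gaps and lengths but no values or links.
bed_like = track_type(True, True, False, False)
```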

  16. SacLab: A toolbox for saccade analysis to increase usability of eye tracking systems in clinical ophthalmology practice.

    PubMed

    Cercenelli, Laura; Tiberi, Guido; Corazza, Ivan; Giannaccare, Giuseppe; Fresina, Michela; Marcelli, Emanuela

    2017-01-01

    Many open source software packages have recently been developed to expand the usability of eye tracking systems for studying oculomotor behavior, but none of these is specifically designed to encompass all the main functions required for creating eye tracking tests and for providing automatic analysis of saccadic eye movements. The aim of this study is to introduce SacLab, an intuitive, freely available MATLAB toolbox based on Graphical User Interfaces (GUIs) that we have developed to increase the usability of the ViewPoint EyeTracker (Arrington Research, Scottsdale, AZ, USA) in clinical ophthalmology practice. SacLab consists of four processing modules that enable the user to easily create visual stimuli tests (Test Designer), record saccadic eye movements (Data Recorder), analyze the recorded data to automatically extract saccadic parameters of clinical interest (Data Analyzer), and provide an aggregate analysis from multiple eye movement recordings (Saccade Analyzer), without requiring any programming effort by the user. A demo application of SacLab to carry out eye tracking tests for the analysis of horizontal saccades is reported. We tested the usability of the SacLab toolbox with three ophthalmologists who had no programming experience; the ophthalmologists were briefly trained in the use of the SacLab GUIs and were asked to perform the demo application. The toolbox received enthusiastic feedback from all the clinicians in terms of intuitiveness, ease of use, and flexibility. Test creation and data processing were accomplished in 52 ± 21 s and 46 ± 19 s, respectively, using the SacLab GUIs. SacLab may represent a useful tool to ease the application of the ViewPoint EyeTracker system in routine clinical ophthalmology. Copyright © 2016 Elsevier Ltd. All rights reserved.
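
    A saccade-extraction step of the kind a Data Analyzer module automates is commonly implemented as a velocity threshold over the gaze trace. The sketch below is an independent Python illustration, not SacLab's MATLAB code, and the 30 deg/s threshold is only a typical default, not SacLab's.

```python
import numpy as np

# Velocity-threshold saccade detection (a common textbook approach).
# Samples whose angular velocity exceeds the threshold are grouped into
# (onset, offset) runs, each reported as one saccade.
def detect_saccades(angle_deg: np.ndarray, fs: float, vel_thresh: float = 30.0):
    velocity = np.abs(np.diff(angle_deg)) * fs        # deg/s between samples
    moving = velocity > vel_thresh
    saccades, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                                 # saccade onset
        elif not m and start is not None:
            saccades.append((start, i))               # (onset, offset) indices
            start = None
    if start is not None:
        saccades.append((start, len(moving)))
    return saccades
```

    From each (onset, offset) pair, clinical parameters such as latency, amplitude, and peak velocity can then be derived.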

  17. Visual speech influences speech perception immediately but not automatically.

    PubMed

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  18. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation running AVS is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered through programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  19. A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements.

    PubMed

    Hauswald, Anne; Lithari, Chrysa; Collignon, Olivier; Leonardelli, Elisa; Weisz, Nathan

    2018-05-07

    Successful lip-reading requires a mapping from visual to phonological information [1]. Recently, visual and motor cortices have been implicated in tracking lip movements (e.g., [2]). It remains unclear, however, whether visuo-phonological mapping occurs already at the level of the visual cortex, that is, whether this structure tracks the acoustic signal in a functionally relevant manner. To elucidate this, we investigated how the cortex tracks (i.e., entrains to) absent acoustic speech signals carried by silent lip movements. Crucially, we contrasted the entrainment to unheard forward (intelligible) and backward (unintelligible) acoustic speech. We observed that the visual cortex exhibited stronger entrainment to the unheard forward acoustic speech envelope compared to the unheard backward acoustic speech envelope. Supporting the notion of a visuo-phonological mapping process, this forward-backward difference of occipital entrainment was not present for actually observed lip movements. Importantly, the respective occipital region received more top-down input, especially from left premotor, primary motor, and somatosensory regions and, to a lesser extent, also from posterior temporal cortex. Strikingly, across participants, the extent of top-down modulation of the visual cortex stemming from these regions partially correlated with the strength of entrainment to the absent acoustic forward speech envelope, but not to present forward lip movements. Our findings demonstrate that a distributed cortical network, including key dorsal stream auditory regions [3-5], influences how the visual cortex shows sensitivity to the intelligibility of speech while tracking silent lip movements. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. The Role of Visual Working Memory in Attentive Tracking of Unique Objects

    ERIC Educational Resources Information Center

    Makovski, Tal; Jiang, Yuhong V.

    2009-01-01

    When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…

  1. Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2011-10-01

    The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. A shared, flexible neural map architecture reflects capacity limits in both visual short-term memory and enumeration.

    PubMed

    Knops, André; Piazza, Manuela; Sengupta, Rakesh; Eger, Evelyn; Melcher, David

    2014-07-23

    Human cognition is characterized by severe capacity limits: we can accurately track, enumerate, or hold in mind only a small number of items at a time. It remains debated whether capacity limitations across tasks are determined by a common system. Here we measure brain activation of adult subjects performing either a visual short-term memory (vSTM) task consisting of holding in mind precise information about the orientation and position of a variable number of items, or an enumeration task consisting of assessing the number of items in those sets. We show that task-specific capacity limits (three to four items in enumeration and two to three in vSTM) are neurally reflected in the activity of the posterior parietal cortex (PPC): an identical set of voxels in this region, commonly activated during the two tasks, changed its overall response profile reflecting task-specific capacity limitations. These results, replicated in a second experiment, were further supported by multivariate pattern analysis in which we could decode the number of items presented over a larger range during enumeration than during vSTM. Finally, we simulated our results with a computational model of PPC using a saliency map architecture in which the level of mutual inhibition between nodes gives rise to capacity limitations and reflects the task-dependent precision with which objects need to be encoded (high precision for vSTM, lower precision for enumeration). Together, our work supports the existence of a common, flexible system underlying capacity limits across tasks in PPC that may take the form of a saliency map. Copyright © 2014 the authors.
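
    The capacity-limit mechanism can be sketched as a saliency map in which nodes compete through mutual inhibition. The dynamics and parameter values below are illustrative choices, not those fitted in the paper, but they reproduce the qualitative effect: stronger mutual inhibition leaves fewer items active.

```python
import numpy as np

# Toy saliency map: each node receives a fixed input drive and is inhibited
# by the summed activity of all other nodes. Stronger mutual inhibition lets
# fewer items remain active, i.e., a smaller capacity. All parameters are
# illustrative, not the paper's fitted values.
def settle(inputs, inhibition, dt=0.1, steps=400):
    drive = np.asarray(inputs, dtype=float)
    a = np.zeros_like(drive)
    for _ in range(steps):
        da = drive - inhibition * (a.sum() - a) - a   # input - inhibition - decay
        a = np.clip(a + dt * da, 0.0, 1.0)            # bounded firing rates
    return a

items = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5])      # six items, graded salience
weak = settle(items, inhibition=0.05)                  # low inhibition: all six stay active
strong = settle(items, inhibition=0.4)                 # high inhibition: only the top four survive
```

    The same network thus exhibits a lower capacity simply by raising the inhibition level, mirroring the task-dependent precision argument above.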

  3. Target recognitions in multiple-camera closed-circuit television using color constancy

    NASA Astrophysics Data System (ADS)

    Soori, Umair; Yuen, Peter; Han, Ji Wen; Ibrahim, Izzati; Chen, Wentao; Hong, Kan; Merfort, Christian; James, David; Richardson, Mark

    2013-04-01

    People tracking in crowded scenes from closed-circuit television (CCTV) footage has been a popular and challenging task in computer vision. Due to the limited spatial resolution of CCTV footage, the color of people's dress may offer an alternative feature for their recognition and tracking. However, many factors, such as variable illumination conditions, viewing angles, and camera calibration, may induce illusive modification of the intrinsic color signatures of the target. Our objective is to recognize and track targets in multiple camera views using color as the detection feature, and to understand whether a color constancy (CC) approach may help to reduce these color illusions due to illumination and camera artifacts and thereby improve target recognition performance. We have tested a number of CC algorithms using various color descriptors to assess the efficiency of target recognition on a real multicamera Imagery Library for Intelligent Detection Systems (i-LIDS) data set. Various classifiers have been used for target detection, and the figure of merit for assessing the efficiency of target recognition is the area under the receiver operating characteristic curve (AUROC). We have proposed two modifications of luminance-based CC algorithms: one with a color transfer mechanism and the other using a pixel-wise sigmoid function for adaptive dynamic range compression, a method termed enhanced luminance reflectance CC (ELRCC). We found that both algorithms improve the efficiency of target recognition substantially over that of the raw data without CC treatment, and in some cases the ELRCC improves target tracking by over 100% within the AUROC assessment metric. The performance of the ELRCC has been assessed over 10 selected targets from three different camera views of the i-LIDS footage, and the averaged target recognition efficiency over all these targets is found to be improved by about 54% in AUROC after the data are processed by the proposed ELRCC algorithm. This amount of improvement represents a reduction in the probability of false alarm by about a factor of 5 at a probability of detection of 0.5. Our study concerns mainly the detection of colored targets; issues in the recognition of white or gray targets will be addressed in a forthcoming study.
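
    The pixel-wise sigmoid idea can be illustrated with a generic form of adaptive dynamic range compression centred on the scene's mean luminance. The gain value and the mean-centring below are assumptions for illustration; the exact ELRCC transfer function is the authors' and is not reproduced here.

```python
import numpy as np

# Generic pixel-wise sigmoid compression of a normalised luminance channel.
# Centring on the scene mean adapts the curve to overall brightness; the
# gain controls how strongly extremes are compressed. Both are illustrative.
def sigmoid_compress(luminance: np.ndarray, gain: float = 10.0) -> np.ndarray:
    centre = luminance.mean()
    return 1.0 / (1.0 + np.exp(-gain * (luminance - centre)))
```

    Shadows and highlights are pulled toward a usable mid-range before the color descriptors are computed, which is the role such compression plays in a luminance-based CC pipeline.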

  4. Visual arts training is linked to flexible attention to local and global levels of visual stimuli.

    PubMed

    Chamberlain, Rebecca; Wagemans, Johan

    2015-10-01

    Observational drawing skill has been shown to be associated with the ability to focus on local visual details. It is unclear whether superior performance in local processing reflects an ability to attend to, and flexibly switch between, local and global levels of visual stimuli. It is also unknown whether these attentional enhancements are specific to observational drawing skill or a product of a wide range of artistic activities. The current study aimed to address these questions by testing whether flexible visual processing predicts artistic group membership and observational drawing skill in a sample of first-year bachelor's degree art students (n=23) and non-art students (n=23). A pattern of local and global visual processing enhancements was found in relation to artistic group membership and drawing skill, with local processing ability specifically related to individual differences in drawing skill. Enhanced global processing and more fluent switching between local and global levels of hierarchical stimuli predicted both drawing skill and artistic group membership, suggesting that these are beneficial attentional mechanisms for art-making in a range of domains. These findings support a top-down attentional model of artistic expertise and shed light on the domain-specific and domain-general attentional enhancements induced by proficiency in the visual arts. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as the semantic congruency and temporal synchrony between the auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage increases with congruent visual and auditory information. However, violations of audio-visual synchrony have hardly any influence on memory performance. Memory performance remained intact even with sequential presentation of auditory and visual information, but declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against temporal asynchrony between the modalities.

  6. Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking

    PubMed Central

    Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng

    2017-01-01

    Compared with a fixed fusion structure, a flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems and can effectively help the system achieve its goal under the given constraints. Because of the time-varying situation of the task network system, induced by moving nodes and non-cooperative targets, and because of limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, in a given adjustment period. To this end, this paper studies the design of a flexible fusion algorithm using an optimization learning technique. The purpose is to dynamically determine the number of sensors and the associated sensors that take part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. First, two system performance indexes are introduced; in particular, a survivability index is presented and defined. Second, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243
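
    The sensor subset selection that such optimization models solve can be illustrated with a toy exhaustive search. The field names, the weighted objective, and the bandwidth constraint below are our assumptions for illustration, not the paper's actual indexes:

```python
from itertools import combinations

def select_sensors(sensors, bandwidth_budget):
    """Toy sketch of sensor subset selection (hypothetical fields):
    each sensor dict has a tracking-accuracy 'gain', a survivability
    score 'surv', and a bandwidth 'cost'.  Exhaustively search the
    subsets that fit the bandwidth budget and keep the one maximizing
    a weighted sum of the two performance indexes."""
    best, best_score = (), float('-inf')
    for k in range(1, len(sensors) + 1):
        for subset in combinations(range(len(sensors)), k):
            cost = sum(sensors[i]['cost'] for i in subset)
            if cost > bandwidth_budget:
                continue  # violates the communication constraint
            score = sum(0.7 * sensors[i]['gain'] + 0.3 * sensors[i]['surv']
                        for i in subset)
            if score > best_score:
                best, best_score = subset, score
    return best, best_score
```

    Exhaustive search is only viable for small networks; the paper's point is precisely that a learned, periodically re-solved optimization replaces this brute force for time-varying systems.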

  7. Multiple feature fusion via covariance matrix for visual tracking

    NASA Astrophysics Data System (ADS)

    Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui

    2018-04-01

    To address the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of tracking. Within the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge, and texture features, and uses a fast covariance intersection algorithm to update the model. The low dimensionality of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference such as occlusion, rotation, deformation, and motion blur.
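
    The region covariance descriptor at the core of the fusion is simply the sample covariance of per-pixel feature vectors; a minimal plain-Python sketch (the paper's actual channels are color, edge, and texture features):

```python
def region_covariance(features):
    """Compute the d x d sample covariance matrix over a region's
    per-pixel feature vectors, e.g. [R, G, B, |Ix|, |Iy|].  The
    matrix serves as a compact, low-dimensional region descriptor.

    features: list of length-d feature vectors (one per pixel)."""
    n, d = len(features), len(features[0])
    mean = [sum(f[j] for f in features) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for j in range(d):
            for k in range(d):
                cov[j][k] += (f[j] - mean[j]) * (f[k] - mean[k])
    # unbiased normalization by n - 1
    return [[cov[j][k] / (n - 1) for k in range(d)] for j in range(d)]
```

    Because the descriptor is a fixed d×d matrix regardless of region size, matching candidate regions reduces to comparing small matrices, which is what makes the fusion step cheap.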

  8. Nanoscale measurements of proton tracks using fluorescent nuclear track detectors

    PubMed Central

    Sawakuchi, Gabriel O.; Ferreira, Felisberto A.; McFadden, Conor H.; Hallacy, Timothy M.; Granville, Dal A.; Sahoo, Narayan; Akselrod, Mark S.

    2016-01-01

    Purpose: The authors describe a method in which fluorescence nuclear track detectors (FNTDs), novel track detectors with nanoscale spatial resolution, are used to determine the linear energy transfer (LET) of individual proton tracks from proton therapy beams by allowing visualization and 3D reconstruction of such tracks. Methods: FNTDs were exposed to proton therapy beams with nominal energies ranging from 100 to 250 MeV. Proton track images were then recorded by confocal microscopy of the FNTDs. Proton tracks in the FNTD images were fit by using a Gaussian function to extract fluorescence amplitudes. Histograms of fluorescence amplitudes were then compared with LET spectra. Results: The authors successfully used FNTDs to register individual proton tracks from high-energy proton therapy beams, allowing reconstruction of 3D images of proton tracks along with delta rays. The track amplitudes from FNTDs could be used to parameterize LET spectra, allowing the LET of individual proton tracks from therapeutic proton beams to be determined. Conclusions: FNTDs can be used to directly visualize proton tracks and their delta rays at the nanoscale level. Because the track intensities in the FNTDs correlate with LET, they could be used further to measure LET of individual proton tracks. This method may be useful for measuring nanoscale radiation quantities and for measuring the LET of individual proton tracks in radiation biology experiments. PMID:27147359
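
    A simple way to see how amplitude and width can be extracted from a Gaussian-shaped fluorescence profile is the method of moments; this is a stand-in sketch, not the authors' fitting code (they fit a Gaussian function to the track images):

```python
import math

def fit_gaussian_amplitude(profile):
    """Moment-based estimate of the amplitude, centre, and width of a
    Gaussian-shaped fluorescence profile across a track.

    profile: list of (x, intensity) samples along a line through the
    track.  Returns (amplitude, mu, sigma)."""
    total = sum(i for _, i in profile)
    # centroid (first moment) and spread (second central moment)
    mu = sum(x * i for x, i in profile) / total
    var = sum(i * (x - mu) ** 2 for x, i in profile) / total
    sigma = math.sqrt(var)
    # crude amplitude estimate: peak sample intensity
    amplitude = max(i for _, i in profile)
    return amplitude, mu, sigma
```

    In the paper these amplitudes, histogrammed over many tracks, are what get compared against the LET spectra.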

  9. Nanoscale measurements of proton tracks using fluorescent nuclear track detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawakuchi, Gabriel O., E-mail: gsawakuchi@mdanderson.org; Sahoo, Narayan; Ferreira, Felisberto A.

    Purpose: The authors describe a method in which fluorescence nuclear track detectors (FNTDs), novel track detectors with nanoscale spatial resolution, are used to determine the linear energy transfer (LET) of individual proton tracks from proton therapy beams by allowing visualization and 3D reconstruction of such tracks. Methods: FNTDs were exposed to proton therapy beams with nominal energies ranging from 100 to 250 MeV. Proton track images were then recorded by confocal microscopy of the FNTDs. Proton tracks in the FNTD images were fit by using a Gaussian function to extract fluorescence amplitudes. Histograms of fluorescence amplitudes were then compared with LET spectra. Results: The authors successfully used FNTDs to register individual proton tracks from high-energy proton therapy beams, allowing reconstruction of 3D images of proton tracks along with delta rays. The track amplitudes from FNTDs could be used to parameterize LET spectra, allowing the LET of individual proton tracks from therapeutic proton beams to be determined. Conclusions: FNTDs can be used to directly visualize proton tracks and their delta rays at the nanoscale level. Because the track intensities in the FNTDs correlate with LET, they could be used further to measure LET of individual proton tracks. This method may be useful for measuring nanoscale radiation quantities and for measuring the LET of individual proton tracks in radiation biology experiments.

  10. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use, and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization, including a layer compositor editor, a programmable shader editor, a material editor, and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager, which keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. Hence, if a new component is added that supports the IMaterial interface, any instances of it can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer, which allows rapid prototyping of new shader-based visualization renderings and greatly accelerates the development and debug cycle.
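
    The interface/proxy/AssetManager pattern described above can be sketched as follows; IMaterial and AssetManager are named in the abstract, while the remaining classes and methods are illustrative assumptions:

```python
class IMaterial:
    """Component interface; concrete back-ends and proxies both
    implement it, so GUI components can depend on the interface
    alone."""
    def shade(self):
        raise NotImplementedError

class PhongMaterial(IMaterial):
    # concrete back-end implementation (illustrative)
    def shade(self):
        return "phong"

class MaterialProxy(IMaterial):
    # lightweight proxy forwarding to a concrete back-end; objects
    # like this are what get streamed to remote clients
    def __init__(self, backend):
        self._backend = backend
    def shade(self):
        return self._backend.shade()

class AssetManager:
    # registry of proxies; GUI components query it by interface, so
    # newly registered components are picked up with no extra wiring
    def __init__(self):
        self._assets = []
    def register(self, proxy):
        self._assets.append(proxy)
    def query(self, interface):
        return [a for a in self._assets if isinstance(a, interface)]
```

    A material editor would call `query(IMaterial)` to populate its list, which is how a new IMaterial implementation becomes usable without touching the GUI code.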

  11. The effects of tDCS upon sustained visual attention are dependent on cognitive load.

    PubMed

    Roe, James M; Nesheim, Mathias; Mathiesen, Nina C; Moberget, Torgeir; Alnæs, Dag; Sneve, Markus H

    2016-01-08

    Transcranial Direct Current Stimulation (tDCS) modulates the excitability of neuronal responses and consequently can affect performance on a variety of cognitive tasks. However, the interaction between cognitive load and the effects of tDCS is currently not well understood. We recorded the performance accuracy of participants on a bilateral multiple object tracking task while they underwent bilateral stimulation assumed to enhance (anodal) or decrease (cathodal) neuronal excitability. Stimulation was applied to the posterior parietal cortex (PPC), a region inferred to be at the centre of an attentional tracking network that shows load-dependent activation. 34 participants underwent three separate stimulation conditions across three days: (1) left cathodal / right anodal PPC tDCS, (2) left anodal / right cathodal PPC tDCS, and (3) sham tDCS. The number of targets to be tracked was also manipulated, giving low (one target per visual field), medium (two targets per visual field), and high (three targets per visual field) tracking load conditions. Tracking performance at high attentional loads was significantly reduced in both stimulation conditions relative to sham, and this was apparent in both visual fields, regardless of the polarity applied to each hemisphere. We interpret this as an interaction between cognitive load and tDCS, and suggest that tDCS may degrade attentional performance when cognitive networks become overtaxed and unable to compensate. Systematically varying cognitive load may therefore be a fruitful direction for elucidating the effects of tDCS upon cognitive functions. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Perceptual training yields rapid improvements in visually impaired youth.

    PubMed

    Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje

    2016-11-30

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception in visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception is underutilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that the well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.

  13. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging is important for extending the life of a reservoir. The most effective and economical approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to hold back the water, construct the intake of the tunnel inside it, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e., launching, dragging, water injection, and sinking. To increase construction safety, photogrammetry is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection, from which the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed: performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, while relative orientation computation offers flexibility for dynamic motion analysis and is easier and more efficient.
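
    A 3D conformal (seven-parameter similarity) transformation applies a scale, three rotations, and a translation. A minimal sketch of applying such a transformation, using the usual photogrammetric omega/phi/kappa angle convention (solving for the parameters by least squares, as the paper does, is not shown):

```python
import math

def _matmul(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def conformal_3d(point, scale, omega, phi, kappa, t):
    """Apply a 7-parameter 3D conformal (similarity) transformation
    to a point: p' = scale * R(omega, phi, kappa) * p + t, with
    R = Rz(kappa) Ry(phi) Rx(omega)."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    rx = [[1, 0, 0], [0, co, -so], [0, so, co]]
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    rz = [[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]
    r = _matmul(rz, _matmul(ry, rx))
    return [scale * sum(r[i][j] * point[j] for j in range(3)) + t[i]
            for i in range(3)]
```

    Estimating the seven parameters from corresponding camera coordinates at successive epochs is what yields the motion of the pipe between frames.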

  14. Expertise Differences in the Comprehension of Visualizations: A Meta-Analysis of Eye-Tracking Research in Professional Domains

    ERIC Educational Resources Information Center

    Gegenfurtner, Andreas; Lehtinen, Erno; Saljo, Roger

    2011-01-01

    This meta-analysis integrates 296 effect sizes reported in eye-tracking research on expertise differences in the comprehension of visualizations. Three theories were evaluated: Ericsson and Kintsch's ("Psychol Rev" 102:211-245, 1995) theory of long-term working memory, Haider and Frensch's ("J Exp Psychol Learn Mem Cognit" 25:172-190, 1999)…

  15. TrAVis to Enhance Online Tutoring and Learning Activities: Real-Time Visualization of Students Tracking Data

    ERIC Educational Resources Information Center

    May, Madeth; George, Sebastien; Prevot, Patrick

    2011-01-01

    Purpose: This paper presents a part of our research work that places an emphasis on Tracking Data Analysis and Visualization (TrAVis) tools, a web-based system, designed to enhance online tutoring and learning activities, supported by computer-mediated communication (CMC) tools. TrAVis is particularly dedicated to assist both tutors and students…

  16. School Type Differences in Attainment of Developmental Goals in Students with Visual Impairment and Sighted Peers

    ERIC Educational Resources Information Center

    Pfeiffer, Jens P.; Pinquart, Martin; Munchow, Hannes

    2012-01-01

    The present study analyzed whether the perceived attainment of developmental goals differs by school type and between adolescents with visual impairments and sighted peers. We created a matched-pair design with 98 German students from the middle school track and 98 from the highest school track; half of the members of each group were visually…

  17. Active eye-tracking improves LASIK results.

    PubMed

    Lee, Yuan-Chieh

    2007-06-01

    To study the advantage of active eye-tracking for photorefractive surgery. In a prospective, double-masked study, LASIK for myopia and myopic astigmatism was performed in 50 patients using the ALLEGRETTO WAVE version 1007. All patients received LASIK with full comprehension of the importance of fixation during the procedure. All surgical procedures were performed by a single surgeon. The eye-tracker was turned off in one group (n = 25) and kept on in the other (n = 25). Preoperatively and 3 months postoperatively, patients underwent a standard ophthalmic examination, which included corneal topography. Among the patients treated with the eye-tracker off, all had uncorrected visual acuity (UCVA) of 20/40 or better and 64% had 20/20 or better. Compared with the patients treated with the eye-tracker on, they had higher residual cylindrical astigmatism (P < .05). Those treated with the eye-tracker on achieved better UCVA and best spectacle-corrected visual acuity (P < .05). Spherical error and potential visual acuity (TMS-II) were not significantly different between the groups. The flying-spot system can achieve a fair result without active eye-tracking, but active eye-tracking helps improve the visual outcome and reduces postoperative cylindrical astigmatism.

  18. Adaptive particle filter for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai

    2009-10-01

    Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In a standard particle filter, the state transition model for predicting the next location of the tracked object assumes the object motion is invariant, which cannot adequately approximate the varying dynamics of motion changes. In addition, the state estimate calculated as the mean of all weighted particles is coarse or inaccurate due to various noise disturbances. Both factors may degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. Experimental results show that the APF increases tracking accuracy and efficiency in complex environments.
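
    The velocity-updating transition model (VTM) can be sketched in one dimension. The smoothing form and parameter names below are our reading of the abstract, not the paper's exact recursion:

```python
import random

def update_velocity(v_prev, est_prev, est_curr, alpha=0.5):
    # recursive velocity update: blend the previous velocity with the
    # displacement observed between successive state estimates
    return alpha * v_prev + (1.0 - alpha) * (est_curr - est_prev)

def transition(particles, velocity, noise=1.0):
    # predict each particle forward using the current velocity
    # estimate plus Gaussian process noise
    return [x + velocity + random.gauss(0.0, noise) for x in particles]
```

    Feeding the smoothed velocity back into the prediction step is what lets the filter follow accelerating or decelerating targets that a constant-motion model would lose.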

  19. Coating and functionalization of high density ion track structures by atomic layer deposition

    NASA Astrophysics Data System (ADS)

    Mättö, Laura; Szilágyi, Imre M.; Laitinen, Mikko; Ritala, Mikko; Leskelä, Markku; Sajavaara, Timo

    2016-10-01

    In this study, flexible TiO2-coated porous Kapton membranes with electron multiplication properties are presented. Crossing pores 800 nm in diameter were fabricated in 50 μm thick Kapton membranes using ion track technology and chemical etching. Subsequently, 50 nm TiO2 films were deposited into the pores of the Kapton membranes by atomic layer deposition using Ti(iOPr)4 and water as precursors at 250 °C. The TiO2 films and coated membranes were studied by scanning electron microscopy (SEM), X-ray diffraction (XRD), and X-ray reflectometry (XRR). Au metal electrodes were fabricated onto both sides of the coated foils by electron beam evaporation. Electron multipliers were obtained by joining two coated membranes separated by a conductive spacer. The results show that electron multiplication can be achieved using ALD-coated flexible ion track polymer foils.

  20. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller with unknown initial errors is developed for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Unlike traditional prescribed performance control (PPC), which requires the initial errors to be known accurately, this paper investigates tracking control without accurate initial errors by exploiting a new performance function. A combined neural back-stepping and minimal learning parameter (MLP) technique is employed to develop a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlights are that the transient performance of the velocity and altitude tracking errors is satisfactory and the computational load of the neural approximation is low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
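
    Standard PPC bounds the tracking error by a decaying envelope rho(t); a sketch of the classical performance function follows. The paper's contribution is a modified function that removes the usual requirement that the initial error satisfy |e(0)| < rho(0), which is not reproduced here:

```python
import math

def performance_envelope(t, rho0, rho_inf, decay):
    """Classical exponentially decaying prescribed-performance
    function: rho(t) = (rho0 - rho_inf) * exp(-decay * t) + rho_inf.
    The tracking error is constrained to stay inside +/- rho(t),
    so rho_inf sets the steady-state accuracy and decay sets the
    guaranteed convergence rate."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf
```

    The envelope shrinks from rho0 at t = 0 toward rho_inf, which is exactly why standard PPC needs the initial error known: the error must start inside the envelope.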

  1. The spacecraft control laboratory experiment optical attitude measurement system

    NASA Technical Reports Server (NTRS)

    Welch, Sharon S.; Montgomery, Raymond C.; Barsky, Michael F.

    1991-01-01

    A stereo camera tracking system was developed to provide a near real-time measure of the position and attitude of the Spacecraft Control Laboratory Experiment (SCOLE). The SCOLE is a mockup of a shuttle-like vehicle with an attached flexible mast and (simulated) antenna, and was designed to provide a laboratory environment for the verification and testing of control laws for large flexible spacecraft. Actuators and sensors located on the shuttle and antenna sense the states of the spacecraft and allow the position and attitude to be controlled. The stereo camera tracking system consists of two position-sensitive detector cameras that sense the locations of small infrared LEDs attached to the surface of the shuttle. Information on shuttle position and attitude is provided in six degrees of freedom. The design of the optical system, its calibration, and the tracking algorithm are described. The performance of the system is evaluated for yaw only.
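
    For an idealized rectified two-camera geometry, the textbook depth-from-disparity relation conveys the idea behind the position measurement. This is a simplification for illustration; the actual system uses position-sensitive detector cameras with a full calibration and a six-degree-of-freedom solution:

```python
def stereo_depth(x_left, x_right, focal, baseline):
    """Depth of a point from its image positions in two rectified
    cameras: z = f * b / (x_left - x_right), where f is the focal
    length (pixels), b the camera baseline, and x_left - x_right
    the disparity."""
    disparity = x_left - x_right
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    return focal * baseline / disparity
```

    With depth known for several LEDs at measured image positions, the LED constellation's 3D pose, and hence the vehicle's attitude, can be recovered.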

  2. Evaluation of Kapton pyrolysis, arc tracking, and arc propagation on the Space Station Freedom (SSF) solar array Flexible Current Carrier (FCC)

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.

    1991-01-01

    Recent studies involving the use of polyimide Kapton coated wires indicate that if a momentary electrical short circuit occurs between two wires, sufficient heating of the Kapton can occur to thermally char (pyrolyze) the Kapton. Such charred Kapton has sufficient electrical conductivity to create an arc which tracks down the wires and possibly propagates to adjoining wires. These studies prompted an investigation to ascertain the likelihood of the Kapton pyrolysis, arc tracking and propagation phenomena, and the magnitude of destruction conceivably inflicted on Space Station Freedom's (SSF) Flexible Current Carrier (FCC) for the photovoltaic array. The geometric layout of the FCC, having a planar-type orientation as opposed to bundles, may reduce the probability of sustaining an arc. An experimental investigation was conducted to simulate conditions under which an arc can occur on the FCC of SSF, and the consequences of arc initiation.

  4. Optimal Reference Strain Structure for Studying Dynamic Responses of Flexible Rockets

    NASA Technical Reports Server (NTRS)

    Tsushima, Natsuki; Su, Weihua; Wolf, Michael G.; Griffin, Edwin D.; Dumoulin, Marie P.

    2017-01-01

    In the proposed paper, the optimal design of reference strain structures (RSS) will be performed, targeting accurate observation of the dynamic bending and torsion deformation of a flexible rocket. It will provide a detailed description of the finite-element (FE) model of a notional flexible rocket created in MSC.Patran. The RSS will be attached longitudinally along the side of the rocket to track the deformation of the thin-walled structure under external loads. An integrated surrogate-based multi-objective optimization approach will be developed to find the optimal design of the RSS using the FE model. The Kriging method will be used to construct the surrogate model. For data sampling and performance evaluation, static/transient analyses will be performed with MSC.Nastran/Patran. The multi-objective optimization will be solved with NSGA-II to minimize the difference between the strains of the launch vehicle and the RSS. Finally, the performance of the optimal RSS will be evaluated by checking its strain-tracking capability in different numerical simulations of the flexible rocket.

  5. STAR: an integrated solution to management and visualization of sequencing data.

    PubMed

    Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W; Ecker, Joseph R; Millar, A Harvey; Ren, Bing; Wang, Wei

    2013-12-15

    Easy visualization of complex data features is a necessary step in studies of next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization, and track-based analysis of NGS data. STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas, and asynchronous communication to deliver a smoothly scrolling, desktop-like graphical user interface with a suite of in-browser analysis tools that range from simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization, and analysis of third-party sequencing data from the public domain, with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution matched to the requirements of NGS data, enabling both intuitive visualization and dynamic analysis. The STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser.

  6. Parabolic trough solar collector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, J.H.

    1985-01-15

    A parabolic trough solar collector using reflective flexible materials is disclosed. A parabolic cylinder mirror is formed by stretching a flexible reflecting material between two parabolic end formers. The formers are held in place by a spreader bar. The resulting mirror is made to track the sun, focusing the sun's rays on a receiver tube. The ends of the reflective material are attached by glue or other suitable means to attachment straps. The flexible mirror is then attached to the formers. The attachment straps are mounted in brackets and tensioned by tightening associated nuts on the ends of the attachment straps. This serves both to stretch the flexible material orthogonal to the receiver tube and to hold the flexible material on the formers. The flexible mirror is stretched in the direction of the receiver tube by adjusting tensioning nuts. If materials with matching coefficients of expansion for temperature and humidity have been chosen, for example, aluminum foil for the flexible mirror and aluminum for the spreader bar, the mirror will stay in adjustment through temperature and humidity excursions. With dissimilar materials, e.g., aluminized mylar or other polymeric material and steel, spacers can be replaced with springs to maintain proper adjustment. The spreader bar cross section is chosen to be in the optic shadow of the receiver tube when tracking and not to intercept rays of the sun that would otherwise reach the receiver tube. This invention can also be used to make non-parabolic mirrors for other apparatus and applications.

  7. Visualizing the Verbal and Verbalizing the Visual.

    ERIC Educational Resources Information Center

    Braden, Roberts A.

    This paper explores relationships of visual images to verbal elements, beginning with a discussion of visible language as represented by words printed on the page. The visual flexibility inherent in typography is discussed in terms of the appearance of the letters and the denotative and connotative meanings represented by type, typographical…

  8. Electrophysiological evidence for right frontal lobe dominance in spatial visuomotor learning.

    PubMed

    Lang, W; Lang, M; Kornhuber, A; Kornhuber, H H

    1986-02-01

    Slow negative potential shifts were recorded together with the error made in motor performance when two different groups of 14 students tracked visual stimuli with their right hand. Various visuomotor tasks were compared. A tracking task (T) in which subjects had to track the stimulus directly, showed no decrease of error in motor performance during the experiment. In a distorted tracking task (DT) a continuous horizontal distortion of the visual feedback had to be compensated. The additional demands of this task required visuomotor learning. Another learning condition was a mirrored-tracking task (horizontally inverted tracking, hIT), i.e. an elementary function, such as the concept of changing left and right, was interposed between perception and action. In addition, subjects performed a no-tracking control task (NT) in which they started the visual stimulus without tracking it. A slow negative potential shift was associated with the visuomotor performance (TP: tracking potential). In the learning tasks (DT and hIT) this negativity was significantly enhanced over the anterior midline and in hIT frontally and precentrally over both hemispheres. Comparing hIT and T for every subject, the enhancement of the tracking potential in hIT was correlated with the success in motor learning in frontomedial and bilaterally in frontolateral recordings (r = 0.81-0.88). However, comparing DT and T, such a correlation was only found in frontomedial and right frontolateral electrodes (r = 0.5-0.61), but not at the left frontolateral electrode. These experiments are consistent with previous findings and give further neurophysiological evidence for frontal lobe activity in visuomotor learning. The hemispherical asymmetry is discussed with respect to hemispherical specialization (right frontal lobe dominance in spatial visuomotor learning).

  9. Changes in lava effusion rate, explosion characteristics and degassing revealed by time-series photogrammetry and feature tracking velocimetry of Santiaguito lava dome

    NASA Astrophysics Data System (ADS)

    Andrews, B. J.; Grocke, S.; Benage, M.

    2016-12-01

    The Santiaguito dome complex, Guatemala, provides a unique opportunity to observe an active lava dome with an array of DSLR and video cameras from the safety of Santa Maria volcano, a vantage point 2500 m away from and 1000 m above the dome. Radio triggered DSLR cameras can collect synchronized images at rates up to 10 frames/minute. Single-camera datasets describe lava dome surface motions and application of Feature-Tracking-Velocimetry (FTV) to the image sequences measures apparent lava flow surface velocities (as projected onto the camera-imaging plane). Multi-camera datasets describe the lava dome surface topography and 3D velocity field; this 4D photogrammetric approach yields georeferenced point clouds and DEMs with specific points or features tracked through time. HD video cameras document explosions and characterize those events as comparatively gas-rich or ash-rich. Comparison of observations collected during January and November 2012 and January 2016 reveals changes in the effusion rate and explosion characteristics at the active Santiaguito dome that suggest a change in shallow degassing behavior. The 2012 lava dome had numerous incandescent regions and surface velocities of 3 m/hr along the southern part of the dome summit where the dome fed a lava flow. The 2012 dome also showed a remarkably periodic (26±6 minute) pattern of inflation and deflation interpreted to reflect gas accumulation and release, with some releases occurring explosively. Video observations show that the explosion plumes were generally ash-poor. In contrast, the January 2016 dome exhibited very limited incandescence, and had reduced surface velocities of <1 m/hr. Explosions occurred infrequently, but were generally longer duration (e.g., 90-120 s compared to 30 s) and more ash-rich than those in 2012. We suggest that the reduced lava effusion rate in 2016 produced a net increase in the gas accumulation capacity of the shallow magma, and thus larger, less-frequent explosions.
These findings indicate that gas permeability may be proportional to magma ascent and strain rate in dome-forming eruptions.
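
    The record names Feature-Tracking-Velocimetry (FTV) but does not describe the algorithm. As a rough illustration of the underlying idea only, the sketch below estimates the frame-to-frame displacement of a surface feature by exhaustive block matching; the function name and all parameters are hypothetical, and an apparent velocity would follow as displacement divided by the frame interval, scaled by the camera's ground resolution.

    ```python
    import numpy as np

    def estimate_displacement(frame_a, frame_b, y, x, patch=16, search=8):
        """Estimate the (dy, dx) displacement of a patch centered at (y, x)
        between two grayscale frames by exhaustive block matching (SSD)."""
        half = patch // 2
        template = frame_a[y - half:y + half, x - half:x + half].astype(float)
        best, best_off = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = frame_b[y - half + dy:y + half + dy,
                               x - half + dx:x + half + dx].astype(float)
                ssd = np.sum((template - cand) ** 2)  # sum of squared differences
                if ssd < best:
                    best, best_off = ssd, (dy, dx)
        return best_off

    # Synthetic demo: a textured frame shifted by (2, 3) pixels between exposures.
    rng = np.random.default_rng(0)
    frame_a = rng.random((64, 64))
    frame_b = np.roll(frame_a, shift=(2, 3), axis=(0, 1))
    dy, dx = estimate_displacement(frame_a, frame_b, y=32, x=32)
    # velocity = (dy, dx) / frame_interval, scaled to meters per pixel
    ```

    Repeating this over a grid of patches yields the apparent surface-velocity field projected onto the camera-imaging plane.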

  10. Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.

    PubMed

    Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong

    2016-08-01

    The quantitative measurement of the Atrioventricular Junction (AVJ) motion is an important index for ventricular functions of one cardiac cycle including systole and diastole. In this paper, a software tool that can conduct AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented by using the Insight Segmentation and Registration Toolkit (ITK), The Visualization Toolkit (VTK) and Qt. The software tool is written in C++ by using the Visual Studio Community 2013 integrated development environment (IDE) containing both an editor and a Microsoft compiler. The software package has been successfully implemented. From the software engineering practice, it is concluded that ITK, VTK, and Qt are very handy software systems for implementing automatic image analysis functions for CMR images, such as the quantitative measurement of motion by visual tracking.

  11. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

    Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects with rotational and translational motion that was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Compared with a condition without a rotational component, we essentially obtained benefits from congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.

  12. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    PubMed Central

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224

  13. Dissociable Frontal Controls during Visible and Memory-guided Eye-Tracking of Moving Targets

    PubMed Central

    Ding, Jinhong; Powell, David; Jiang, Yang

    2009-01-01

    When tracking visible or occluded moving targets, several frontal regions including the frontal eye fields (FEF), dorsal-lateral prefrontal cortex (DLPFC), and Anterior Cingulate Cortex (ACC) are involved in smooth pursuit eye movements (SPEM). To investigate how these areas play different roles in predicting future locations of moving targets, twelve healthy college students participated in a smooth pursuit task of visible and occluded targets. Their eye movements and brain responses measured by event-related functional MRI were simultaneously recorded. Our results show that different visual cues resulted in time discrepancies between physical and estimated pursuit time only when the moving dot was occluded. Velocity gain was higher during the visible phase than during the occlusion phase. We found bilateral FEF involvement in eye movements whether moving targets were visible or occluded. However, the DLPFC and ACC showed increased activity when tracking and predicting locations of occluded moving targets, and were suppressed during smooth pursuit of visible targets. When visual cues were increasingly available, less activation in the DLPFC and the ACC was observed. Additionally, there was a significant hemisphere effect in DLPFC, where right DLPFC showed significantly increased responses over left when pursuing occluded moving targets. Correlation results revealed that DLPFC, the right DLPFC in particular, communicates more with FEF during tracking of occluded moving targets (from memory). The ACC modulates FEF more during tracking of visible targets (likely related to visual attention). Our results suggest that DLPFC and ACC modulate FEF and cortical networks differentially during visible and memory-guided eye tracking of moving targets. PMID:19434603

  14. Dynamic Geometry Capture with a Multi-View Structured-Light System

    DTIC Science & Technology

    2014-12-19

    funding was never a problem during my studies. One of the best parts of my time at UC Berkeley has been working with colleagues within the Video and...scientific and medical applications such as quantifying improvement in physical therapy and measuring unnatural poses in ergonomic studies. Specifically... cases with limited scene texture. This direct generation of surface geometry provides us with a distinct advantage over multi-camera based systems. For

  15. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    DTIC Science & Technology

    2015-09-13

    prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. We use...With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now...play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main

  16. Human detection and motion analysis at security points

    NASA Astrophysics Data System (ADS)

    Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.

    2003-08-01

    This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.

  17. Development and evaluation of an instrumented linkage system for total knee surgery.

    PubMed

    Walker, Peter S; Wei, Chih-Shing; Forman, Rachel E; Balicki, M A

    2007-10-01

    The principles and application of total knee surgery using optical tracking have been well demonstrated, but electromagnetic tracking may offer further advantages. We asked whether an instrumented linkage that attaches directly to the bone can maintain the accuracy of the optical and electromagnetic systems but be quicker, more convenient, and less expensive to use. Initial testing using a table-mounted digitizer to navigate a drill guide for placing pins to mount a cutting guide demonstrated the feasibility in terms of access and availability. A first version (called the Mark 1) instrumented linkage designed to fix directly to the bone was constructed and software was written to carry out a complete total knee replacement procedure. The results showed the system largely fulfilled these goals, but some surgeons found that using a visual display for pin placement was difficult and time consuming. As a result, a second version of a linkage system (called the K-Link) was designed to further develop the concept. User-friendly flexible software was developed for facilitating each step quickly and accurately while the placement of cutting guides was facilitated. We concluded that an instrumented linkage system could be a useful and potentially lower-cost option to the current systems for total knee replacement and could possibly have application to other surgical procedures.

  18. Ciliary muscle contraction force and trapezius muscle activity during manual tracking of a moving visual target.

    PubMed

    Domkin, Dmitry; Forsman, Mikael; Richter, Hans O

    2016-06-01

    Previous studies have shown an association between visual demands during near work and increased activity of the trapezius muscle. Those studies were conducted under stationary postural conditions with fixed gaze and artificial visual load. The present study investigated the relationship between ciliary muscle contraction force and trapezius muscle activity across individuals during performance of a natural dynamic motor task under free gaze conditions. Participants (N=11) tracked a moving visual target with a digital pen on a computer screen. Tracking performance, eye refraction and trapezius muscle activity were continuously measured. Ciliary muscle contraction force was computed from the eye accommodative response. There was a significant Pearson correlation between ciliary muscle contraction force and trapezius muscle activity on the tracking side (0.78, p<0.01) and passive side (0.64, p<0.05). The study supports the hypothesis that high visual demands, leading to an increased ciliary muscle contraction during continuous eye-hand coordination, may increase trapezius muscle tension and thus contribute to the development of musculoskeletal complaints in the neck-shoulder area. Further experimental studies are required to clarify whether the relationship is valid within each individual or may represent a general personal trait, whereby individuals with higher eye accommodative response tend to have higher trapezius muscle activity.

  19. Non-Tenure-Track Faculty Job Satisfaction and Organizational Sense of Belonging

    ERIC Educational Resources Information Center

    Hudson, Barbara Krall

    2013-01-01

    Non-tenure-track (NTT) faculty are playing an increasingly larger role in the instruction of students in higher education. They provide a flexible workforce with specialized expertise, often prefer to work part-time and frequently teach large introductory courses. Concerns about their treatment and the environment in which they work are often…

  20. 49 CFR 213.365 - Visual inspections.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... made on foot or by riding over the track in a vehicle at a speed that allows the person making the... over track crossings and turnouts, otherwise, the inspection vehicle speed shall be at the sole... unobstructed by any cause and that the second track is not centered more than 30 feet from the track upon which...

  1. 49 CFR 213.365 - Visual inspections.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... made on foot or by riding over the track in a vehicle at a speed that allows the person making the... over track crossings and turnouts, otherwise, the inspection vehicle speed shall be at the sole... unobstructed by any cause and that the second track is not centered more than 30 feet from the track upon which...

  2. A navigation system for flexible endoscopes using abdominal 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Kaar, M.; Bathia, Amon; Bathia, Amar; Lampret, A.; Birkfellner, W.; Hummel, J.; Figl, M.

    2014-09-01

    A navigation system for flexible endoscopes equipped with ultrasound (US) scan heads is presented. In contrast to similar systems, abdominal 3D-US is used for image fusion of the pre-interventional computed tomography (CT) to the endoscopic US. A 3D-US scan, tracked with an optical tracking system (OTS), is taken pre-operatively together with the CT scan. The CT is calibrated using the OTS, providing the transformation from CT to 3D-US. Immediately before intervention a 3D-US tracked with an electromagnetic tracking system (EMTS) is acquired and registered intra-modal to the preoperative 3D-US. The endoscopic US is calibrated using the EMTS and registered to the pre-operative CT by an intra-modal 3D-US/3D-US registration. Phantom studies showed a registration error for the US to CT registration of 5.1 mm ± 2.8 mm. 3D-US/3D-US registration of patient data gave an error of 4.1 mm compared to 2.8 mm with the phantom. From this we estimate an error on patient experiments of 5.6 mm.

  3. Synchronous response modelling and control of an annular momentum control device

    NASA Astrophysics Data System (ADS)

    Hockney, Richard; Johnson, Bruce G.; Misovec, Kathleen

    1988-08-01

    Research on the synchronous response modelling and control of an advanced Annular Momentum Control Device (AMCD) used to control the attitude of a spacecraft is described. For the flexible rotor AMCD, two sources of synchronous vibrations were identified. One source, which corresponds to the mass unbalance problem of rigid rotors suspended in conventional bearings, is caused by measurement errors of the rotor center of mass position. The other source of synchronous vibrations is misalignment between the hub and flywheel masses of the AMCD. Four different control algorithms were examined. These were lead-lag compensators that mimic conventional bearing dynamics, tracking notch filters used in the feedback loop, tracking differential-notch filters, and model-based compensators. The tracking differential-notch filters were shown to have a number of advantages over more conventional approaches for both rigid-body rotor applications and flexible rotor applications such as the AMCD. Hardware implementation schemes for the tracking differential-notch filter were investigated. A simple design was developed that can be implemented with analog multipliers and low bandwidth, digital hardware.
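
    The tracking notch filter is only named in the abstract, not specified. As an illustrative sketch (not the authors' implementation), a digital biquad notch can be retuned on the fly from the measured rotor speed so that its center frequency follows the synchronous vibration; the coefficients below use the standard RBJ audio-EQ-cookbook notch form, and all names and parameter values are assumptions.

    ```python
    import math

    def notch_coeffs(f0, fs, q=30.0):
        """Biquad notch centered at f0 Hz (RBJ cookbook form),
        with coefficients normalized so that a0 = 1."""
        w0 = 2.0 * math.pi * f0 / fs
        alpha = math.sin(w0) / (2.0 * q)
        b = [1.0, -2.0 * math.cos(w0), 1.0]
        a = [1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha]
        return [v / a[0] for v in b], [v / a[0] for v in a]

    def track_and_filter(signal, speed_hz, fs):
        """Filter sample by sample, re-deriving the notch center from the
        measured rotor speed so the notch tracks the synchronous frequency."""
        z1 = z2 = 0.0
        out = []
        for x, f0 in zip(signal, speed_hz):
            b, a = notch_coeffs(f0, fs)   # retune per sample (or per block)
            y = b[0] * x + z1             # direct form II transposed biquad
            z1 = b[1] * x - a[1] * y + z2
            z2 = b[2] * x - a[2] * y
            out.append(y)
        return out

    # Demo: cancel a synchronous 50 Hz vibration component sampled at 1 kHz.
    fs = 1000.0
    sig = [math.sin(2.0 * math.pi * 50.0 * n / fs) for n in range(2000)]
    speed = [50.0] * len(sig)   # rotor speed assumed measured; constant here
    out = track_and_filter(sig, speed, fs)
    residual = max(abs(v) for v in out[-200:])  # settled output is strongly attenuated
    ```

    Used in the feedback loop, such a filter removes the synchronous component from the position error so the controller does not fight the once-per-revolution runout.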

  4. Synchronous response modelling and control of an annular momentum control device

    NASA Technical Reports Server (NTRS)

    Hockney, Richard; Johnson, Bruce G.; Misovec, Kathleen

    1988-01-01

    Research on the synchronous response modelling and control of an advanced Annular Momentum Control Device (AMCD) used to control the attitude of a spacecraft is described. For the flexible rotor AMCD, two sources of synchronous vibrations were identified. One source, which corresponds to the mass unbalance problem of rigid rotors suspended in conventional bearings, is caused by measurement errors of the rotor center of mass position. The other source of synchronous vibrations is misalignment between the hub and flywheel masses of the AMCD. Four different control algorithms were examined. These were lead-lag compensators that mimic conventional bearing dynamics, tracking notch filters used in the feedback loop, tracking differential-notch filters, and model-based compensators. The tracking differential-notch filters were shown to have a number of advantages over more conventional approaches for both rigid-body rotor applications and flexible rotor applications such as the AMCD. Hardware implementation schemes for the tracking differential-notch filter were investigated. A simple design was developed that can be implemented with analog multipliers and low bandwidth, digital hardware.

  5. RATT: RFID Assisted Tracking Tile. Preliminary results.

    PubMed

    Quinones, Dario R; Cuevas, Aaron; Cambra, Javier; Canals, Santiago; Moratal, David

    2017-07-01

    Behavior is one of the most important aspects of animal life. This behavior depends on the link between animals, their nervous systems and their environment. In order to study the behavior of laboratory animals several tools are needed, but a tracking tool is essential to perform a thorough behavioral study. Currently, several visual tracking tools are available. However, they have some drawbacks. For instance, when an animal is inside a cave, or is close to other animals, the tracking cameras cannot always detect the location or movement of this animal. This paper presents RFID Assisted Tracking Tile (RATT), a tracking system based on passive Radio Frequency Identification (RFID) technology in the high-frequency band according to ISO/IEC 15693. The RATT system is composed of electronic tiles, each with nine active RFID antennas attached; in addition, each tile contains several overlapping passive coils to improve the magnetic field characteristics. Using several tiles, a large surface can be built on which the animals can move, allowing identification and tracking of their movements. This system, which could also be combined with a visual tracking system, paves the way for complete behavioral studies.

  6. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    PubMed

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  7. Associated reactions during a visual pursuit position tracking task in hemiplegic and quadriplegic cerebral palsy.

    PubMed

    Chiu, Hsiu-Ching; Halaki, Mark; O'Dwyer, Nicholas

    2013-04-30

    Most previous studies of associated reactions (ARs) in people with cerebral palsy have used observation scales, such as recording the degree of movement through observation. A sensitive quantitative method can detect ARs that are not readily visible. The aim of this study was to provide quantitative measures of ARs during a visual pursuit position tracking task. Twenty-three participants with hemiplegia (H) (mean +/- SD: 21y 8m +/- 11y 10m), twelve with quadriplegia (Q) (21y 5m +/- 10y 3m) and twenty-two with normal development (N) (21y 2m +/- 10y 10m) participated in the study. An upper limb visual pursuit tracking task was used to study ARs. The participants were required to follow a moving target with a response cursor via elbow flexion and extension movements. The occurrence of ARs was quantified by the overall coherence between the movements of the tracking and non-tracking limbs, and the amount of movement due to ARs was quantified by the amplitude of movement of the non-tracking limb. The amplitude of movement of the non-tracking limb indicated that the amount of ARs was larger in the Q group than in the H and N groups, with no significant differences between the H and N groups. The amplitude of movement of the non-tracking limb was larger during non-dominant than dominant tracking in all three groups. Some movements in the non-tracking limb were correlated with the tracking limb (correlated ARs) and some were not (uncorrelated ARs). The correlated ARs comprised less than 40% of the total ARs for all three groups. Correlated ARs were negatively associated with clinical evaluations, but uncorrelated ARs were not. The correlated and uncorrelated ARs appear to have different relationships with clinical evaluations, implying that the effect of ARs on upper limb activities may vary.
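
    The abstract quantifies ARs by the overall coherence between tracking- and non-tracking-limb movements but does not specify the estimator. A common choice, shown here only as a sketch, is Welch-averaged magnitude-squared coherence; the segment length and the synthetic limb signals are illustrative assumptions.

    ```python
    import numpy as np

    def coherence(x, y, nperseg=256):
        """Magnitude-squared coherence via Welch averaging over Hann-windowed,
        non-overlapping segments (a single segment would trivially give 1)."""
        win = np.hanning(nperseg)
        pxx = pyy = pxy = 0.0
        nseg = len(x) // nperseg
        for k in range(nseg):
            xs = x[k * nperseg:(k + 1) * nperseg] * win
            ys = y[k * nperseg:(k + 1) * nperseg] * win
            fx, fy = np.fft.rfft(xs), np.fft.rfft(ys)
            pxx = pxx + np.abs(fx) ** 2      # auto-spectra, accumulated
            pyy = pyy + np.abs(fy) ** 2
            pxy = pxy + fx * np.conj(fy)     # cross-spectrum, accumulated
        return np.abs(pxy) ** 2 / (pxx * pyy)

    # Synthetic demo: one non-tracking limb mirrors the tracking limb
    # (correlated ARs), the other moves independently (uncorrelated ARs).
    rng = np.random.default_rng(1)
    tracking = rng.standard_normal(4096)
    mirrored = 0.8 * tracking + 0.2 * rng.standard_normal(4096)
    independent = rng.standard_normal(4096)
    # coherence(tracking, mirrored) is high; coherence(tracking, independent) is low
    ```

    Averaging the coherence spectrum over the pursuit frequency band gives a single AR-occurrence score per trial.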

  8. Eye-Tracking Provides a Sensitive Measure of Exploration Deficits After Acute Right MCA Stroke

    PubMed Central

    Delazer, Margarete; Sojer, Martin; Ellmerer, Philipp; Boehme, Christian; Benke, Thomas

    2018-01-01

    The eye-tracking study aimed at assessing spatial biases in visual exploration in patients after acute right MCA (middle cerebral artery) stroke. Patients affected by unilateral neglect show less functional recovery and experience severe difficulties in everyday life. Thus, accurate diagnosis is essential, and specific treatment is required. Early assessment is of high importance as rehabilitative interventions are more effective when applied soon after stroke. Previous research has shown that deficits may be overlooked when classical paper-and-pencil tasks are used for diagnosis. Conversely, eye-tracking allows direct monitoring of visual exploration patterns. We hypothesized that the analysis of eye-tracking provides more sensitive measures for spatial exploration deficits after right middle cerebral artery stroke. Twenty-two patients with right MCA stroke (median 5 days after stroke) and 28 healthy controls were included. Lesions were confirmed by MRI/CCT. Groups performed comparably in the Mini–Mental State Examination (patients and controls median 29) and in a screening of executive functions. Eleven patients scored at ceiling in neglect screening tasks, 11 showed minimal to severe signs of unilateral visual neglect. An overlap plot based on MRI and CCT imaging showed lesions in the temporo–parieto–frontal cortex, basal ganglia, and adjacent white matter tracts. Visual exploration was evaluated in two eye-tracking tasks, one assessing free visual exploration of photographs, the other visual search using symbols and letters. An index of fixation asymmetries proved to be a sensitive measure of spatial exploration deficits. Both patient groups showed a marked exploration bias to the right when looking at complex photographs. A single-case analysis confirmed that most of the patients who showed no neglect in screening tasks nonetheless performed outside the range of controls in free exploration.
The analysis of patients scoring at ceiling in neglect screening tasks is of special interest, as possible deficits may be overlooked and thus remain untreated. Our findings are in line with other studies suggesting considerable limitations of laboratory screening procedures to fully appreciate the occurrence of neglect symptoms. Future investigations are needed to explore the predictive value of the eye-tracking index and its validity in everyday situations.

  9. Study of Flexible Load Dispatch to Improve the Capacity of Wind Power Absorption

    NASA Astrophysics Data System (ADS)

    Yunlei, Yang; Shifeng, Zhang; Xiao, Chang; Da, Lei; Min, Zhang; Jinhao, Wang; Shengwen, Li; Huipeng, Li

    2017-05-01

    Dispatch methods that track load demand by scheduling controllable hydro or thermal units face great difficulties and challenges. As more renewable energy sources such as wind and photovoltaic power are introduced to the grid, the system has to schedule much more spinning reserve to compensate for the unbalanced power. How to exploit the peak-shaving potential of flexible loads, which can be shifted in time or can store energy, has become a research direction for many scholars. However, modelling the different kinds of load and control strategies is considerably difficult, so this paper studies compressor-based air conditioners, which can in effect store energy. An equivalent thermal parameter model of the air conditioner is established, and with the use of “loop control” strategies the regulated power of the air conditioners can be predicted. A generation-load optimal scheduling model including flexible load is then built on the traditional optimal scheduling model. Finally, an improved IEEE-30 test case is used for verification. The simulation results show that flexible load can closely track changes in renewable power; moreover, with flexible load and reasonable incentives to consumers, the operating cost of the system can be greatly reduced.
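
    The paper's equivalent-thermal-parameter model and "loop control" strategy are not detailed in the abstract. A minimal first-order ETP sketch with hysteresis (on/off) control is shown below; every parameter value is an illustrative assumption, not taken from the paper.

    ```python
    # First-order equivalent-thermal-parameter (ETP) sketch of a room cooled by
    # a hysteresis-controlled ("loop control") air conditioner. All parameter
    # values are illustrative assumptions.
    def simulate_ac(t_out=32.0, t_set=24.0, deadband=1.0, r=2.0, c=1.0,
                    p_cool=6.0, dt=1.0 / 60.0, hours=6.0):
        """r: thermal resistance [C/kW], c: thermal capacitance [kWh/C],
        p_cool: cooling power when on [kW], dt: time step [h].
        Returns the indoor temperature trace and the compressor duty cycle."""
        t_in, on = t_out, False
        temps, states = [], []
        for _ in range(int(hours / dt)):
            q = (t_out - t_in) / r - (p_cool if on else 0.0)  # net heat flow [kW]
            t_in += q * dt / c
            if t_in > t_set + deadband / 2.0:
                on = True    # too warm: compressor cycles on
            elif t_in < t_set - deadband / 2.0:
                on = False   # cool enough: compressor cycles off
            temps.append(t_in)
            states.append(1.0 if on else 0.0)
        return temps, sum(states) / len(states)

    temps, duty = simulate_ac()
    # After the pull-down transient the temperature cycles inside the deadband,
    # and `duty` approximates the regulated power fraction of the unit.
    ```

    Aggregating many such units and staggering their on/off phases is what gives the flexible load its controllable, regulated-power headroom.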

  10. Towards automated visual flexible endoscope navigation.

    PubMed

    van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J

    2013-10-01

    The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.

  11. Multi-track financing.

    PubMed

    Kennedy, Steven W; Randolph, John; Taddey, Anthony J

    2012-05-01

    In today's uncertain economic environment, when seeking to finance a capital project, healthcare borrowers should adopt a multi-tracked funding strategy that permits them to change capital-funding routes quickly in response to changing circumstances. The multi-tracking process requires two stages prior to securing a commitment and beginning the closing process: due diligence and indication of interest. This process should present no material additional cost during these two stages, giving healthcare borrowers the flexibility to explore a variety of financing options.

  12. Visual Target Tracking on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Kim, Won; Biesiadecki, Jeffrey; Ali, Khaled

    2008-01-01

    Visual target tracking (VTT) software has been incorporated into Release 9.2 of the Mars Exploration Rover (MER) flight software, now running aboard the rovers Spirit and Opportunity. In the VTT operation (see figure), the rover is driven in short steps between stops and, at each stop, still images are acquired by actively aimed navigation cameras (navcams) on a mast on the rover (see artistic rendition). The VTT software processes the digitized navcam images so as to track a target reliably and to make it possible to approach the target accurately to within a few centimeters over a 10-m traverse.

  13. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    PubMed

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
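    The velocity-threshold classification step described above can be sketched as follows. The threshold values and function names here are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def classify_gaze(angular_velocity, saccade_thresh=100.0, fixation_thresh=5.0):
    """Label each gaze sample by its angular velocity (deg/s):
    'saccade' at or above saccade_thresh, 'fixation' below
    fixation_thresh, and 'pursuit' in between. Thresholds are
    placeholder values for illustration only."""
    labels = np.full(angular_velocity.shape, "pursuit", dtype=object)
    labels[angular_velocity >= saccade_thresh] = "saccade"
    labels[angular_velocity < fixation_thresh] = "fixation"
    return labels
```

    In practice, onsets and offsets of each event are then taken at the samples where the label changes, as the abstract describes.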

  14. Evaluation of tactual displays for flight control

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Tanner, R. B.; Triggs, T. J.

    1973-01-01

    Manual tracking experiments were conducted to determine the suitability of tactual displays for presenting flight-control information in multitask situations. Although tracking error scores are considerably greater than scores obtained with a continuous visual display, preliminary results indicate that inter-task interference effects are substantially less with the tactual display in situations that impose high visual scanning workloads. The single-task performance degradation found with the tactual display appears to be a result of the coding scheme rather than the use of the tactual sensory mode per se. Analysis with the state-variable pilot/vehicle model shows that reliable predictions of tracking errors can be obtained for wide-band tracking systems once the pilot-related model parameters have been adjusted to reflect the pilot-display interaction.

  15. The influence of successive matches on match-running performance during an under-23 international soccer tournament: The necessity of individual analysis.

    PubMed

    Varley, Matthew C; Di Salvo, Valter; Modonutti, Mattia; Gregson, Warren; Mendez-Villanueva, Alberto

    2018-03-01

    This study investigated the effects of successive matches on match-running in elite under-23 soccer players during an international tournament. Match-running data was collected using a semi-automated multi-camera tracking system during an international under-23 tournament from all participating outfield players. Players who played 100% of all group stage matches were included (3 matches separated by 72 h, n = 44). Differences in match-running performance between matches were identified using a generalised linear mixed model. There were no clear effects for total, walking, jogging, running, high-speed running and sprinting distance between matches 1 and 3 (effect size (ES); -0.32 to 0.05). Positional analysis found that sprint distance was largely maintained from matches 1 to 3 across all positions. Attackers had a moderate decrease in total, jogging and running distance between matches 1 and 3 (ES; -0.72 to -0.66). Classifying players as increasers or decreasers in match-running revealed that match-running changes are susceptible to individual differences. Sprint performance appears to be maintained over successive matches regardless of playing position. However, reductions in other match-running categories vary between positions. Changes in match-running over successive matches affect individuals differently; thus, players should be monitored on an individual basis.

  16. Dye-enhanced visualization of rat whiskers for behavioral studies.

    PubMed

    Rigosa, Jacopo; Lucantonio, Alessandro; Noselli, Giovanni; Fassihi, Arash; Zorzin, Erik; Manzino, Fabrizio; Pulecchi, Francesca; Diamond, Mathew E

    2017-06-14

    Visualization and tracking of the facial whiskers is required in an increasing number of rodent studies. Although many approaches have been employed, only high-speed videography has proven adequate for measuring whisker motion and deformation during interaction with an object. However, whisker visualization and tracking is challenging for multiple reasons, primary among them the low contrast of the whisker against its background. Here, we demonstrate a fluorescent dye method suitable for visualization of one or more rat whiskers. The process makes the dyed whisker(s) easily visible against a dark background. The coloring does not influence the behavioral performance of rats trained on a vibrissal vibrotactile discrimination task, nor does it affect the whiskers' mechanical properties.

  17. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. The proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  18. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    PubMed

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm that distributes the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods.
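    The temporal-saliency step (frame differencing binarized with a Sauvola local adaptive threshold) can be sketched as below. The window size and the parameters k and R are conventional defaults for Sauvola's formula, not values taken from the paper, and the brute-force window loop is for clarity rather than speed.

```python
import numpy as np

def sauvola_threshold(img, window=15, k=0.5, R=128.0):
    """Per-pixel Sauvola threshold: T = m * (1 + k * (s/R - 1)),
    where m and s are the local mean and standard deviation in a
    `window` x `window` neighborhood."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + window, j:j + window]
            m, s = patch.mean(), patch.std()
            out[i, j] = m * (1.0 + k * (s / R - 1.0))
    return out

def temporal_saliency(prev_frame, cur_frame, window=15):
    """Binary moving-region mask: the absolute frame difference,
    binarized against the Sauvola local threshold."""
    diff = np.abs(cur_frame.astype(float) - prev_frame.astype(float))
    return diff > sauvola_threshold(diff, window=window)
```

    A region that brightens between frames produces a large local difference that exceeds the locally adapted threshold, yielding a candidate moving-target mask for the spatial-saliency stage.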

  19. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    PubMed Central

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm that distributes the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods. PMID:29438421

  20. Detection of Ballast Damage by In-Situ Vibration Measurement of Sleepers

    NASA Astrophysics Data System (ADS)

    Lam, H. F.; Wong, M. T.; Keefe, R. M.

    2010-05-01

    Ballasted track is one of the most important elements of railway transportation systems worldwide. Owing to its importance in railway safety, many monitoring and evaluation methods have been developed. Current railway track monitoring systems are comprehensive, fast and efficient in testing railway track level and alignment, rail gauge, rail corrugation, etc. However, the monitoring of ballast condition still relies very much on visual inspection and core tests. Although extensive research has been carried out in the development of non-destructive methods for ballast condition evaluation, a commonly accepted and cost-effective method is still in demand. In Hong Kong practice, if abnormal train vibration is reported by the train operator or passengers, permanent way inspectors will locate the problem area by track geometry measurement. It must be pointed out that visual inspection can only identify ballast damage on the track surface; track geometry deficiencies and rail twists can be detected using a track gauge. Ballast damage under the sleeper loading area and the ballast shoulder, which are the main factors affecting track stability and ride quality, are extremely difficult if not impossible to detect by visual inspection. The core test is destructive, expensive, time consuming and may be disruptive to traffic. A fast real-time ballast damage detection method that can be implemented by permanent way inspectors with simple equipment can certainly provide valuable information for engineers in assessing the safety and riding quality of ballasted track systems. The main objective of this paper is to study the feasibility of using the vibration characteristics of sleepers to quantify the ballast condition under the sleepers, and to explore the possibility of developing a handy method for the detection of ballast damage based on the measured vibration of sleepers.

  1. Insect cyborgs: a new frontier in flight control systems

    NASA Astrophysics Data System (ADS)

    Reissman, Timothy; Crawford, Jackie H.; Garcia, Ephrahim

    2007-04-01

    The development of a micro-UAV via a cybernetic organism, primarily the Manduca sexta moth, is presented. An observer that gathers output data on the moth's system response is provided by means of an image-following system. The visual tracking was implemented to gather the required information about the time history of the moth's six degrees of freedom. This was performed with three cameras tracking a white line as a marker on the moth's thorax to maximize contrast between the moth and the marker. Evaluation of the implemented six degree of freedom visual tracking system finds precision better than 0.1 mm within three standard deviations and accuracy on the order of 1 mm. Acoustic and visual response systems are presented to lay the groundwork for creating a stochastic response catalog of the organisms to varied stimuli.

  2. Fixed-base simulator study of the effect of time delays in visual cues on pilot tracking performance

    NASA Technical Reports Server (NTRS)

    Queijo, M. J.; Riley, D. R.

    1975-01-01

    Factors were examined which determine the amount of time delay acceptable in the visual feedback loop in flight simulators. Acceptable time delays are defined as delays which significantly affect neither the results nor the manner in which the subject 'flies' the simulator. The subject tracked a target aircraft as it oscillated sinusoidally in a vertical plane only. The pursuing aircraft was permitted five degrees of freedom. Time delays of from 0.047 to 0.297 second were inserted in the visual feedback loop. A side task was employed to maintain the workload constant and to insure that the pilot was fully occupied during the experiment. Tracking results were obtained for 17 aircraft configurations having different longitudinal short-period characteristics. Results show a positive correlation between improved handling qualities and a longer acceptable time delay.

  3. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visual-motor control to compensate for the delay of the sensory-motor system. In a previous study, “proactive control” was discussed as one example of the predictive function of human beings, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible regions and target-invisible regions. The main results of this research are as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. This shortening of the period accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  4. Perceptual training yields rapid improvements in visually impaired youth

    PubMed Central

    Nyquist, Jeffrey B.; Lappin, Joseph S.; Zhang, Ruyuan; Tadin, Duje

    2016-01-01

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events. PMID:27901026

  5. Three-Dimensional Visualization of Particle Tracks.

    ERIC Educational Resources Information Center

    Julian, Glenn M.

    1993-01-01

    Suggests ways to bring home to the introductory physics student some of the excitement of recent discoveries in particle physics. Describes particle detectors and encourages the use of the Standard Model along with real images of particle tracks to determine three-dimensional views of tracks. (MVL)

  6. Neotracking in North Carolina: How High School Courses of Study Reproduce Race and Class-Based Stratification

    ERIC Educational Resources Information Center

    Mickelson, Roslyn Arlin; Everett, Bobbie J.

    2008-01-01

    Background/Context: This article describes neotracking, a new form of tracking in North Carolina that is the outgrowth of the state's reformed curricular standards, the High School Courses of Study Framework (COS). Neotracking combines older versions of rigid, comprehensive tracking with the newer, more flexible within-subject area curricular…

  7. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with their own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performances of three feature or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: The first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  8. Attentional enhancement during multiple-object tracking.

    PubMed

    Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K

    2009-04-01

    What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.

  9. The effect of attention loading on the inhibition of choice reaction time to visual motion by concurrent rotary motion

    NASA Technical Reports Server (NTRS)

    Looper, M.

    1976-01-01

    This study investigates the influence of attention loading on the established intersensory effects of passive bodily rotation on choice reaction time (RT) to visual motion. Subjects sat at the center of rotation in an enclosed rotating chamber and observed an oscilloscope on which were, in the center, a tracking display and, 10 deg left of center, a RT line. Three tracking tasks and a no-tracking control condition were presented to all subjects in combination with the RT task, which occurred with and without concurrent cab rotations. Choice RT to line motions was inhibited (probability less than .001) both when there was simultaneous vestibular stimulation and when there was a tracking task; response latencies lengthened progressively with increased similarity between the RT and tracking tasks. However, the attention conditions did not affect the intersensory effect; the significance of this for the nature of the sensory interaction is discussed.

  10. The visual information system

    Treesearch

    Merlyn J. Paulson

    1979-01-01

    This paper outlines a project level process (V.I.S.) which utilizes very accurate and flexible computer algorithms in combination with contemporary site analysis and design techniques for visual evaluation, design and management. The process provides logical direction and connecting bridges through problem identification, information collection and verification, visual...

  11. SynapticDB, effective web-based management and sharing of data from serial section electron microscopy.

    PubMed

    Shi, Bitao; Bourne, Jennifer; Harris, Kristen M

    2011-03-01

    Serial section electron microscopy (ssEM) is rapidly expanding as a primary tool to investigate synaptic circuitry and plasticity. The ultrastructural images collected through ssEM are content rich and their comprehensive analysis is beyond the capacity of an individual laboratory. Hence, sharing ultrastructural data is becoming crucial to visualize, analyze, and discover the structural basis of synaptic circuitry and function in the brain. We devised a web-based management system called SynapticDB (http://synapses.clm.utexas.edu/synapticdb/) that catalogues, extracts, analyzes, and shares experimental data from ssEM. The management strategy involves a library with check-in, checkout and experimental tracking mechanisms. We developed a series of spreadsheet templates (MS Excel, Open Office spreadsheet, etc) that guide users in methods of data collection, structural identification, and quantitative analysis through ssEM. SynapticDB provides flexible access to complete templates, or to individual columns with instructional headers that can be selected to create user-defined templates. New templates can also be generated and uploaded. Research progress is tracked via experimental note management and dynamic PDF forms that allow new investigators to follow standard protocols and experienced researchers to expand the range of data collected and shared. The combined use of templates and tracking notes ensures that the supporting experimental information is populated into the database and associated with the appropriate ssEM images and analyses. We anticipate that SynapticDB will serve future meta-analyses towards new discoveries about the composition and circuitry of neurons and glia, and new understanding about structural plasticity during development, behavior, learning, memory, and neuropathology.

  12. Analysis of a multi-wavelength multi-camera phase-shifting profilometric system for real-time operation

    NASA Astrophysics Data System (ADS)

    Stoykova, Elena; Gotchev, Atanas; Sainov, Ventseslav

    2011-01-01

    Real-time accomplishment of a phase-shifting profilometry through simultaneous projection and recording of fringe patterns requires a reliable phase retrieval procedure. In the present work we consider a four-wavelength multi-camera system with four sinusoidal phase gratings for pattern projection that implements a four-step algorithm. Successful operation of the system depends on overcoming two challenges which stem out from the inherent limitations of the phase-shifting algorithm, namely the demand for a sinusoidal fringe profile and the necessity to ensure equal background and contrast of fringes in the recorded fringe patterns. As a first task, we analyze the systematic errors due to the combined influence of the higher harmonics and multi-wavelength illumination in the Fresnel diffraction zone considering the case when the modulation parameters of the four gratings are different. As a second task we simulate the system performance to evaluate the degrading effect of the speckle noise and the spatially varying fringe modulation at non-uniform illumination on the overall accuracy of the profilometric measurement. We consider the case of non-correlated speckle realizations in the recorded fringe patterns due to four-wavelength illumination. Finally, we apply a phase retrieval procedure which includes normalization, background removal and denoising of the recorded fringe patterns to both simulated and measured data obtained for a dome surface.
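    The four-step algorithm the system implements is, in its textbook form, the following wrapped-phase computation for phase shifts of 0, π/2, π, and 3π/2. The notation here is generic; the paper's multi-wavelength and harmonic-error corrections are not shown.

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Standard four-step phase-shifting formula. With intensities
    I_k = A + B*cos(phi + delta_k) recorded at shifts
    delta = 0, pi/2, pi, 3*pi/2, the wrapped phase is
    phi = atan2(I4 - I2, I1 - I3)."""
    return math.atan2(i4 - i2, i1 - i3)
```

    Because I4 - I2 = 2B·sin(φ) and I1 - I3 = 2B·cos(φ), the background A and modulation B cancel, which is why the algorithm is sensitive to the unequal background and contrast that the abstract identifies as a key challenge.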

  13. What triggers catch-up saccades during visual tracking?

    PubMed

    de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe

    2002-03-01

    When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet, the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could bring a significant contribution to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T(XE)). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T(XE) between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T(XE) becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
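    The switching rule can be sketched as below. The sign convention and the exact definition of the eye crossing time here are a simplified reading of the abstract, not the paper's full formulation.

```python
def eye_crossing_time(position_error, closing_velocity):
    """Predicted time (s) for the eye trajectory to cross the target:
    the current position error divided by the rate at which the eye
    is closing on the target. If the eye is not closing, the crossing
    never happens. Simplified illustration of T(XE)."""
    if closing_velocity <= 0:
        return float("inf")
    return position_error / closing_velocity

def triggers_catch_up_saccade(t_xe, lo=0.040, hi=0.180):
    """Per the abstract: pursuit stays smooth for 40 ms <= T(XE) <=
    180 ms; outside that window a catch-up saccade is triggered."""
    return not (lo <= t_xe <= hi)
```

    For example, a 0.5 deg error being closed at 5 deg/s gives T(XE) = 100 ms, inside the smooth zone, so no saccade is triggered.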

  14. Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking

    PubMed Central

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking both quantitatively and qualitatively. PMID:25961715
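
The multiplicative update rules mentioned in this abstract are the standard device for preserving non-negativity in dictionary and coefficient updates. The sketch below shows the classic Lee-Seung updates for a generic non-negative factorization V ≈ WH; it illustrates only that mechanism and is not the OMRNDL algorithm itself:

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H.

    Because W and H start positive and each update multiplies by a ratio of
    non-negative quantities, non-negativity is preserved at every iteration.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # coefficient update
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # dictionary update
    return W, H
```

In a tracker of this family, columns of W would be template atoms per modality and columns of H the particle representation coefficients.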

  15. Online multi-modal robust non-negative dictionary learning for visual tracking.

    PubMed

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking both quantitatively and qualitatively.

  16. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    PubMed Central

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy. PMID:27589769

  17. Dynamic Trial-by-Trial Recoding of Task-Set Representations in the Frontoparietal Cortex Mediates Behavioral Flexibility

    PubMed Central

    Qiao, Lei; Zhang, Lijie

    2017-01-01

    Cognitive flexibility forms the core of the extraordinary ability of humans to adapt, but the precise neural mechanisms underlying our ability to nimbly shift between task sets remain poorly understood. Recent functional magnetic resonance imaging (fMRI) studies employing multivoxel pattern analysis (MVPA) have shown that a currently relevant task set can be decoded from activity patterns in the frontoparietal cortex, but whether these regions support the dynamic transformation of task sets from trial to trial is not clear. Here, we combined a cued task-switching protocol with human (both sexes) fMRI, and harnessed representational similarity analysis (RSA) to facilitate a novel assessment of trial-by-trial changes in neural task-set representations. We first used MVPA to define task-sensitive frontoparietal and visual regions and found that neural task-set representations on switch trials are less stably encoded than on repeat trials. We then exploited RSA to show that the neural representational pattern dissimilarity across consecutive trials is greater for switch trials than for repeat trials, and that the degree of this pattern dissimilarity predicts behavior. Moreover, the overall neural pattern of representational dissimilarities followed from the assumption that repeating sets, compared with switching sets, results in stronger neural task representations. Finally, when moving from cue to target phase within a trial, pattern dissimilarities tracked the transformation from previous-trial task representations to the currently relevant set. These results provide neural evidence for the longstanding assumptions of an effortful task-set reconfiguration process hampered by task-set inertia, and they demonstrate that frontoparietal and stimulus processing regions support “dynamic adaptive coding,” flexibly representing changing task sets in a trial-by-trial fashion. 
SIGNIFICANCE STATEMENT Humans can fluently switch between different tasks, reflecting an ability to dynamically configure “task sets,” rule representations that link stimuli to appropriate responses. Recent studies show that neural signals in frontal and parietal brain regions can tell us which of two tasks a person is currently performing. However, it is not known whether these regions are also involved in dynamically reconfiguring task-set representations when switching between tasks. Here we measured human brain activity during task switching and tracked the similarity of neural task-set representations from trial to trial. We show that frontal and parietal brain regions flexibly recode changing task sets in a trial-by-trial fashion, and that task-set similarity over consecutive trials predicts behavior. PMID:28972126

  18. STAR: an integrated solution to management and visualization of sequencing data

    PubMed Central

    Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W.; Ecker, Joseph R.; Millar, A. Harvey; Ren, Bing; Wang, Wei

    2013-01-01

    Motivation: Easy visualization of complex data features is a necessary step in studies of next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization and track-based analysis of NGS data. Results: STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas and asynchronous communications to deliver a smoothly scrolling desktop-like graphical user interface with a suite of in-browser analysis tools that range from providing simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization and analysis of third-party sequencing data from the public domain, with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution to match the requirements of NGS data, enabling both intuitive visualization and dynamic analysis of data. Availability and implementation: STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser. Contact: wei-wang@ucsd.edu PMID:24078702

  19. Visualization of spatial-temporal data based on 3D virtual scene

    NASA Astrophysics Data System (ADS)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, combining three-dimensional visualization technology with GIS so that people's ability to cognize time and space is enhanced through dynamic symbol design and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the situations produced by changes in the spatial location and property information of geographical entities over time, explore and analyze their movement and transformation rules through interaction, and replay history or forecast the future. The main research objects in this paper are vehicle tracks and typhoon paths: through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of trends and replay of historical tracks. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information: they not only show changes and developments in the situation clearly, but can also be used for prediction and deduction of future developments and changes.

  20. Quantifying Novice and Expert Differences in Visual Diagnostic Reasoning in Veterinary Pathology Using Eye-Tracking Technology.

    PubMed

    Warren, Amy L; Donnon, Tyrone L; Wagg, Catherine R; Priest, Heather; Fernandez, Nicole J

    2018-01-18

    Visual diagnostic reasoning is the cognitive process by which pathologists reach a diagnosis based on visual stimuli (cytologic, histopathologic, or gross imagery). Currently, there is little to no literature examining visual reasoning in veterinary pathology. The objective of the study was to use eye tracking to establish baseline quantitative and qualitative differences between the visual reasoning processes of novice and expert veterinary pathologists viewing cytology specimens. Novice and expert participants were each shown 10 cytology images and asked to formulate a diagnosis while wearing eye-tracking equipment (10 slides) and while concurrently verbalizing their thought processes using the think-aloud protocol (5 slides). Compared to novices, experts demonstrated significantly higher diagnostic accuracy (p<.017), shorter time to diagnosis (p<.017), and a higher percentage of time spent viewing areas of diagnostic interest (p<.017). Experts elicited more key diagnostic features in the think-aloud protocol and had more efficient patterns of eye movement. These findings suggest that experts' fast time to diagnosis, efficient eye-movement patterns, and preference for viewing areas of interest reflect system 1 (pattern-recognition) reasoning and script-inductive knowledge structures, with system 2 (analytic) reasoning used to verify the diagnosis.

  1. Tracking the evolution of crossmodal plasticity and visual functions before and after sight restoration

    PubMed Central

    Dormal, Giulia; Lepore, Franco; Harissi-Dagher, Mona; Albouy, Geneviève; Bertone, Armando; Rossion, Bruno

    2014-01-01

    Visual deprivation leads to massive reorganization in both the structure and function of the occipital cortex, raising crucial challenges for sight restoration. We tracked the behavioral, structural, and neurofunctional changes occurring in an early and severely visually impaired patient before and 1.5 and 7 mo after sight restoration with magnetic resonance imaging. Robust presurgical auditory responses were found in occipital cortex despite residual preoperative vision. In primary visual cortex, crossmodal auditory responses overlapped with visual responses and remained elevated even 7 mo after surgery. However, these crossmodal responses decreased in extrastriate occipital regions after surgery, together with improved behavioral vision and with increases in both gray matter density and neural activation in low-level visual regions. Selective responses in high-level visual regions involved in motion and face processing were observable even before surgery and did not evolve after surgery. Taken together, these findings demonstrate that structural and functional reorganization of occipital regions are present in an individual with a long-standing history of severe visual impairment and that such reorganizations can be partially reversed by visual restoration in adulthood. PMID:25520432

  2. How Revisions to Mathematical Visuals Affect Cognition: Evidence from Eye Tracking

    ERIC Educational Resources Information Center

    Clinton, Virginia; Cooper, Jennifer L.; Michaelis, Joseph; Alibali, Martha W.; Nathan, Mitchell J.

    2017-01-01

    Mathematics curricula are frequently rich with visuals, but these visuals are often not designed for optimal use of students' limited cognitive resources. The authors of this study revised the visuals in a mathematics lesson based on instructional design principles. The purpose of this study is to examine the effects of these revised visuals on…

  3. Constraints on Multiple Object Tracking in Williams Syndrome: How Atypical Development Can Inform Theories of Visual Processing

    ERIC Educational Resources Information Center

    Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara

    2016-01-01

    The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…

  4. Visual tracking using objectness-bounding box regression and correlation filters

    NASA Astrophysics Data System (ADS)

    Mbelwa, Jimmy T.; Zhao, Qingjie; Lu, Yao; Wang, Fasheng; Mbise, Mercy

    2018-03-01

    Visual tracking is a fundamental problem in computer vision with extensive application domains in surveillance and intelligent systems. Recently, correlation filter-based tracking methods have shown a great achievement in terms of robustness, accuracy, and speed. However, such methods have a problem of dealing with fast motion (FM), motion blur (MB), illumination variation (IV), and drifting caused by occlusion (OCC). To solve this problem, a tracking method that integrates objectness-bounding box regression (O-BBR) model and a scheme based on kernelized correlation filter (KCF) is proposed. The scheme based on KCF is used to improve the tracking performance of FM and MB. For handling drift problem caused by OCC and IV, we propose objectness proposals trained in bounding box regression as prior knowledge to provide candidates and background suppression. Finally, scheme KCF as a base tracker and O-BBR are fused to obtain a state of a target object. Extensive experimental comparisons of the developed tracking method with other state-of-the-art trackers are performed on some of the challenging video sequences. Experimental comparison results show that our proposed tracking method outperforms other state-of-the-art tracking methods in terms of effectiveness, accuracy, and robustness.
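
KCF itself trains a kernelized ridge regression over all circular shifts of the target patch. Its core idea, learning and applying a correlation filter in the Fourier domain, can be sketched with the simpler MOSSE-style filter below (a related but distinct method, shown here purely as an illustration):

```python
import numpy as np

def train_filter(template, response, lam=1e-2):
    """Learn a correlation filter in the Fourier domain (MOSSE-style):
    H = (G . conj(F)) / (F . conj(F) + lam), so that correlating the
    template with the filter reproduces the desired response G."""
    F = np.fft.fft2(template)
    G = np.fft.fft2(response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(patch, H):
    """Apply the filter to a new patch; the response peak locates the target."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(patch) * H))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

The desired response is typically a narrow Gaussian centred on the target, and the regularizer lam plays the same role as the ridge term in KCF.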

  5. Bird Radar Validation in the Field by Time-Referencing Line-Transect Surveys

    PubMed Central

    Dokter, Adriaan M.; Baptist, Martin J.; Ens, Bruno J.; Krijgsveld, Karen L.; van Loon, E. Emiel

    2013-01-01

    Track-while-scan bird radars are widely used in ornithological studies, but often the precise detection capabilities of these systems are unknown. Quantification of radar performance is essential to avoid observational biases, which requires practical methods for validating a radar’s detection capability in specific field settings. In this study a method to quantify the detection capability of a bird radar is presented, as well as a demonstration of this method in a case study. By time-referencing line-transect surveys, visually identified birds were automatically linked to individual tracks using their transect crossing time. Detection probabilities were determined as the fraction of the total set of visual observations that could be linked to radar tracks. To avoid ambiguities in assigning radar tracks to visual observations, the observer’s accuracy in determining a bird’s transect crossing time was taken into account. The accuracy was determined by examining the effect of a time lag applied to the visual observations on the number of matches found with radar tracks. Effects of flight altitude, distance, surface substrate and species size on the detection probability by the radar were quantified in a marine intertidal study area. Detection probability varied strongly with all these factors, as well as species-specific flight behaviour. The effective detection range for single birds flying at low altitude for an X-band marine radar based system was estimated at ∼1.5 km. Within this range the fraction of individual flying birds that were detected by the radar was 0.50±0.06 with a detection bias towards higher flight altitudes, larger birds and high tide situations. Besides radar validation, which we consider essential when quantification of bird numbers is important, our method of linking radar tracks to ground-truthed field observations can facilitate species-specific studies using surveillance radars. The methodology may prove equally useful for optimising tracking algorithms. PMID:24066103

  6. Bird radar validation in the field by time-referencing line-transect surveys.

    PubMed

    Dokter, Adriaan M; Baptist, Martin J; Ens, Bruno J; Krijgsveld, Karen L; van Loon, E Emiel

    2013-01-01

    Track-while-scan bird radars are widely used in ornithological studies, but often the precise detection capabilities of these systems are unknown. Quantification of radar performance is essential to avoid observational biases, which requires practical methods for validating a radar's detection capability in specific field settings. In this study a method to quantify the detection capability of a bird radar is presented, as well as a demonstration of this method in a case study. By time-referencing line-transect surveys, visually identified birds were automatically linked to individual tracks using their transect crossing time. Detection probabilities were determined as the fraction of the total set of visual observations that could be linked to radar tracks. To avoid ambiguities in assigning radar tracks to visual observations, the observer's accuracy in determining a bird's transect crossing time was taken into account. The accuracy was determined by examining the effect of a time lag applied to the visual observations on the number of matches found with radar tracks. Effects of flight altitude, distance, surface substrate and species size on the detection probability by the radar were quantified in a marine intertidal study area. Detection probability varied strongly with all these factors, as well as species-specific flight behaviour. The effective detection range for single birds flying at low altitude for an X-band marine radar based system was estimated at ~1.5 km. Within this range the fraction of individual flying birds that were detected by the radar was 0.50 ± 0.06 with a detection bias towards higher flight altitudes, larger birds and high tide situations. Besides radar validation, which we consider essential when quantification of bird numbers is important, our method of linking radar tracks to ground-truthed field observations can facilitate species-specific studies using surveillance radars. The methodology may prove equally useful for optimising tracking algorithms.

  7. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging - techniques and applications.

    PubMed

    Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V

    2014-09-01

    Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Object tracking based on harmony search: comparative study

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; He, Xiao-Hai; Luo, Dai-Sheng; Yu, Yan-Mei

    2012-10-01

    Visual tracking can be treated as an optimization problem. A new meta-heuristic optimization algorithm, Harmony Search (HS), was first applied to visual tracking by Fourie et al. As the authors point out, many questions remain open for ongoing research. Our work is a continuation of Fourie's study, with four prominent improved variations of HS, namely Improved Harmony Search (IHS), Global-best Harmony Search (GHS), Self-adaptive Harmony Search (SHS) and Differential Harmony Search (DHS), adopted into the tracking system. Their performances are tested and analyzed on multiple challenging video sequences. Experimental results show that IHS performs best, with DHS ranking second among the four improved trackers when the iteration number is small. However, the differences among all four diminish gradually as the number of iterations increases.
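
For readers unfamiliar with Harmony Search, a minimal generic implementation is sketched below: each new "harmony" mixes memory recall (rate hmcr), pitch adjustment (rate par) and random exploration, and replaces the worst memory entry when it scores better. In a tracking system the objective would score a candidate target state against the appearance model; here a toy quadratic objective stands in:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    """Minimize `objective` over a box given by `bounds` with basic Harmony Search."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:          # recall a value from memory
                x = mem[rng.randrange(hms)][d]
                if rng.random() < par:       # pitch adjustment around it
                    x += rng.uniform(-1, 1) * 0.05 * (hi - lo)
            else:                            # random exploration
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                # keep the better harmony
            mem[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return mem[best], scores[best]
```

The IHS, GHS, SHS and DHS variants compared in the record differ mainly in how hmcr, par and the pitch bandwidth are chosen or adapted during the run.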

  9. Visual Target Tracking on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Biesiadecki, Jeffrey J.; Ali, Khaled S.

    2008-01-01

    Visual Target Tracking (VTT) has been implemented in the new Mars Exploration Rover (MER) Flight Software (FSW) R9.2 release, which is now running on both Spirit and Opportunity rovers. Applying the normalized cross-correlation (NCC) algorithm with template image magnification and roll compensation on MER Navcam images, VTT tracks the target and enables the rover to approach the target within a few cm over a 10 m traverse. Each VTT update takes 1/2 to 1 minute on the rovers, 2-3 times faster than one Visual Odometry (Visodom) update. VTT is a key element in achieving target approach and instrument placement over a 10-m run in a single sol, in contrast to the original baseline of 3 sols. VTT has been integrated into the MER FSW so that it can operate with any combination of blind driving, Autonomous Navigation (Autonav) with hazard avoidance, and Visodom. VTT can either guide the rover towards the target or simply image the target as the rover drives by. Three recent VTT operational checkouts on Opportunity were all successful, tracking the selected target reliably within a few pixels.
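
The NCC core of such template tracking is straightforward to state. The sketch below omits the template magnification and roll compensation that the MER VTT adds on top, and searches exhaustively rather than within a guided window:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track(image, template):
    """Exhaustive NCC search: slide the template over the image and return the
    top-left corner of the best-matching window together with its score."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

Because the score is normalized, it is insensitive to uniform gain and offset changes between frames, which is one reason NCC is favoured for outdoor imagery with varying illumination.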

  10. Range-dependent flexibility in the acoustic field of view of echolocating porpoises (Phocoena phocoena)

    PubMed Central

    Wisniewska, Danuta M; Ratcliffe, John M; Beedholm, Kristian; Christensen, Christian B; Johnson, Mark; Koblitz, Jens C; Wahlberg, Magnus; Madsen, Peter T

    2015-01-01

    Toothed whales use sonar to detect, locate, and track prey. They adjust emitted sound intensity, auditory sensitivity and click rate to target range, and terminate prey pursuits with high-repetition-rate, low-intensity buzzes. However, their narrow acoustic field of view (FOV) is considered stable throughout target approach, which could facilitate prey escape at close-range. Here, we show that, like some bats, harbour porpoises can broaden their biosonar beam during the terminal phase of attack but, unlike bats, maintain the ability to change beamwidth within this phase. Based on video, MRI, and acoustic-tag recordings, we propose this flexibility is modulated by the melon and implemented to accommodate dynamic spatial relationships with prey and acoustic complexity of surroundings. Despite independent evolution and different means of sound generation and transmission, whales and bats adaptively change their FOV, suggesting that beamwidth flexibility has been an important driver in the evolution of echolocation for prey tracking. DOI: http://dx.doi.org/10.7554/eLife.05651.001 PMID:25793440

  11. Range-dependent flexibility in the acoustic field of view of echolocating porpoises (Phocoena phocoena).

    PubMed

    Wisniewska, Danuta M; Ratcliffe, John M; Beedholm, Kristian; Christensen, Christian B; Johnson, Mark; Koblitz, Jens C; Wahlberg, Magnus; Madsen, Peter T

    2015-03-20

    Toothed whales use sonar to detect, locate, and track prey. They adjust emitted sound intensity, auditory sensitivity and click rate to target range, and terminate prey pursuits with high-repetition-rate, low-intensity buzzes. However, their narrow acoustic field of view (FOV) is considered stable throughout target approach, which could facilitate prey escape at close-range. Here, we show that, like some bats, harbour porpoises can broaden their biosonar beam during the terminal phase of attack but, unlike bats, maintain the ability to change beamwidth within this phase. Based on video, MRI, and acoustic-tag recordings, we propose this flexibility is modulated by the melon and implemented to accommodate dynamic spatial relationships with prey and acoustic complexity of surroundings. Despite independent evolution and different means of sound generation and transmission, whales and bats adaptively change their FOV, suggesting that beamwidth flexibility has been an important driver in the evolution of echolocation for prey tracking.

  12. Passivity/Lyapunov based controller design for trajectory tracking of flexible joint manipulators

    NASA Technical Reports Server (NTRS)

    Sicard, Pierre; Wen, John T.; Lanari, Leonardo

    1992-01-01

    A passivity and Lyapunov based approach to control design for the trajectory tracking problem of flexible joint robots is presented. The basic structure of the proposed controller is the sum of a model-based feedforward and a model-independent feedback. Feedforward selection and solution are analyzed for a general model for flexible joints, and for more specific and practical model structures. Passivity theory is used to design a motor state-based controller in order to input-output stabilize the error system formed by the feedforward. Observability conditions for asymptotic stability are stated and verified. In order to accommodate modeling uncertainties and to allow for the implementation of a simplified feedforward compensation, the stability of the system is analyzed in the presence of approximations in the feedforward by using a Lyapunov based robustness analysis. It is shown that under certain conditions, e.g., the desired trajectory is varying slowly enough, stability is maintained for various approximations of a canonical feedforward.

  13. The functional architecture of the ventral temporal cortex and its role in categorization

    PubMed Central

    Grill-Spector, Kalanit; Weiner, Kevin S.

    2014-01-01

    Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction. PMID:24962370

  14. To speak or not to speak - A multiple resource perspective

    NASA Technical Reports Server (NTRS)

    Tsang, P. S.; Hartzell, E. J.; Rothschild, R. A.

    1985-01-01

    The desirability of employing speech response in a dynamic dual task situation was discussed from a multiple resource perspective. A secondary task technique was employed to examine the time-sharing performance of five dual tasks with various degrees of resource overlap according to the structure-specific resource model of Wickens (1980). The primary task was a visual/manual tracking task which required spatial processing. The secondary task was either another tracking task or a spatial transformation task with one of four input (visual or auditory) and output (manual or speech) configurations. The results show that the dual task performance was best when the primary tracking task was paired with the visual/speech transformation task. This finding was explained by an interaction of the stimulus-central processing-response compatibility of the transformation task and the degree of resource competition between the time-shared tasks. Implications on the utility of speech response were discussed.

  15. Universal Ontology: Attentive Tracking of Objects and Substances across Languages and over Development

    ERIC Educational Resources Information Center

    Cacchione, Trix; Indino, Marcello; Fujita, Kazuo; Itakura, Shoji; Matsuno, Toyomi; Schaub, Simone; Amici, Federica

    2014-01-01

    Previous research has demonstrated that adults are successful at visually tracking rigidly moving items, but experience great difficulties when tracking substance-like "pouring" items. Using a comparative approach, we investigated whether the presence/absence of the grammatical count-mass distinction influences adults and children's…

  16. Visual servoing for a US-guided therapeutic HIFU system by coagulated lesion tracking: a phantom study.

    PubMed

    Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru

    2011-06-01

    Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy to kidney tumours is currently very difficult, due to the poorly visible tumour area and renal motion induced by human respiration. In this research, we propose new methods to track the indistinct tumour area and to compensate for respiratory tumour motion during US-guided HIFU treatment. To track indistinct tumour areas, we detect the US speckle change created by HIFU irradiation: HIFU thermal ablation coagulates tissue in the tumour area, and the intraoperatively created coagulated lesion (CL) is used as a spatial landmark for US visual tracking. Specifically, the condensation algorithm was applied to achieve robust, real-time tracking of the CL speckle pattern in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method using a newly devised phantom model that enables both visual tracking and a thermal response to HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled on captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, whose RMS value was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. Using the CL tracking method, target motion compensation can be realized in a US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.
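    The condensation algorithm used here for CL tracking is a sampling-importance-resampling (particle) filter. The abstract gives no implementation details, so the following is only a generic 1-D sketch of the select-predict-measure cycle, with assumed noise parameters, not the authors' 2-D speckle-pattern version:

    ```python
    import math
    import random

    def condensation_step(particles, weights, measurement,
                          motion_std=1.0, meas_std=2.0):
        """One select-predict-measure cycle of the condensation algorithm."""
        n = len(particles)
        # Select: resample particles in proportion to their weights.
        cdf, total = [], 0.0
        for w in weights:
            total += w
            cdf.append(total)
        resampled = []
        for _ in range(n):
            u = random.random() * total
            lo, hi = 0, n - 1
            while lo < hi:                     # binary search in the CDF
                mid = (lo + hi) // 2
                if cdf[mid] < u:
                    lo = mid + 1
                else:
                    hi = mid
            resampled.append(particles[lo])
        # Predict: diffuse each particle with the motion model.
        predicted = [p + random.gauss(0.0, motion_std) for p in resampled]
        # Measure: reweight by the likelihood of the new observation.
        new_w = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2)
                 for p in predicted]
        return predicted, new_w

    random.seed(0)
    particles = [random.uniform(0.0, 100.0) for _ in range(500)]
    weights = [1.0] * 500
    true_pos = 40.0
    for _ in range(30):
        true_pos += 0.5                        # the landmark drifts slowly
        z = true_pos + random.gauss(0.0, 2.0)  # noisy position measurement
        particles, weights = condensation_step(particles, weights, z)
    estimate = sum(p * w for p, w in zip(particles, weights)) / sum(weights)
    # `estimate` tracks `true_pos` to within a few units
    ```

    In the paper the "measurement" step would instead score each particle by the similarity of the local US speckle pattern to the CL template, but the select-predict-measure structure is the same.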

  17. Covert enaction at work: Recording the continuous movements of visuospatial attention to visible or imagined targets by means of Steady-State Visual Evoked Potentials (SSVEPs).

    PubMed

    Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio

    2016-01-01

    Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, although during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than ramp motion, and better with moving targets than stationary ones. 
The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests including covert attention movements in enactive theories of mental imagery. Copyright © 2015 Elsevier Ltd. All rights reserved.
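The abstract says the two hemifield SSVEP amplitudes, which vary reciprocally with attention, were "combined into a single quantity" that follows the target's sinusoidal trajectory. The exact formula is not given, so the sketch below assumes a standard normalized left-right difference and a toy reciprocal modulation:

```python
import math

def attention_index(amp_left, amp_right):
    # Normalized difference of the two SSVEP amplitudes: positive when
    # attention is over the left-flicker hemifield, negative over the right.
    return (amp_left - amp_right) / (amp_left + amp_right)

# Simulate covert tracking of a target oscillating sinusoidally at 0.25 Hz
# for 8 s, with SSVEP amplitudes sampled at 10 Hz.
fs, f_target = 10.0, 0.25
index = []
for i in range(int(8 * fs)):
    x = math.sin(2 * math.pi * f_target * i / fs)  # target position in [-1, 1]
    a_left = 1.0 + 0.3 * x    # attended-side response grows ...
    a_right = 1.0 - 0.3 * x   # ... unattended-side response shrinks
    index.append(attention_index(a_left, a_right))
# `index` reproduces the target's sinusoidal trajectory, scaled by 0.3.
```

The normalization makes the index insensitive to overall SSVEP amplitude, which is useful when comparing overt and covert tracking conditions.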

  18. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved extremely challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data are also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. 
In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.

  19. Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters

    PubMed Central

    Zhang, Sirou; Qiao, Xiaoya

    2017-01-01

    In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. Tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which the corresponding weights at five scales are employed to determine the final target scale. Then, an adaptive updating mechanism based on scene classification is presented: we classify video scenes into four categories by video content analysis and, according to the target's scene, exploit the adaptive updating mechanism to update the kernel correlation filter, improving the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. The results obtained with the proposed algorithm improve on those of the KCF tracker by 33.3%, 15%, 6%, 21.9% and 19.8% in scenes with scale variation; partial or long-term, large-area occlusion; deformation; fast motion; and out-of-view targets, respectively. PMID:29140311
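    KCF-style trackers refresh their filter model each frame by linear interpolation between the old model and a filter trained on the current frame; a "scene-aware" scheme varies the interpolation rate with the detected scene type. A minimal sketch of that update rule (the learning rates below are invented for illustration, not the paper's values):

    ```python
    def adaptive_update(model, frame_model, scene):
        # Scene-dependent learning rate eta: frozen under occlusion so the
        # filter does not drift onto the occluder, faster under deformation
        # so the template can follow the new shape. (Illustrative values.)
        rates = {"normal": 0.02, "scale_variation": 0.01,
                 "deformation": 0.08, "occlusion": 0.0}
        eta = rates[scene]
        # Standard linear-interpolation model update used by KCF-style trackers.
        return [(1.0 - eta) * m + eta * f for m, f in zip(model, frame_model)]

    model = [1.0, 0.0, 0.5]    # current filter coefficients (toy 1-D case)
    frame = [0.0, 1.0, 0.5]    # filter trained on the current frame alone
    frozen = adaptive_update(model, frame, "occlusion")     # unchanged
    blended = adaptive_update(model, frame, "deformation")  # 8% toward `frame`
    ```

    A fixed learning rate is exactly what makes the plain KCF drift during occlusion; making `eta` depend on the scene class is the core of the adaptive mechanism.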

  20. STED microscopy visualizes energy deposition of single ions in a solid-state detector beyond diffraction limit

    NASA Astrophysics Data System (ADS)

    Niklas, M.; Henrich, M.; Jäkel, O.; Engelhardt, J.; Abdollahi, A.; Greilich, S.

    2017-05-01

    Fluorescent nuclear track detectors (FNTDs) allow for visualization of single-particle traversals in clinical ion beams. The point spread function of the confocal readout has so far hindered a more detailed characterization of the track spots—the ion's characteristic signature left in the FNTD. Here we report on reading out the FNTD by optical nanoscopy, namely stimulated emission depletion (STED) microscopy. For the first time, it was possible to visualize the track spots of carbon ions and protons beyond the diffraction limit of conventional light microscopy, with a resolving power of approximately 80 nm (confocal: 320 nm). A clear discrimination of the spatial width, defined by the full width at half maximum of track spots from particles (protons and carbon ions) with a linear energy transfer (LET) ranging from approximately 2 to 1016 keV µm⁻¹, was possible. The results suggest that the width depends on LET but not on particle charge, within the uncertainties. Discriminating particle type by width thus does not seem possible (as is also the case with confocal microscopy). The increased resolution, however, could allow for refined determination of the cross-sectional area subject to substantial energy deposition. This work could pave the way towards optical-nanoscopy-based analysis of radiation-induced cellular response using cell-fluorescent ion track hybrid detectors.

  1. A time domain inverse dynamic method for the end point tracking control of a flexible manipulator

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo; Book, Wayne J.

    1991-01-01

    The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into its causal and anticausal parts, we calculated the torque and the trajectories of all state variables for a given end-point trajectory. The interpretation of this method in the frequency domain is explained in detail using the two-sided Laplace transform and the convolution integral. Open-loop control using the inverse dynamic method showed excellent results in simulation. For real applications, a practical control strategy is proposed that adds a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.

  2. Adaptive integral dynamic surface control of a hypersonic flight vehicle

    NASA Astrophysics Data System (ADS)

    Aslam Butt, Waseem; Yan, Lin; Amezquita S., Kendrick

    2015-07-01

    In this article, non-linear adaptive dynamic surface airspeed and flight-path angle control designs are presented for the longitudinal dynamics of a flexible hypersonic flight vehicle. The tracking performance of the control design is enhanced by introducing a novel integral term that helps avoid a large initial control signal. To ensure feasibility, the design scheme incorporates magnitude and rate constraints on the actuator commands. The uncertain non-linear functions are approximated by an efficient use of neural networks to reduce the computational load. A detailed stability analysis shows that all closed-loop signals are uniformly ultimately bounded and the ? tracking performance is guaranteed. The robustness of the design scheme is verified through numerical simulations of the flexible flight vehicle model.

  3. Generating visual flickers for eliciting robust steady-state visual evoked potentials at flexible frequencies using monitor refresh rate.

    PubMed

    Nakanishi, Masaki; Wang, Yijun; Wang, Yu-Te; Mitsukura, Yasue; Jung, Tzyy-Ping

    2014-01-01

    In the study of steady-state visual evoked potentials (SSVEPs), it remains a challenge to present visual flickers at flexible frequencies using the monitor refresh rate. For example, in an SSVEP-based brain-computer interface (BCI), it is difficult to present a large number of visual flickers simultaneously on a monitor. This study explores whether, and how, a newly proposed frequency approximation approach changes the signal characteristics of SSVEPs. At 10 Hz and 12 Hz, the SSVEPs elicited using two refresh rates (75 Hz and 120 Hz) were measured separately to represent the approximation and constant-period approaches. The study compared the amplitude, signal-to-noise ratio (SNR), phase, latency, scalp distribution, and frequency detection accuracy of SSVEPs elicited by the two approaches. To further prove the efficacy of the approximation approach, an eight-target BCI using frequencies from 8-15 Hz was implemented. The SSVEPs elicited by the two approaches were found comparable with regard to all parameters except the amplitude and SNR of SSVEPs at 12 Hz. The BCI obtained an average information transfer rate (ITR) of 95.0 bits/min across 10 subjects, with a maximum ITR of 120 bits/min for two subjects, the highest ITR reported for SSVEP-based BCIs. This study clearly showed that the frequency approximation approach can elicit robust SSVEPs at flexible frequencies using the monitor refresh rate, and can thereby greatly facilitate SSVEP-related studies in neural engineering and visual neuroscience.
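    The frequency approximation idea is to sample the ideal stimulus waveform at the monitor refresh rate: each frame is simply on or off according to the waveform's sign at that frame's onset, so the flicker period need not be an integer number of frames. A sketch assuming a square-wave stimulus (details such as the exact waveform are an assumption, not taken from this abstract):

    ```python
    import math

    def flicker_frames(freq_hz, refresh_hz, n_frames):
        # Frame i is "on" when the ideal square wave at freq_hz is in its high
        # half-cycle at frame onset. The on/off pattern is no longer strictly
        # periodic in frames, but its mean rate matches the requested frequency.
        return [1 if math.sin(2 * math.pi * freq_hz * i / refresh_hz) >= 0 else 0
                for i in range(n_frames)]

    # 11 Hz has no constant frame period at a 60 Hz refresh rate
    # (60 / 11 ≈ 5.45 frames per cycle), yet the approximated sequence still
    # produces ~110 on-off cycles in 600 frames (10 s).
    seq = flicker_frames(11, 60, 600)
    onsets = sum(1 for i in range(1, 600) if seq[i] == 1 and seq[i - 1] == 0)
    ```

    With the constant-period approach, only refresh-rate divisors (60, 30, 20, 15, 12, 10 Hz, ...) are realizable; the approximation lifts that restriction, which is what allows an eight-target BCI spanning 8-15 Hz on one monitor.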

  4. Scheduling the future NASA Space Network: Experiences with a flexible scheduling prototype

    NASA Technical Reports Server (NTRS)

    Happell, Nadine; Moe, Karen L.; Minnix, Jay

    1993-01-01

    NASA's Space Network (SN) provides telecommunications and tracking services to low earth orbiting spacecraft. One proposal for improving resource allocation and automating conflict resolution for the SN is the concept of flexible scheduling. In this concept, each Payload Operations Control Center (POCC) will possess a Space Network User POCC Interface (SNUPI) to support the development and management of flexible requests. Flexible requests express the flexibility, constraints, and repetitious nature of the user's communications requirements. Flexible scheduling is expected to improve SN resource utilization and user satisfaction, as well as reduce the effort to produce and maintain a schedule. A prototype testbed has been developed to better understand flexible scheduling as it applies to the SN. This testbed consists of a SNUPI workstation, an SN scheduler, and a flexible request language that conveys information between the two systems. All three are being evaluated by operations personnel. Benchmark testing is being conducted on the scheduler to quantify the productivity improvements achieved with flexible requests.

  5. Flexible Coding of Visual Working Memory Representations during Distraction.

    PubMed

    Lorenc, Elizabeth S; Sreenivasan, Kartik K; Nee, Derek E; Vandenbroucke, Annelinde R E; D'Esposito, Mark

    2018-06-06

    Visual working memory (VWM) recruits a broad network of brain regions, including prefrontal, parietal, and visual cortices. Recent evidence supports a "sensory recruitment" model of VWM, whereby precise visual details are maintained in the same stimulus-selective regions responsible for perception. A key question in evaluating the sensory recruitment model is how VWM representations persist through distracting visual input, given that the early visual areas that putatively represent VWM content are susceptible to interference from visual stimulation. To address this question, we used a functional magnetic resonance imaging inverted encoding model approach to quantitatively assess the effect of distractors on VWM representations in early visual cortex and the intraparietal sulcus (IPS), another region previously implicated in the storage of VWM information. This approach allowed us to reconstruct VWM representations of orientation, both before and after visual interference, and to examine whether oriented distractors systematically biased these representations. In our human participants (both male and female), we found that orientation information was maintained simultaneously in early visual areas and IPS in anticipation of possible distraction, and these representations persisted in the absence of distraction. Importantly, early visual representations were susceptible to interference; VWM orientations reconstructed from visual cortex were significantly biased toward distractors, corresponding to a small attractive bias in behavior. In contrast, IPS representations did not show such a bias. These results provide quantitative insight into the effect of interference on VWM representations, and they suggest a dynamic tradeoff between visual and parietal regions that allows flexible adaptation to task demands in service of VWM. 
SIGNIFICANCE STATEMENT Despite considerable evidence that stimulus-selective visual regions maintain precise visual information in working memory, it remains unclear how these representations persist through subsequent input. Here, we used quantitative model-based fMRI analyses to reconstruct the contents of working memory and examine the effects of distracting input. Although representations in the early visual areas were systematically biased by distractors, those in the intraparietal sulcus appeared distractor-resistant. In contrast, early visual representations were most reliable in the absence of distraction. These results demonstrate the dynamic, adaptive nature of visual working memory processes, and provide quantitative insight into the ways in which representations can be affected by interference. Further, they suggest that current models of working memory should be revised to incorporate this flexibility. Copyright © 2018 the authors 0270-6474/18/385267-10$15.00/0.

  6. Distributed video data fusion and mining

    NASA Astrophysics Data System (ADS)

    Chang, Edward Y.; Wang, Yuan-Fang; Rodoplu, Volkan

    2004-09-01

    This paper presents an event-sensing paradigm for intelligent event analysis in a wireless, ad hoc, multi-camera video surveillance system. In particular, we present statistical methods that we have developed to support three aspects of event sensing: 1) energy-efficient, resource-conserving, and robust sensor data fusion and analysis; 2) intelligent event modeling and recognition; and 3) rapid deployment, dynamic configuration, and continuous operation of the camera networks. We outline our preliminary results and discuss future directions that research might take.

  7. Simulation of solar array slewing of Indian remote sensing satellite

    NASA Astrophysics Data System (ADS)

    Maharana, P. K.; Goel, P. S.

    The effect of flexible arrays on sun tracking for the IRS satellite is studied. Equations of motion for satellites carrying a rotating flexible appendage are developed following the Newton-Euler approach and utilizing the constrained modes of the appendage. The drive torque, detent torque and friction torque in the solar array drive assembly (SADA) are included in the model. Extensive simulations of the slewing motion are carried out. The phenomena of back-stepping, step-missing and step-slipping, and the influence of array flexibility in the acquisition mode, are observed for certain combinations of parameters.

  8. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two in real-world applications is very challenging: more accurate tracking tasks often require longer processing times, while quicker responses are more prone to errors, so a trade-off between accuracy and speed is required. This paper aims to meet both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that automatically tracks a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed to estimate the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour-selection process, which combines a colour segmentation method (Delta E) with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras with image-averaging filters are used to obtain clear and steady images. This paper also examines a new technique, named Controllable Region of interest based on Circular Hough Transform (CRCHT), for generating and controlling the observation search window in order to increase the computational speed of the tracking system. Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix.
    The results show the applicability of the proposed approach, which tracks the moving robot with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique, which saves up to 60% of the overall time required for image processing. PMID:28067860
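    At its core, the Circular Hough Transform lets every edge pixel vote for all candidate circle centres at a known radius; the accumulator maximum is the detected centre. A minimal pure-Python sketch of that voting scheme (the paper's enhanced CHT, Delta E colour selection and CRCHT windowing are not reproduced here):

    ```python
    import math
    from collections import defaultdict

    def circular_hough(edge_points, radius, theta_step_deg=5):
        # Each edge point votes for every centre lying `radius` away from it;
        # the accumulator cell with the most votes is the detected centre.
        acc = defaultdict(int)
        for x, y in edge_points:
            for deg in range(0, 360, theta_step_deg):
                t = math.radians(deg)
                a = round(x - radius * math.cos(t))
                b = round(y - radius * math.sin(t))
                acc[(a, b)] += 1
        return max(acc, key=acc.get)

    # Synthetic "edge image": 36 points on a circle of radius 20 centred at (50, 40).
    pts = [(50 + 20 * math.cos(math.radians(d)), 40 + 20 * math.sin(math.radians(d)))
           for d in range(0, 360, 10)]
    centre = circular_hough(pts, radius=20)
    # centre == (50, 40)
    ```

    Restricting the voting to a small search window around the previous detection, as CRCHT does, shrinks the accumulator and is what yields the reported image-processing speed-up.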

  9. Reading the Rover Tracks

    NASA Image and Video Library

    2012-08-29

    The straight lines in Curiosity's zigzag track marks are Morse code for JPL. The footprint is an important reference mark that the rover can use to drive more precisely via a system called visual odometry.

  10. Adaptive vehicle motion estimation and prediction

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.

  11. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    NASA Astrophysics Data System (ADS)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.

  12. Hybrid procedure for total laryngectomy with a flexible robot-assisted surgical system.

    PubMed

    Schuler, Patrick J; Hoffmann, Thomas K; Veit, Johannes A; Rotter, Nicole; Friedrich, Daniel T; Greve, Jens; Scheithauer, Marc O

    2017-06-01

    Total laryngectomy is a standard procedure in head-and-neck surgery for the treatment of cancer patients. Recent clinical experience has indicated a clinical benefit for patients undergoing transoral robot-assisted total laryngectomy (TORS-TL) with commercially available systems. Here, a new hybrid procedure for total laryngectomy is presented. TORS-TL was performed in human cadavers (n = 3) using a transoral-transcervical hybrid procedure. The transoral approach was performed with a flexible robot-assisted surgical system (Flex®) and compatible flexible instruments. Transoral access and visualization of anatomical landmarks were studied in detail. Total laryngectomy is feasible with a combined transoral-transcervical approach using the flexible robot-assisted surgical system. Transoral visualization of all anatomical structures is sufficient. The flexible design of the robot is advantageous for transoral surgery of the laryngeal structures. Transoral robot-assisted surgery has the potential to reduce morbidity, hospital time and fistula rates in a selected group of patients. Initial clinical studies and further development of supplemental tools are in progress. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Autonomous Flight Rules - A Concept for Self-Separation in U.S. Domestic Airspace

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Cotton, William B.

    2011-01-01

    Autonomous Flight Rules (AFR) are proposed as a new set of operating regulations in which aircraft navigate on tracks of their choice while self-separating from traffic and weather. AFR would exist alongside Instrument and Visual Flight Rules (IFR and VFR) as one of three available flight options for any appropriately trained and qualified operator with the necessary certified equipment. Historically, ground-based separation services evolved by necessity as aircraft began operating in the clouds and were unable to see each other. Today, technologies for global navigation, airborne surveillance, and onboard computing enable the functions of traffic conflict management to be fully integrated with navigation procedures onboard the aircraft. By self-separating, aircraft can operate with more flexibility and fewer restrictions than are required when using ground-based separation. The AFR concept is described in detail and provides practical means by which self-separating aircraft could share the same airspace as IFR and VFR aircraft without disrupting the ongoing processes of Air Traffic Control.

  14. The promise of Lean in health care.

    PubMed

    Toussaint, John S; Berry, Leonard L

    2013-01-01

    An urgent need in American health care is improving quality and efficiency while controlling costs. One promising management approach implemented by some leading health care institutions is Lean, a quality improvement philosophy and set of principles originated by the Toyota Motor Company. Health care cases reveal that Lean is as applicable in complex knowledge work as it is in assembly-line manufacturing. When well executed, Lean transforms how an organization works and creates an insatiable quest for improvement. In this article, we define Lean and present 6 principles that constitute the essential dynamic of Lean management: attitude of continuous improvement, value creation, unity of purpose, respect for front-line workers, visual tracking, and flexible regimentation. Health care case studies illustrate each principle. The goal of this article is to provide a template for health care leaders to use in considering the implementation of the Lean management system or in assessing the current state of implementation in their organizations. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  15. Linking Neurons to Network Function and Behavior by Two-Photon Holographic Optogenetics and Volumetric Imaging.

    PubMed

    Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig

    2017-05-17

    We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm

    PubMed Central

    Madan, Christopher R

    2014-01-01

    When studying animal behaviour within an open environment, movement-related data are often important for behavioural analyses. Therefore, simple and efficient techniques are needed to present and analyze the data of such movements. However, it is challenging to present both spatial and temporal information of movements within a two-dimensional image representation. To address this challenge, we developed the spectral time-lapse (STL) algorithm that re-codes an animal’s position at every time point with a time-specific color, and overlays it with a reference frame of the video, to produce a summary image. We additionally incorporated automated motion tracking, such that the animal’s position can be extracted and summary statistics such as path length and duration can be calculated, as well as instantaneous velocity and acceleration. Here we describe the STL algorithm and offer a freely available MATLAB toolbox that implements the algorithm and allows for a large degree of end-user control and flexibility. PMID:25580219
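    The core of the STL idea, recoding each tracked position with a time-specific colour and deriving summary statistics, can be sketched briefly. The released toolbox is MATLAB; this Python version with a simple blue-to-red colour ramp is only illustrative:

    ```python
    import math

    def stl_summary(positions, fps):
        """Time-code tracked (x, y) positions and compute STL-style statistics."""
        n = len(positions)
        # Time-specific colour per sample: a blue -> red RGB ramp over the trial.
        colours = [(i / (n - 1), 0.0, 1.0 - i / (n - 1)) for i in range(n)]
        steps = [math.dist(positions[i], positions[i + 1]) for i in range(n - 1)]
        path_length = sum(steps)
        duration = (n - 1) / fps
        velocity = [d * fps for d in steps]                  # instantaneous speed
        acceleration = [(velocity[i + 1] - velocity[i]) * fps
                        for i in range(len(velocity) - 1)]
        return colours, path_length, duration, velocity, acceleration

    pos = [(0, 0), (3, 4), (3, 4), (6, 8)]          # 4 tracked frames at 2 fps
    colours, length, duration, vel, acc = stl_summary(pos, fps=2.0)
    # length == 10.0, duration == 1.5 s, vel == [10.0, 0.0, 10.0]
    ```

    To produce the summary image itself, each `(x, y)` would be plotted in its time-coded colour over a reference frame of the video, collapsing the temporal dimension into hue.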

  17. Measuring advertising effectiveness in Travel 2.0 websites through eye-tracking technology.

    PubMed

    Muñoz-Leiva, Francisco; Hernández-Méndez, Janet; Gómez-Carmona, Diego

    2018-03-06

    The advent of Web 2.0 is changing tourists' behaviors, prompting them to take on a more active role in preparing their travel plans. It is also leading tourism companies to adapt their marketing strategies to different online social media. The present study analyzes advertising effectiveness in social media in terms of customers' visual attention and self-reported memory (recall). Data were collected through a within-subjects and between-groups design based on eye-tracking technology, followed by a self-administered questionnaire. Participants were instructed to visit three Travel 2.0 websites (T2W): a hotel's blog, social network profile (Facebook), and virtual community profile (Tripadvisor). Overall, the results revealed greater advertising effectiveness for the hotel social network, and visual attention measures based on eye-tracking data differed from measures of self-reported recall. Participants attended to the ad banner at a low level of awareness, which explains why the associations with the ad did not activate its subsequent recall. The paper represents a pioneering application of eye-tracking technology and examines the possible impact of visual marketing stimuli on user T2W-related behavior. The practical implications identified in this research, along with its limitations and future research opportunities, are of interest both for further theoretical development and practical application. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Infrared dim and small target detecting and tracking method inspired by Human Visual System

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian

    2014-01-01

    Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: the contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to various drawbacks. This paper proposes a novel method that combines all three mechanisms. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target (the attention point) to further enhance the dim small target. Finally, a Proportional-Integral-Derivative (PID) algorithm is introduced for the first time to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
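    Of the three mechanisms, the eye-movement simulation is the most algorithmically explicit: a PID controller predicts where the attention point should move in the next frame. A minimal one-dimensional sketch of that idea (the gains and class name are illustrative assumptions, not values from the paper):

```python
class PIDPredictor:
    """Sketch of a PID-style predictor for the attention point: the
    controller output nudges the predicted point toward the target's
    observed motion. Gains are illustrative, not the paper's values."""

    def __init__(self, kp=0.6, ki=0.1, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, predicted, observed):
        # Error between the observed target position and our prediction.
        error = observed - predicted
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # Next attention point = current prediction + PID correction.
        return predicted + (self.kp * error
                            + self.ki * self.integral
                            + self.kd * derivative)
```

    In a 2D tracker one such controller would run per image axis, feeding the predicted attention point to the Gaussian attention window for the next frame.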

  19. The artificial retina for track reconstruction at the LHC crossing rate

    NASA Astrophysics Data System (ADS)

    Abba, A.; Bedeschi, F.; Citterio, M.; Caponio, F.; Cusimano, A.; Geraci, A.; Marino, P.; Morello, M. J.; Neri, N.; Punzi, G.; Piucci, A.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.

    2016-04-01

    We present the results of an R&D study for a specialized processor capable of precisely reconstructing events with hundreds of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus suitable for processing LHC events at the full crossing frequency. For this purpose we design and test a massively parallel pattern-recognition algorithm, inspired by the current understanding of the mechanisms adopted by the primary visual cortex of mammals in the early stages of visual-information processing. The detailed geometry and charged-particle activity of a large tracking detector are simulated and used to assess the performance of the artificial retina algorithm. We find that high-quality tracking in large detectors is possible with sub-microsecond latencies when the algorithm is implemented in modern, high-speed, high-bandwidth FPGA devices.
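    The retina concept maps naturally onto a Hough-like accumulation: each "cell" corresponds to a candidate track, and every detector hit contributes a weight that falls off with its distance from that candidate. A toy sketch under that reading (straight-line tracks in 2D, Gaussian weighting; all parameter values and the function name are illustrative):

```python
import math

def retina_response(hits, candidates, sigma=0.5):
    """Toy artificial-retina accumulation: each candidate (m, q) is a
    cell for the line y = m*x + q; each hit (x, y) adds a Gaussian
    weight based on its residual from that line. Cells with locally
    maximal response flag likely tracks. Illustrative sketch only."""
    responses = []
    for m, q in candidates:
        r = sum(math.exp(-((y - (m * x + q)) ** 2) / (2 * sigma ** 2))
                for x, y in hits)
        responses.append(r)
    return responses
```

    The massively parallel character of the real design comes from evaluating all cells simultaneously in FPGA logic rather than looping over them as here.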

  20. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
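    In Ginzburg-Landau data a vortex threads a mesh face when the phase of the complex order parameter winds by a multiple of 2π around the face's vertices; a sketch of that winding test is below (pure Python, illustrative; the paper's actual extraction builds connected graphs from such face-level detections):

```python
import cmath
import math

def winding_number(psi_loop):
    """Sum the wrapped phase differences of a complex order parameter
    sampled around a closed loop of mesh vertices. A net winding of
    +/-1 (in units of 2*pi) means a flux vortex threads the face."""
    total = 0.0
    n = len(psi_loop)
    for i in range(n):
        d = cmath.phase(psi_loop[(i + 1) % n]) - cmath.phase(psi_loop[i])
        # Wrap each phase difference into (-pi, pi].
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))
```

    Faces with nonzero winding in adjacent mesh cells are then linked into the 1D vortex curves that the visualization renders.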

  1. Preliminary study on magnetic tracking-based planar shape sensing and navigation for flexible surgical robots in transoral surgery: methods and phantom experiments.

    PubMed

    Song, Shuang; Zhang, Changchun; Liu, Li; Meng, Max Q-H

    2018-02-01

    Flexible surgical robots can work in confined and complex environments, which makes them a good option for minimally invasive surgery. In order to utilize flexible manipulators in complicated and constrained surgical environments, it is of great significance to monitor the position and shape of the curvilinear manipulator in real time during the procedure. In this paper, we propose a magnetic tracking-based planar shape sensing and navigation system for flexible surgical robots in transoral surgery. The system provides real-time tip position and shape information of the robot during the operation. A wire-driven flexible robot with three degrees of freedom serves as the manipulator. A permanent magnet is mounted at the distal end of the robot, and its magnetic field is sensed by a magnetic sensor array, so the position and orientation of the tip can be estimated using a tracking method. A shape sensing algorithm then estimates the real-time shape based on the tip pose. With the tip pose and shape displayed in the 3D reconstructed CT model, navigation can be achieved. Using the proposed system, we carried out planar navigation experiments on a skull phantom to touch three different target positions under the navigation of the skull display interface. During the experiments, the real-time shape was well monitored and the distance errors between the robot tip and the targets in the skull were recorded. The mean navigation error is [Formula: see text] mm, while the maximum error is 3.2 mm. The proposed method has the advantages that no sensors need to be mounted on the robot and there is no line-of-sight problem. Experimental results verified the feasibility of the proposed method.
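    Magnet-based trackers of this kind typically fit a point-dipole field model to the sensor-array readings; the abstract does not spell out the model, so the forward model below is an assumed, illustrative sketch (the solver that inverts it for magnet pose is omitted):

```python
import math

def dipole_field(moment, magnet_pos, sensor_pos):
    """Point-dipole forward model, B = (mu0/4pi) * (3*(m.rhat)*rhat - m) / r^3,
    giving the field (in tesla) at one sensor. A tracker would adjust
    magnet_pos and moment until the predicted fields match the measured
    ones at every sensor in the array. Illustrative sketch only."""
    mu0_4pi = 1e-7  # mu0 / (4*pi), in T*m/A
    r = [s - p for s, p in zip(sensor_pos, magnet_pos)]
    rn = math.sqrt(sum(c * c for c in r))
    rhat = [c / rn for c in r]
    m_dot_rhat = sum(mi * ri for mi, ri in zip(moment, rhat))
    return [mu0_4pi * (3 * m_dot_rhat * ri - mi) / rn ** 3
            for mi, ri in zip(moment, rhat)]
```

    Because the model depends only on the relative magnet-sensor geometry, the robot itself carries no sensors, which is the line-of-sight advantage the paper highlights.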

  2. Mars @ ASDC

    NASA Astrophysics Data System (ADS)

    Carraro, Francesco

    "Mars @ ASDC" is a project born with the goal of using the new web technologies to assist researches involved in the study of Mars. This project employs Mars map and javascript APIs provided by Google to visualize data acquired by space missions on the planet. So far, visualization of tracks acquired by MARSIS and regions observed by VIRTIS-Rosetta has been implemented. The main reason for the creation of this kind of tool is the difficulty in handling hundreds or thousands of acquisitions, like the ones from MARSIS, and the consequent difficulty in finding observations related to a particular region. This led to the development of a tool which allows to search for acquisitions either by defining the region of interest through a set of geometrical parameters or by manually selecting the region on the map through a few mouse clicks The system allows the visualization of tracks (acquired by MARSIS) or regions (acquired by VIRTIS-Rosetta) which intersect the user defined region. MARSIS tracks can be visualized both in Mercator and polar projections while the regions observed by VIRTIS can presently be visualized only in Mercator projection. The Mercator projection is the standard map provided by Google. The polar projections are provided by NASA and have been developed to be used in combination with APIs provided by Google The whole project has been developed following the "open source" philosophy: the client-side code which handles the functioning of the web page is written in javascript; the server-side code which executes the searches for tracks or regions is written in PHP and the DB which undergoes the system is MySQL.

  3. Dual-Tasking Alleviated Sleep Deprivation Disruption in Visuomotor Tracking: An fMRI Study

    ERIC Educational Resources Information Center

    Gazes, Yunglin; Rakitin, Brian C.; Steffener, Jason; Habeck, Christian; Butterfield, Brady; Basner, Robert C.; Ghez, Claude; Stern, Yaakov

    2012-01-01

    Effects of dual-responding on tracking performance after 49-h of sleep deprivation (SD) were evaluated behaviorally and with functional magnetic resonance imaging (fMRI). Continuous visuomotor tracking was performed simultaneously with an intermittent color-matching visual detection task in which a pair of color-matched stimuli constituted a…

  4. Prior Knowledge and Online Inquiry-Based Science Reading: Evidence from Eye Tracking

    ERIC Educational Resources Information Center

    Ho, Hsin Ning Jessie; Tsai, Meng-Jung; Wang, Ching-Yeh; Tsai, Chin-Chung

    2014-01-01

    This study employed eye-tracking technology to examine how students with different levels of prior knowledge process text and data diagrams when reading a web-based scientific report. Students' visual behaviors were tracked and recorded when they read a report demonstrating the relationship between the greenhouse effect and global climate…

  5. Use of Cognitive and Metacognitive Strategies in Online Search: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Zhou, Mingming; Ren, Jing

    2016-01-01

    This study used eye-tracking technology to track students' eye movements while searching information on the web. The research question guiding this study was "Do students with different search performance levels have different visual attention distributions while searching information online? If yes, what are the patterns for high and low…

  6. High-performance object tracking and fixation with an online neural estimator.

    PubMed

    Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian

    2007-02-01

    Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields, including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. Accordingly, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take into account the physical (Lagrangian dynamics) properties of the vision system in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities and no joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate boundedness of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.

  7. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds

    PubMed Central

    Howe, Piers D. L.

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources. PMID:28410383

  8. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds.

    PubMed

    Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources.

  9. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  10. Investigation of the effects of sleeper-passing impacts on the high-speed train

    NASA Astrophysics Data System (ADS)

    Wu, Xingwen; Cai, Wubin; Chi, Maoru; Wei, Lai; Shi, Huailong; Zhu, Minhao

    2015-12-01

    The sleeper-passing impact has always been considered negligible under normal conditions, yet experimental data obtained from a high-speed train in cold weather revealed significant sleeper-passing impacts on the axle box, bogie frame, and car body. Therefore, in this study, a vertical coupled vehicle/track dynamic model was developed to investigate the sleeper-passing impacts and their effects on the dynamic performance of the high-speed train. In the model, the vehicle is represented with 10 degrees of freedom, and the track is formulated as two rails supported on discrete supports using the finite element method. The contact forces between wheel and rail are estimated using nonlinear Hertz contact theory. Parametric studies are conducted to analyze the effects of both vehicle speed and discrete support stiffness on the sleeper-passing impacts. The results show that the sleeper-passing impacts become extremely significant as the support stiffness of the track increases, especially when the frequencies of the sleeper-passing impacts approach the resonance frequencies of the wheel/track system. The damping of the primary suspension can effectively lower the magnitude of the impacts in the resonance speed ranges, but has little effect in other speed ranges. Finally, a more comprehensive coupled vehicle/track dynamic model incorporating a flexible wheelset is developed to discuss the sleeper-passing-induced flexible vibration of the wheelset.
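    The nonlinear Hertz wheel/rail normal force used in such models is commonly written as F = (δ/G)^(3/2) for compression δ > 0, with zero force on separation. A sketch of that law (the contact constant G and the sample penetrations are illustrative placeholders, not the paper's values):

```python
def hertz_contact_force(penetration, contact_constant=9.37e-8):
    """Nonlinear Hertz wheel/rail normal force (sketch):
    F = (delta / G) ** 1.5 for positive penetration delta, and zero
    when wheel and rail separate (contact cannot pull). The contact
    constant G, in m/N^(2/3), is an illustrative placeholder."""
    if penetration <= 0.0:
        return 0.0  # loss of contact: no tensile force
    return (penetration / contact_constant) ** 1.5
```

    The 3/2 exponent is what makes sleeper-passing excitation nonlinear: doubling the penetration raises the force by a factor of 2^1.5 ≈ 2.83 rather than 2.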

  11. Oculometric Assessment of Dynamic Visual Processing

    NASA Technical Reports Server (NTRS)

    Liston, Dorion Bryce; Stone, Lee

    2014-01-01

    Eye movements are the most frequent (3 per second), shortest-latency (150-250 ms), and biomechanically simplest (1 joint, no inertial complexities) voluntary motor behavior in primates, providing a model system to assess sensorimotor disturbances arising from trauma, fatigue, aging, or disease states (e.g., Diefendorf and Dodge, 1908). We developed a 15-minute behavioral tracking protocol consisting of randomized step-ramp radial target motion to assess several aspects of the behavioral response to dynamic visual motion, including pursuit initiation, steady-state tracking, direction-tuning, and speed-tuning thresholds. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance (Stone and Krauzlis, 2003; Krukowski and Stone, 2005; Stone et al., 2009; Liston and Stone, 2014), and may prove to be a useful assessment tool for functional impairments of dynamic visual processing.

  12. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    PubMed Central

    Qin, Lei; Snoussi, Hichem; Abdallah, Fahed

    2014-01-01

    We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
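    The region covariance descriptor at the heart of this approach is simple to state: collect one feature vector per pixel in the region (e.g. coordinates, intensity, gradients) and take their sample covariance, giving a compact d x d model of the region. A minimal sketch (the feature choice is the tracker's; this plain-Python version is illustrative):

```python
def region_covariance(features):
    """Sample covariance of per-pixel feature vectors: given n feature
    vectors of dimension d for one image region, return the d x d
    covariance matrix used as the region descriptor. Sketch only;
    real implementations use integral images for speed."""
    n, d = len(features), len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        centered = [f[k] - mean[k] for k in range(d)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += centered[i] * centered[j] / (n - 1)
    return cov
```

    Matching regions then means comparing covariance matrices (typically with a Riemannian metric rather than element-wise distance, since covariances do not live in a flat space).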

  13. Visible propagation from invisible exogenous cueing.

    PubMed

    Lin, Zhicheng; Murray, Scott O

    2013-09-20

    Perception and performance are affected not just by what we see but also by what we do not see: inputs that escape our awareness. While conscious processing and unconscious processing have been assumed to be separate and independent, here we report the propagation of unconscious exogenous cueing as determined by conscious motion perception. In a paradigm combining masked exogenous cueing and apparent motion, we show that, when an onset cue was rendered invisible, the unconscious exogenous cueing effect traveled, manifesting at uncued locations (4° apart) in accordance with conscious perception of visual motion; the effect diminished when the cue-to-target distance was 8°. In contrast, conscious exogenous cueing manifested at both distances. Further evidence reveals that the unconscious and conscious nonretinotopic effects could not be explained by an attentional gradient, nor by bottom-up, energy-based motion mechanisms; rather, they were subserved by top-down, tracking-based motion mechanisms. We thus term these effects mobile cueing. Taken together, unconscious mobile cueing effects (a) demonstrate a previously unknown degree of flexibility of unconscious exogenous attention; (b) embody a simultaneous dissociation and association of attention and consciousness, in which exogenous attention can occur without cue awareness ("dissociation"), yet at the same time its effect is contingent on conscious motion tracking ("association"); and (c) underscore the interaction of conscious and unconscious processing, providing evidence for an unconscious effect that is not automatic but controlled.

  14. Robotically assisted ureteroscopy for kidney exploration.

    PubMed

    Talari, Hadi F; Monfaredi, Reza; Wilson, Emmanuel; Blum, Emily; Bayne, Christopher; Peters, Craig; Zhang, Anlin; Cleary, Kevin

    2017-02-01

    Ureteroscopy is a minimally invasive procedure for diagnosis and treatment of a wide range of urinary tract pathologies. It is most commonly performed in the diagnostic work-up of hematuria and the diagnosis and treatment of upper urinary tract malignancies and calculi. Ergonomic and visualization challenges as well as radiation exposure are limitations to conventional ureteroscopy. For example, for diagnostic tumor inspection, the urologist has to maneuver the ureteroscope through each of the 6 to 12 calyces in the kidney under fluoroscopy to ensure complete surveillance. Therefore, we have been developing a robotic system to "power drive" a flexible fiber-optic ureteroscope with 3D tip tracking and pre-operative image overlay. Our goal is to provide the urologist precise control of the ureteroscope tip with less radiation exposure. Our prototype system allows control of the three degrees of freedom of the ureteroscope via brushless motors and a joystick interface. The robot provides a steady platform for controlling the ureteroscope. Furthermore, the robot design facilitates a quick "snap-in" of the ureteroscope, thus allowing the ureteroscope to be mounted midway through the procedure. We have completed the mechanical system and the controlling software and begun evaluation using a kidney phantom. We put MRI-compatible fiducials on the phantom and obtained MR images. We registered these images with the robot using an electromagnetic tracking system and paired-point registration. The system is described and initial evaluation results are given in this paper.

  15. NeuroManager: a workflow analysis based simulation management engine for computational neuroscience

    PubMed Central

    Stockton, David B.; Santamaria, Fidel

    2015-01-01

    We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to super computer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college Biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175

  16. NeuroManager: a workflow analysis based simulation management engine for computational neuroscience.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2015-01-01

    We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to super computer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college Biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project.

  17. Tracking single particle rotation: Probing dynamics in four dimensions

    DOE PAGES

    Anthony, Stephen Michael; Yu, Yan

    2015-04-29

    Direct visualization and tracking of small particles at high spatial and temporal resolution provides a powerful approach to probing complex dynamics and interactions in chemical and biological processes. Analysis of the rotational dynamics of particles adds a new dimension of information that is otherwise impossible to obtain with conventional 3-D particle tracking. In this review, we survey recent advances in single-particle rotational tracking, with highlights on the rotational tracking of optically anisotropic Janus particles. Furthermore, the strengths and weaknesses of the various particle-tracking methods and their applications are discussed.

  18. Visualization of frequency-modulated electric field based on photonic frequency tracking in asynchronous electro-optic measurement system

    NASA Astrophysics Data System (ADS)

    Hisatake, Shintaro; Yamaguchi, Koki; Uchida, Hirohisa; Tojyo, Makoto; Oikawa, Yoichi; Miyaji, Kunio; Nagatsuma, Tadao

    2018-04-01

    We propose a new asynchronous measurement system to visualize the amplitude and phase distribution of a frequency-modulated electromagnetic wave. The system consists of three parts: a nonpolarimetric electro-optic frequency down-conversion part, a phase-noise-canceling part, and a frequency-tracking part. The photonic local oscillator signal generated by electro-optic phase modulation is controlled to track the frequency of the radio frequency (RF) signal to significantly enhance the measurable RF bandwidth. We demonstrate amplitude and phase measurement of a quasi-millimeter-wave frequency-modulated continuous-wave signal (24 GHz ± 80 MHz with a 2.5 ms period) as a proof-of-concept experiment.

  19. Map display design

    NASA Technical Reports Server (NTRS)

    Aretz, Anthony J.

    1990-01-01

    This paper presents a cognitive model of a pilot's navigation task and describes an experiment comparing a visual momentum map display to the traditional track-up and north-up approaches. The data show that the advantage of a track-up map is its congruence with the ego-centered forward view; however, the development of survey knowledge is hindered by the inconsistency of the rotating display. The stable alignment of a north-up map aids the acquisition of survey knowledge, but there is a cost associated with the mental rotation of the display to a track-up alignment for ego-centered tasks. The results also show that visual momentum can be used to reduce the mental rotation costs of a north-up display.

  20. Robot tracking system improvements and visual calibration of orbiter position for radiator inspection

    NASA Technical Reports Server (NTRS)

    Tonkay, Gregory

    1990-01-01

    The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.

  1. Tracking change over time: River flooding

    USGS Publications Warehouse


    2014-01-01

    The objective of the Tracking Change Over Time lesson plan is to get students excited about studying the changing Earth. Intended for students in grades 5-8, the lesson plan is flexible and may be used as a student self-guided tutorial or as a teacher-led class lesson. Enhance students' learning of geography, map reading, earth science, and problem solving by seeing landscape changes from space.

  2. Wearable and flexible electronics for continuous molecular monitoring.

    PubMed

    Yang, Yiran; Gao, Wei

    2018-04-03

    Wearable biosensors have received tremendous attention over the past decade owing to their great potential in predictive analytics and treatment toward personalized medicine. Flexible electronics could serve as an ideal platform for personalized wearable devices because of their unique properties such as light weight, low cost, high flexibility and great conformability. Unlike most reported flexible sensors that mainly track physical activities and vital signs, the new generation of wearable and flexible chemical sensors enables real-time, continuous and fast detection of accessible biomarkers from the human body, and allows for the collection of large-scale information about the individual's dynamic health status at the molecular level. In this article, we review and highlight recent advances in wearable and flexible sensors toward continuous and non-invasive molecular analysis in sweat, tears, saliva, interstitial fluid, blood, wound exudate as well as exhaled breath. The flexible platforms, sensing mechanisms, and device and system configurations employed for continuous monitoring are summarized. We also discuss the key challenges and opportunities of the wearable and flexible chemical sensors that lie ahead.

  3. Nonlinear dynamic model for visual object tracking on Grassmann manifolds with partial occlusion handling.

    PubMed

    Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua

    2013-12-01

    This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits tracking performance. The proposed method tackles these problems with four main novelties: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, that are realized by employing two particle filters: one on the manifold for generating appearance particles and the other on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios in which target objects undergo significant nonplanar pose changes and long-term partial occlusions. Comparative evaluations against eight existing state-of-the-art and closely related manifold/nonmanifold trackers provide further support for the proposed scheme.
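
    The core geometric ingredient of such manifold trackers is a distance between appearance subspaces, which lets the filter compare appearance particles on the Grassmann manifold. A minimal sketch of the standard geodesic distance via principal angles (not the paper's full tracker, only an illustrative building block):

    ```python
    import numpy as np

    def grassmann_distance(A, B):
        """Geodesic distance between span(A) and span(B) on the Grassmann
        manifold, computed from principal angles. A and B are n x k
        matrices with orthonormal columns (orthonormal subspace bases)."""
        s = np.linalg.svd(A.T @ B, compute_uv=False)
        # Singular values are cosines of the principal angles; clip for safety.
        theta = np.arccos(np.clip(s, -1.0, 1.0))
        return float(np.linalg.norm(theta))
    ```

    Identical subspaces give distance 0, and orthogonal one-dimensional subspaces give pi/2; a tracker can weight appearance particles by such a distance to the current appearance estimate.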

  4. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    ERIC Educational Resources Information Center

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  5. A Unique Testing System for Audio Visual Foreign Language Laboratory.

    ERIC Educational Resources Information Center

    Stama, Spelios T.

    1980-01-01

    Described is the design of a low maintenance, foreign language laboratory at Ithaca College, New York, that provides visual and audio instruction, flexibility for testing, and greater student involvement in the lessons. (Author/CS)

  6. Visual Environment for Rich Data Interpretation (VERDI) program for environmental modeling systems

    EPA Pesticide Factsheets

    VERDI is a flexible, modular, Java-based program used for visualizing multivariate gridded meteorology, emissions and air quality modeling data created by environmental modeling systems such as the CMAQ model and WRF.

  7. 78 FR 16051 - Vehicle/Track Interaction Safety Standards; High-Speed and High Cant Deficiency Operations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-13

    ...FRA is amending the Track Safety Standards and Passenger Equipment Safety Standards to promote the safe interaction of rail vehicles with the track over which they operate under a variety of conditions at speeds up to 220 m.p.h. The final rule revises standards for track geometry and safety limits for vehicle response to track conditions, enhances vehicle/track qualification procedures, and adds flexibility for permitting high cant deficiency train operations through curves at conventional speeds. The rule accounts for a range of vehicle types that are currently in operation, as well as vehicle types that may likely be used in future high-speed or high cant deficiency rail operations, or both. The rule is based on the results of simulation studies designed to identify track geometry irregularities associated with unsafe wheel/rail forces and accelerations, thorough reviews of vehicle qualification and revenue service test data, and consideration of international practices.

  8. Shifting Visual Perspective During Retrieval Shapes Autobiographical Memories

    PubMed Central

    St Jacques, Peggy L.; Szpunar, Karl K.; Schacter, Daniel L.

    2016-01-01

    The dynamic and flexible nature of memories is evident in our ability to adopt multiple visual perspectives. Although autobiographical memories are typically encoded from the visual perspective of our own eyes they can be retrieved from the perspective of an observer looking at our self. Here, we examined the neural mechanisms of shifting visual perspective during long-term memory retrieval and its influence on online and subsequent memories using functional magnetic resonance imaging (fMRI). Participants generated specific autobiographical memories from the last five years and rated their visual perspective. In a separate fMRI session, they were asked to retrieve the memories across three repetitions while maintaining the same visual perspective as their initial rating or by shifting to an alternative perspective. Visual perspective shifting during autobiographical memory retrieval was supported by a linear decrease in neural recruitment across repetitions in the posterior parietal cortices. Additional analyses revealed that the precuneus, in particular, contributed to both online and subsequent changes in the phenomenology of memories. Our findings show that flexibly shifting egocentric perspective during autobiographical memory retrieval is supported by the precuneus, and suggest that this manipulation of mental imagery during retrieval has consequences for how memories are retrieved and later remembered. PMID:27989780

  9. Flexible robotics with electromagnetic tracking improves safety and efficiency during in vitro endovascular navigation.

    PubMed

    Schwein, Adeline; Kramer, Ben; Chinnadurai, Ponraj; Walker, Sean; O'Malley, Marcia; Lumsden, Alan; Bismuth, Jean

    2017-02-01

    One limitation of the use of robotic catheters is the lack of real-time three-dimensional (3D) localization and position updating: they are still navigated based on two-dimensional (2D) X-ray fluoroscopic projection images. Our goal was to evaluate whether incorporating an electromagnetic (EM) sensor on a robotic catheter tip could improve endovascular navigation. Six users were tasked to navigate using a robotic catheter with incorporated EM sensors in an aortic aneurysm phantom. All users cannulated two anatomic targets (left renal artery and posterior "gate") using four visualization modes: (1) standard fluoroscopy mode (control), (2) 2D fluoroscopy mode showing real-time virtual catheter orientation from EM tracking, (3) 3D model of the phantom with anteroposterior and endoluminal view, and (4) 3D model with anteroposterior and lateral view. Standard X-ray fluoroscopy was always available. Cannulation and fluoroscopy times were noted for every mode. 3D positions of the EM tip sensor were recorded at 4 Hz to establish kinematic metrics. The EM sensor-incorporated catheter navigated as expected according to all users. The success rate for cannulation was 100%. For the posterior gate target, mean cannulation times in minutes:seconds were 8:12, 4:19, 4:29, and 3:09, respectively, for modes 1, 2, 3 and 4 (P = .013), and mean fluoroscopy times were 274, 20, 29, and 2 seconds, respectively (P = .001). 3D path lengths, spectral arc length, root mean dimensionless jerk, and number of submovements were significantly improved when EM tracking was used (P < .05), showing higher quality of catheter movement with EM navigation. The EM tracked robotic catheter allowed better real-time 3D orientation, facilitating navigation, with a reduction in cannulation and fluoroscopy times and improvement of motion consistency and efficiency. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
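
    The kinematic metrics reported above (3D path length, jerk-based smoothness) follow directly from the logged tip positions. A minimal sketch, assuming positions arrive as an (N, 3) array sampled at 4 Hz; the function names and the exact jerk normalization are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def path_length(pos):
        """Total 3D path length from an (N, 3) array of tip positions."""
        return float(np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1)))

    def dimensionless_jerk(pos, fs=4.0):
        """Dimensionless jerk cost: lower values indicate smoother motion."""
        dt = 1.0 / fs
        vel = np.gradient(pos, dt, axis=0)          # numerical velocity
        acc = np.gradient(vel, dt, axis=0)          # numerical acceleration
        jerk = np.gradient(acc, dt, axis=0)         # numerical jerk
        duration = (len(pos) - 1) * dt
        peak_speed = np.max(np.linalg.norm(vel, axis=1))
        # Integrate squared jerk and normalize by duration and peak speed.
        return float((duration ** 3 / peak_speed ** 2) * np.sum(jerk ** 2) * dt)
    ```

    A perfectly straight constant-speed path yields zero jerk cost, so the metric isolates tremor and corrective submovements rather than distance covered.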

  10. Using Eye Tracking to Explore Consumers' Visual Behavior According to Their Shopping Motivation in Mobile Environments.

    PubMed

    Hwang, Yoon Min; Lee, Kun Chang

    2017-07-01

    Despite a strong shift to mobile shopping trends, many in-depth questions about mobile shoppers' visual behaviors in mobile shopping environments remain unaddressed. This study aims to answer two challenging research questions (RQs): (a) how much does shopping motivation like goal orientation and recreation influence mobile shoppers' visual behavior toward displays of shopping information on a mobile shopping screen and (b) how much of mobile shoppers' visual behavior influences their purchase intention for the products displayed on a mobile shopping screen? An eye-tracking approach is adopted to answer the RQs empirically. The experimental results showed that goal-oriented shoppers paid closer attention to products' information areas to meet their shopping goals. Their purchase intention was positively influenced by their visual attention to the two areas of interest such as product information and consumer opinions. In contrast, recreational shoppers tended to visually fixate on the promotion area, which positively influences their purchase intention. The results contribute to understanding mobile shoppers' visual behaviors and shopping intentions from the perspective of mindset theory.

  11. Your Child's Vision

    MedlinePlus

    ... 3½, kids should have eye health screenings and visual acuity tests (tests that measure sharpness of vision) ... eye rubbing extreme light sensitivity poor focusing poor visual tracking (following an object) abnormal alignment or movement ...

  12. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)

    1996-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.

  13. Visual Tracking Based on Extreme Learning Machine and Sparse Representation

    PubMed Central

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-01-01

    The existing sparse representation-based visual trackers mostly suffer from being both time consuming and insufficiently robust. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. First, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background content, reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resulting confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold learning constraint term in the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be evaluated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
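
    The ELM stage described above trains a random-hidden-layer classifier in closed form, which makes it cheap enough to prune background candidates before the costlier sparse coding. A minimal sketch of such a classifier (layer sizes, the tanh activation, and the pruning threshold are illustrative assumptions, not the paper's exact configuration):

    ```python
    import numpy as np

    class ELMClassifier:
        """Extreme learning machine: random hidden layer, least-squares output."""

        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            """X: (n, d) features; y: (n,) labels in {-1, +1}."""
            n_features = X.shape[1]
            # Hidden-layer weights are random and never trained.
            self.W = self.rng.standard_normal((n_features, self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)
            # Output weights solved in closed form via the pseudo-inverse.
            self.beta = np.linalg.pinv(H) @ y
            return self

        def decision(self, X):
            """Confidence scores; keep only candidates above a threshold."""
            return np.tanh(X @ self.W + self.b) @ self.beta
    ```

    In a tracker, only candidate patches whose decision score exceeds a chosen threshold would be passed on to the sparse representation step.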

  14. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard (Inventor)

    1994-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.

  15. Real-time tracking of liver motion and deformation using a flexible needle

    PubMed Central

    Lei, Peng; Moeslein, Fred; Wood, Bradford J.

    2012-01-01

    Purpose A real-time 3D image guidance system is needed to facilitate treatment of liver masses using radiofrequency ablation, for example. This study investigates the feasibility and accuracy of using an electromagnetically tracked flexible needle inserted into the liver to track liver motion and deformation. Methods This proof-of-principle study was conducted both ex vivo and in vivo with a CT scanner taking the place of an electromagnetic tracking system as the spatial tracker. Deformations of excised livers were artificially created by altering the shape of the stage on which the excised livers rested. Free breathing or controlled ventilation created deformations of live swine livers. The positions of the needle and test targets were determined through CT scans. The shape of the needle was reconstructed using data simulating multiple embedded electromagnetic sensors. Displacement of liver tissues in the vicinity of the needle was derived from the change in the reconstructed shape of the needle. Results The needle shape was successfully reconstructed with tracking information of two on-needle points. Within 30 mm of the needle, the registration error of implanted test targets was 2.4 ± 1.0 mm ex vivo and 2.8 ± 1.5 mm in vivo. Conclusion A practical approach was developed to measure the motion and deformation of the liver in real time within a region of interest. The approach relies on redesigning the often-used seeker needle to include embedded electromagnetic tracking sensors. With the nonrigid motion and deformation information of the tracked needle, a single- or multimodality 3D image of the intraprocedural liver, now clinically obtained with some delay, can be updated continuously to monitor intraprocedural changes in hepatic anatomy. This capability may be useful in radiofrequency ablation and other percutaneous ablative procedures. PMID:20700662
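
    Reconstructing the needle shape from only two tracked on-needle points, as described above, requires orientation as well as position at each sensor. A cubic Hermite curve through two position/tangent pairs is one simple way to sketch this (the interpolation scheme here is an illustrative assumption, not necessarily the method used in the paper):

    ```python
    import numpy as np

    def hermite_curve(p0, t0, p1, t1, n=50):
        """Cubic Hermite curve through 3D points p0, p1 with tangents t0, t1.

        Returns an (n, 3) array of points approximating the needle shape
        between the two embedded sensors."""
        s = np.linspace(0.0, 1.0, n)[:, None]
        # Standard Hermite basis polynomials.
        h00 = 2 * s**3 - 3 * s**2 + 1
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1
    ```

    Displacements of liver tissue near the needle can then be read off as the change in this reconstructed curve between time steps.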

  16. Robust visual tracking based on deep convolutional neural networks and kernelized correlation filters

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Zhong, Donghong; Liu, Chenyi; Song, Kaiyou; Yin, Zhouping

    2018-03-01

    Object tracking is still a challenging problem in computer vision, as it entails learning an effective model to account for appearance changes caused by occlusion, out-of-view motion, plane rotation, scale change, and background clutter. This paper proposes a robust visual tracking algorithm called deep convolutional neural network correlation tracking (DCNNCT) to simultaneously address these challenges. The proposed DCNNCT algorithm utilizes a DCNN to extract image features of the tracked target, and the full range of information from each convolutional layer is used to express the image feature. Subsequently, kernelized correlation filters (CFs) in each convolutional layer are adaptively learned, and their correlation response maps are combined to estimate the location of the tracked target. To handle tracking failure, an online random ferns classifier is employed to redetect the tracked target, and a dual-threshold scheme obtains the final target location by comparing the tracking result with the detection result. Finally, the change in target scale is determined by building scale pyramids and training a CF. Extensive experiments demonstrate that the proposed algorithm is effective at tracking, especially when evaluated using an index called the overlap rate. The DCNNCT algorithm is also highly competitive in terms of robustness with respect to state-of-the-art trackers in various challenging scenarios.
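
    The correlation-filter stage described above learns its filter in the Fourier domain, where ridge regression reduces to elementwise operations, and localizes the target at the peak of the correlation response. A minimal single-channel linear sketch (a simplification of the kernelized, multi-layer version in the paper; the regularization value is an illustrative assumption):

    ```python
    import numpy as np

    def train_filter(patch, target_response, lam=1e-2):
        """Learn a correlation filter by ridge regression in the Fourier domain.

        patch: 2D training image patch; target_response: desired 2D response
        (typically a Gaussian peaked at the target position)."""
        F = np.fft.fft2(patch)
        G = np.fft.fft2(target_response)
        # Elementwise closed-form solution; lam regularizes weak frequencies.
        return (G * np.conj(F)) / (F * np.conj(F) + lam)

    def detect(patch, H):
        """Correlate a new patch with the filter; the peak gives the shift."""
        response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
        return np.unravel_index(np.argmax(response), response.shape)
    ```

    Because the correlation is circular, translating the patch translates the response peak by the same amount, which is what makes frame-to-frame localization a single FFT round trip.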

  17. Visual attention on a respiratory function monitor during simulated neonatal resuscitation: an eye-tracking study.

    PubMed

    Katz, Trixie A; Weinberg, Danielle D; Fishman, Claire E; Nadkarni, Vinay; Tremoulet, Patrice; Te Pas, Arjan B; Sarcevic, Aleksandra; Foglia, Elizabeth E

    2018-06-14

    A respiratory function monitor (RFM) may improve positive pressure ventilation (PPV) technique, but many providers do not use RFM data appropriately during delivery room resuscitation. We sought to use eye-tracking technology to identify RFM parameters that neonatal providers view most commonly during simulated PPV. Mixed methods study. Neonatal providers performed RFM-guided PPV on a neonatal manikin while wearing eye-tracking glasses to quantify visual attention on displayed RFM parameters (ie, exhaled tidal volume, flow, leak). Participants subsequently provided qualitative feedback on the eye-tracking glasses. Level 3 academic neonatal intensive care unit. Twenty neonatal resuscitation providers. Visual attention: overall gaze sample percentage; total gaze duration, visit count and average visit duration for each displayed RFM parameter. Qualitative feedback: willingness to wear eye-tracking glasses during clinical resuscitation. Twenty providers participated in this study. The mean gaze sample captured was 93% (SD 4%). Exhaled tidal volume waveform was the RFM parameter with the highest total gaze duration (median 23%, IQR 13-51%), highest visit count (median 5.17 per 10 s, IQR 2.82-6.16) and longest visit duration (median 0.48 s, IQR 0.38-0.81 s). All participants were willing to wear the glasses during clinical resuscitation. Wearable eye-tracking technology is feasible to identify gaze fixation on the RFM display and is well accepted by providers. Neonatal providers look at exhaled tidal volume more than any other RFM parameter. Future applications of eye-tracking technology include use during clinical resuscitation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
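
    The per-parameter metrics above (total gaze duration, visit count) follow directly from a stream of gaze samples labeled with the area of interest (AOI) each sample falls in. A minimal sketch, assuming one AOI label per sample at a fixed sampling rate (the rate and label names are illustrative assumptions):

    ```python
    from itertools import groupby

    def aoi_metrics(labels, fs=50.0):
        """Total dwell time (seconds) and visit count per AOI.

        labels: per-sample AOI names, with None for samples off every AOI.
        A "visit" is a maximal run of consecutive samples on the same AOI."""
        dt = 1.0 / fs
        metrics = {}
        for aoi, run in groupby(labels):
            n = sum(1 for _ in run)
            if aoi is None:
                continue
            dur, visits = metrics.get(aoi, (0.0, 0))
            metrics[aoi] = (dur + n * dt, visits + 1)
        return metrics
    ```

    Average visit duration, also reported above, is then just total dwell time divided by visit count for each AOI.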

  18. Automatic Orientation of Large Blocks of Oblique Images

    NASA Astrophysics Data System (ADS)

    Rupnik, E.; Nex, F.; Remondino, F.

    2013-05-01

    Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to their advantages such as ease of interpretation, completeness through mitigation of occluded areas, as well as system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. The configuration of the cameras poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements are inevitable; for large image blocks this is certainly an impediment. In the theoretical part of the work we review three common least squares adjustment methods and recap possible ways to orient a multi-camera system. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon EOS 1D Mark III) with different focal lengths. The oblique cameras are rotated in the four looking directions (forward, backward, left and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a tool developed to analyse image connectivity, and Apero to orient the image block. The benefits of the connectivity tool are twofold: computational time and success of the bundle block adjustment. It exploits the georeferenced information provided by the Applanix system to constrain feature point extraction to relevant images only, and guides the concatenation of images during the relative orientation. Ultimately an absolute transformation is performed, resulting in mean re-projection residuals equal to 0.6 pix.
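
    The connectivity analysis described above can be sketched as building a graph over images from their georeferenced positions, so that feature matching runs only on plausibly overlapping pairs instead of all O(n²) combinations. A minimal sketch (the distance threshold and the image-record format are illustrative assumptions, not the tool's actual interface):

    ```python
    from itertools import combinations

    def connectivity_pairs(images, max_dist=60.0):
        """Return image pairs whose projection centres lie within max_dist.

        images: list of (name, (x, y)) tuples, with planimetric coordinates
        taken from the GNSS/INS log. Only the returned pairs would be fed
        to feature extraction and matching."""
        pairs = []
        for (na, pa), (nb, pb) in combinations(images, 2):
            dist = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
            if dist <= max_dist:
                pairs.append((na, nb))
        return pairs
    ```

    A fuller version would also use the known 45° look directions to decide which oblique views can share tie points, but the distance filter alone already prunes most non-overlapping pairs in a large block.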

  19. c-Mantic: A Cytoscape plugin for Semantic Web

    EPA Science Inventory

    Semantic Web tools can streamline the process of storing, analyzing and sharing biological information. Visualization is important for communicating such complex biological relationships. Here we use the flexibility and speed of the Cytoscape platform to interactively visualize s...

  20. Transformations in the Recognition of Visual Forms

    ERIC Educational Resources Information Center

    Charness, Neil; Bregman, Albert S.

    1973-01-01

    In a study which required college students to learn to recognize four flexible plastic shapes photographed on different backgrounds from different angles, the importance of a context-rich environment for the learning and recognition of visual patterns was illustrated. (Author)
