Sample records for active vision systems

  1. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which require a large number of images with varied projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part applies an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of robot configurations and environments. The performance of the sensor system is discussed in detail.
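The pixel-by-pixel matching step between laser patterns can be sketched as a generic dynamic-programming scanline alignment. This is an illustrative reconstruction only, not the authors' algorithm: their version anchors the search on fused laser-pattern correspondences, which this sketch omits, and the `occlusion_cost` parameter is an assumption.

```python
import numpy as np

def dp_scanline_match(left, right, occlusion_cost=2.0):
    """Match two 1-D intensity scanlines with dynamic programming.

    Returns a list of (i, j) index pairs of matched pixels; pixels left
    unmatched are treated as occluded at a fixed cost.
    """
    n, m = len(left), len(right)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, :] = np.arange(m + 1) * occlusion_cost
    cost[:, 0] = np.arange(n + 1) * occlusion_cost
    move = np.zeros((n + 1, m + 1), dtype=int)  # 0=match, 1=skip left, 2=skip right
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            choices = (
                cost[i - 1, j - 1] + abs(left[i - 1] - right[j - 1]),  # match
                cost[i - 1, j] + occlusion_cost,                       # occlude left pixel
                cost[i, j - 1] + occlusion_cost,                       # occlude right pixel
            )
            move[i, j] = int(np.argmin(choices))
            cost[i, j] = choices[move[i, j]]
    # Backtrack from the end to recover the matched pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if move[i, j] == 0:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif move[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

For a scanline pair offset by one pixel, the alignment skips one pixel at each end and matches the shared profile, which is the behavior a disparity estimator needs.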

  2. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

The vision system employed by an intelligent robot must be active: active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than that of the biological visual pathway, it does retain some essential features, such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, where each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
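The converging multilayered structure can be illustrated with a plain image pyramid, where each upper-level cell pools a neighborhood of inputs from the level below. This is a minimal sketch using 2x2 block averaging as a stand-in for the nonlinear parallel processors the paper describes; the pooling operator is an assumption.

```python
import numpy as np

def build_pyramid(image, levels):
    """Build a converging multilayer (pyramidal) representation by
    repeated 2x2 block averaging.  pyramid[0] is the input layer;
    each higher level halves the resolution."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        # Average disjoint 2x2 blocks: each upper-level cell pools four inputs.
        pooled = prev[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
        pyramid.append(pooled)
    return pyramid
```

A 4x4 input thus converges to a 2x2 layer and finally a single cell summarizing the whole field of view, mirroring the funnel-shaped flow of visual information the architecture exploits.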

  3. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  4. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  5. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  6. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, and they can adapt to their environment and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories, which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.

  7. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system of this highly intelligent humanoid localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.

  8. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  9. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
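Once the mirror switching yields a rectified pair of virtual left/right views, 3D measurement reduces to classic stereo triangulation. The sketch below is a generic pinhole-model illustration; the focal length, baseline, and pixel coordinates are made-up values, not the paper's calibration.

```python
import numpy as np

def triangulate(focal_px, baseline_m, xl, xr, y):
    """Recover a 3D point (X, Y, Z) in the left-view frame from a
    rectified stereo pair.  xl and xr are the horizontal pixel
    coordinates of the target in the left/right views (relative to the
    principal point); y is its common vertical coordinate.  For the
    catadioptric system, the baseline is the separation between the
    two virtual pan-tilt viewpoints created by the mirrors."""
    d = xl - xr                        # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    Z = focal_px * baseline_m / d      # depth from Z = f*b/d
    X = xl * Z / focal_px
    Y = y * Z / focal_px
    return np.array([X, Y, Z])
```

The abstract's point about "sufficient parallax" corresponds to keeping the disparity d large relative to pixel noise, since depth error grows as Z squared over f*b.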

  10. Technical Challenges in the Development of a NASA Synthetic Vision System Concept

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III

    2002-01-01

Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database driven by precise aircraft positioning information updated via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low-visibility conditions as a causal factor in civil aircraft accidents, as well as replicating the operational benefits of clear-day flight operations regardless of the actual outside visibility conditions. Synthetic vision research and development activities at NASA Langley Research Center are focused on a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology, which can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered and that are anticipated in this research and development activity are summarized.

  11. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  12. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
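The camera-to-target management problem described above can be illustrated with a greedy nearest-camera assignment. This is a toy stand-in, not ACT-Vision's actual logic, which also weighs occlusion and image quality; the `max_range` visibility criterion and 2D point model are assumptions.

```python
import math

def assign_cameras(cameras, targets, max_range):
    """Greedily assign PTZ cameras to targets by distance.

    cameras and targets are (x, y) positions; each camera tracks at most
    one target, and a camera can only take a target within max_range.
    Returns a dict {target_index: camera_index}; targets left out of the
    dict have no camera in range, signalling a coverage gap."""
    pairs = sorted(
        (math.dist(c, t), ci, ti)
        for ci, c in enumerate(cameras)
        for ti, t in enumerate(targets)
        if math.dist(c, t) <= max_range
    )
    assignment, used = {}, set()
    for _, ci, ti in pairs:             # cheapest feasible pairs first
        if ci not in used and ti not in assignment:
            assignment[ti] = ci
            used.add(ci)
    return assignment
```

Re-running the assignment as targets move gives a crude form of the handoff behavior the system needs to keep tracking uninterrupted with a minimal number of sensors.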

  13. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  14. Robotic space simulation integration of vision algorithms into an orbital operations simulation

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1987-01-01

    In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.

  15. The influence of active vision on the exoskeleton of intelligent agents

    NASA Astrophysics Data System (ADS)

    Smith, Patrice; Terry, Theodore B.

    2016-04-01

Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. Intelligent agents' ability to adapt to their environment and exhibit key survivability characteristics would be due in large part to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that changes based on the surface it is perched on; this is known as the "chameleon effect," not in the common sense of the term, but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color sensing functionality, would enable the intelligent agent to scan an object within close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account the spatial and temporal correlation and spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.

  16. Vision

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.

    1973-01-01

    Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.

  17. The Adaptive Optics Summer School Laboratory Activities

    NASA Astrophysics Data System (ADS)

    Ammons, S. M.; Severson, S.; Armstrong, J. D.; Crossfield, I.; Do, T.; Fitzgerald, M.; Harrington, D.; Hickenbotham, A.; Hunter, J.; Johnson, J.; Johnson, L.; Li, K.; Lu, J.; Maness, H.; Morzinski, K.; Norton, A.; Putnam, N.; Roorda, A.; Rossi, E.; Yelda, S.

    2010-12-01

    Adaptive Optics (AO) is a new and rapidly expanding field of instrumentation, yet astronomers, vision scientists, and general AO practitioners are largely unfamiliar with the root technologies crucial to AO systems. The AO Summer School (AOSS), sponsored by the Center for Adaptive Optics, is a week-long course for training graduate students and postdoctoral researchers in the underlying theory, design, and use of AO systems. AOSS participants include astronomers who expect to utilize AO data, vision scientists who will use AO instruments to conduct research, opticians and engineers who design AO systems, and users of high-bandwidth laser communication systems. In this article we describe new AOSS laboratory sessions implemented in 2006-2009 for nearly 250 students. The activity goals include boosting familiarity with AO technologies, reinforcing knowledge of optical alignment techniques and the design of optical systems, and encouraging inquiry into critical scientific questions in vision science using AO systems as a research tool. The activities are divided into three stations: Vision Science, Fourier Optics, and the AO Demonstrator. We briefly overview these activities, which are described fully in other articles in these conference proceedings (Putnam et al., Do et al., and Harrington et al., respectively). We devote attention to the unique challenges encountered in the design of these activities, including the marriage of inquiry-like investigation techniques with complex content and the need to tune depth to a graduate- and PhD-level audience. According to before-after surveys conducted in 2008, the vast majority of participants found that all activities were valuable to their careers, although direct experience with integrated, functional AO systems was particularly beneficial.

  18. The Employment Effects of High-Technology: A Case Study of Machine Vision. Research Report No. 86-19.

    ERIC Educational Resources Information Center

    Chen, Kan; Stafford, Frank P.

    A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…

  19. Improving Federal Education Programs through an Integrated Performance and Benchmarking System.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Office of the Under Secretary.

    This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…

  20. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises a reduction in care costs, especially in training and hiring human caregivers. The main problem, however, is that the sensing agents used in such systems vary widely and depend on the intent (the type of ADL) and the environment where the activity is performed. In this paper we present an overview of the potential of computer vision based sensing agents in assistive systems and of how they can be generalized and made invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision based human action recognition methods and the design of such systems, due to the cognitive and physical impairments of people with dementia.

  1. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

In this paper, we propose a design of an active vision system for intelligent robot applications. The system has degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function representing human visual responses to outside stimuli, is suggested. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method operating on binarized images is presented to extract the vergence disparity. A control algorithm for vergence is also discussed.
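The idea behind phase-based disparity extraction can be sketched for a single image row: a horizontal shift between the two views shows up as a linear phase ramp in the cross-power spectrum. This is a generic illustration in the spirit of the method, not the authors' exact algorithm (which operates on binarized images); the choice of frequency band is an assumption.

```python
import numpy as np

def phase_disparity(left, right):
    """Estimate the horizontal shift between two 1-D image rows from the
    slope of the cross-power-spectrum phase.  Positive result means
    `right` is `left` shifted rightward."""
    n = len(left)
    cross = np.fft.fft(left) * np.conj(np.fft.fft(right))
    k = np.arange(1, n // 8)                  # low-frequency bins: least phase wrapping
    w = 2 * np.pi * np.fft.fftfreq(n)[k]
    phase = np.angle(cross[k])                # equals w * shift for a pure translation
    return np.sum(phase * w) / np.sum(w ** 2) # least-squares slope estimate
```

In a vergence loop, the estimated shift of the fixated region would drive the vergence angle toward zero disparity at the fixation point.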

  2. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and at the same time improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast & accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.

  3. Breaking BAD: A Data Serving Vision for Big Active Data

    PubMed Central

    Carey, Michael J.; Jacobs, Steven; Tsotras, Vassilis J.

    2017-01-01

    Virtually all of today’s Big Data systems are passive in nature. Here we describe a project to shift Big Data platforms from passive to active. We detail a vision for a scalable system that can continuously and reliably capture Big Data to enable timely and automatic delivery of new information to a large pool of interested users as well as supporting analyses of historical information. We are currently building a Big Active Data (BAD) system by extending an existing scalable open-source BDMS (AsterixDB) in this active direction. This first paper zooms in on the Data Serving piece of the BAD puzzle, including its key concepts and user model. PMID:29034377

  4. Assessing contextual factors that influence acceptance of pedestrian alerts by a night vision system.

    PubMed

    Källhammer, Jan-Erik; Smith, Kip

    2012-08-01

    We investigated five contextual variables that we hypothesized would influence driver acceptance of alerts to pedestrians issued by a night vision active safety system to inform the specification of the system's alerting strategies. Driver acceptance of automotive active safety systems is a key factor to promote their use and implies a need to assess factors influencing driver acceptance. In a field operational test, 10 drivers drove instrumented vehicles equipped with a preproduction night vision system with pedestrian detection software. In a follow-up experiment, the 10 drivers and 25 additional volunteers without experience with the system watched 57 clips with pedestrian encounters gathered during the field operational test. They rated the acceptance of an alert to each pedestrian encounter. Levels of rating concordance were significant between drivers who experienced the encounters and participants who did not. Two contextual variables, pedestrian location and motion, were found to influence ratings. Alerts were more accepted when pedestrians were close to or moving toward the vehicle's path. The study demonstrates the utility of using subjective driver acceptance ratings to inform the design of active safety systems and to leverage expensive field operational test data within the confines of the laboratory. The design of alerting strategies for active safety systems needs to heed the driver's contextual sensitivity to issued alerts.

  5. The Use of a Tactile-Vision Sensory Substitution System as an Augmentative Tool for Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Williams, Michael D.; Ray, Christopher T.; Griffith, Jennifer; De l'Aune, William

    2011-01-01

    The promise of novel technological strategies and solutions to assist persons with visual impairments (that is, those who are blind or have low vision) is frequently discussed and held to be widely beneficial in countless applications and daily activities. One such approach involving a tactile-vision sensory substitution modality as a mechanism to…

  6. Operational Assessment of Color Vision

    DTIC Science & Technology

    2016-06-20

evaluated in this study. SUBJECT TERMS: color vision, aviation, cone contrast test, Colour Assessment & Diagnosis, color Dx, OBVA ... symbologies are frequently used to aid or direct critical activities such as aircraft landing approaches or railroad right-of-way designations ... computer-generated display systems have facilitated the development of computer-based, automated tests of color vision [14,15]. The United Kingdom's

  7. Effectiveness of portable electronic and optical magnifiers for near vision activities in low vision: a randomised crossover trial.

    PubMed

    Taylor, John J; Bambrick, Rachel; Brand, Andrew; Bray, Nathan; Dutton, Michelle; Harper, Robert A; Hoare, Zoe; Ryan, Barbara; Edwards, Rhiannon T; Waterman, Heather; Dickinson, Christine

    2017-07-01

To compare the performance of near vision activities using additional portable electronic vision enhancement systems (p-EVES) with using optical magnifiers alone, by individuals with visual impairment. A total of 100 experienced optical aid users were recruited from low vision clinics at Manchester Royal Eye Hospital, Manchester, UK, to a prospective two-arm crossover randomised controlled trial. Reading, performance of near vision activities, and device usage were evaluated at baseline and at the end of each study arm (Intervention A: existing optical aids plus p-EVES; Intervention B: optical aids only), that is, after 2 and 4 months. A total of 82 participants completed the study. Overall, maximum reading speed for high contrast sentences was not statistically significantly different for optical aids and p-EVES, although the critical print size and threshold print size which could be accessed with p-EVES were statistically significantly smaller (p < 0.001 in both cases). The optical aids were used for a larger number of tasks (p < 0.001) and used more frequently (p < 0.001). However, p-EVES were preferred for leisure reading by 70% of participants and allowed a longer duration of reading (p < 0.001). During the study arm in which they had a p-EVES device, participants were able to carry out more tasks independently (p < 0.001) and reported less difficulty with a range of near vision activities (p < 0.001). The study provides evidence that p-EVES devices can play a useful role in supplementing the range of low vision aids used to reduce activity limitation for near vision tasks.

  8. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Control of the vergence angle of a binocular system allows us to explore dynamic environments efficiently, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy in the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system.
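The gradient-based (luminance) family of disparity engines can be sketched in one dimension as a single Lucas-Kanade style least-squares step: solve for the small shift d that best explains the intensity difference between the two views. This is only an illustration of the technique family; the paper's FPGA implementation is 2-D and multiscale.

```python
import numpy as np

def gradient_disparity(left, right):
    """One-step gradient-based estimate of a small uniform shift between
    two image rows: least-squares solution of Ix * d = -(right - left).
    Valid only for sub-pixel to few-pixel shifts, which is why practical
    engines wrap this in a multiscale (coarse-to-fine) scheme."""
    Ix = np.gradient((left + right) / 2.0)   # spatial derivative (central differences)
    It = right - left                         # inter-view intensity difference
    return -np.sum(Ix * It) / np.sum(Ix * Ix)
```

The multiscale versions mentioned in the abstract would run this estimate on a coarse pyramid level first, then refine at finer levels where the residual shift is small enough for the linearization to hold.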

  9. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Control of the vergence angle of a binocular system allows us to explore dynamic environments efficiently, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy in the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system. PMID:22438737

  10. The 3D laser radar vision processor system

    NASA Astrophysics Data System (ADS)

    Sebok, T. M.

    1990-10-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets, using three-dimensional laser radar imagery, for use with a robotic system. The processor is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and to provide the information the robot needs to fetch and grasp targets in a space scenario.

  11. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets, using three-dimensional laser radar imagery, for use with a robotic system. The processor is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and to provide the information the robot needs to fetch and grasp targets in a space scenario.

  12. Vision Drives Correlated Activity without Patterned Spontaneous Activity in Developing Xenopus Retina

    PubMed Central

    Demas, James A.; Payne, Hannah; Cline, Hollis T.

    2011-01-01

    Developing amphibians need vision to avoid predators and locate food before their visual system circuits fully mature. Xenopus tadpoles can respond to visual stimuli as soon as retinal ganglion cells (RGCs) innervate the brain; in mammals, chicks, and turtles, however, RGCs reach their central targets many days, or even weeks, before their retinas are capable of vision. In the absence of vision, activity-dependent refinement in these amniote species is mediated by waves of spontaneous activity that periodically spread across the retina, correlating the firing of action potentials in neighboring RGCs. Theory suggests that retinorecipient neurons in the brain use patterned RGC activity to sharpen the retinotopy first established by genetic cues. We find that in both wild-type and albino Xenopus tadpoles, RGCs are spontaneously active at all stages of tadpole development studied, but their population activity never coalesces into waves. Even at the earliest stages recorded, visual stimulation dominates over spontaneous activity and can generate patterns of RGC activity similar to the locally correlated spontaneous activity observed in amniotes. In addition, we show that blocking AMPA and NMDA type glutamate receptors significantly decreases spontaneous activity in young Xenopus retina, but that blocking GABAA receptors does not. Our findings indicate that vision drives the correlated activity required for topographic map formation. They further suggest that developing retinal circuits in the two major subdivisions of tetrapods, amphibians and amniotes, evolved different strategies to supply appropriately patterned RGC activity to drive visual circuit refinement. PMID:21312343

  13. Vision restoration after brain and retina damage: the "residual vision activation theory".

    PubMed

    Sabel, Bernhard A; Henrich-Noack, Petra; Fedorov, Anton; Gall, Carolin

    2011-01-01

    Vision loss after retinal or cerebral visual injury (CVI) was long considered to be irreversible. However, there is considerable potential for vision restoration and recovery even in adulthood. Here, we propose the "residual vision activation theory" of how visual functions can be reactivated and restored. CVI is usually not complete, but some structures are typically spared by the damage. They include (i) areas of partial damage at the visual field border, (ii) "islands" of surviving tissue inside the blind field, (iii) extrastriate pathways unaffected by the damage, and (iv) downstream, higher-level neuronal networks. However, residual structures have a triple handicap to be fully functional: (i) fewer neurons, (ii) lack of sufficient attentional resources because of the dominant intact hemisphere caused by excitation/inhibition dysbalance, and (iii) disturbance in their temporal processing. Because of this resulting activation loss, residual structures are unable to contribute much to everyday vision, and their "non-use" further impairs synaptic strength. However, residual structures can be reactivated by engaging them in repetitive stimulation by different means: (i) visual experience, (ii) visual training, or (iii) noninvasive electrical brain current stimulation. These methods lead to strengthening of synaptic transmission and synchronization of partially damaged structures (within-systems plasticity) and downstream neuronal networks (network plasticity). Just as in normal perceptual learning, synaptic plasticity can improve vision and lead to vision restoration. This can be induced at any time after the lesion, at all ages and in all types of visual field impairments after retinal or brain damage (stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). If and to what extent vision restoration can be achieved is a function of the amount of residual tissue and its activation state. 
However, sustained improvements require repetitive stimulation which, depending on the method, may take days (noninvasive brain stimulation) or months (behavioral training). By becoming again engaged in everyday vision, (re)activation of areas of residual vision outlasts the stimulation period, thus contributing to lasting vision restoration and improvements in quality of life. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. 3D vision upgrade kit for the TALON robot system

    NASA Astrophysics Data System (ADS)

    Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-02-01

    In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo cameras and a flat-panel display that uses only the standard hardware, data, and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks more quickly but, for six of seven scenarios, also made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.

  15. Landmark navigation and autonomous landing approach with obstacle detection for aircraft

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.

    1997-06-01

    A machine perception system for aircraft and helicopters that uses multiple sensor data for state estimation is presented. By combining conventional aircraft sensors like gyros, accelerometers, the artificial horizon, aerodynamic measuring devices, and GPS with vision data taken by conventional CCD cameras mounted on a pan-tilt platform, the position of the craft can be determined, as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark the vision system should focus on, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g., due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
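
    Multi-sensor state estimation of the kind this record describes (inertial sensors fused with GPS and vision fixes) is classically done with recursive filtering. The abstract does not name the estimator, so the linear Kalman predict/update step below is a generic sketch, with all matrices and the 1-D constant-velocity model chosen purely for illustration.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and covariance from the previous step
    z    : new measurement (e.g., a GPS or vision-derived position fix)
    F, Q : state-transition model and process noise
    H, R : measurement model and measurement noise
    """
    x_pred = F @ x                        # predict the state forward
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

    With F encoding constant-velocity motion and H extracting position, repeated calls fuse noisy fixes into a smooth position/velocity estimate, which is the core of any aided-navigation loop like the one described.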

  16. Help for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV, and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.

  17. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Committee 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...

  18. 78 FR 5557 - Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...

  19. 77 FR 56254 - Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...

  20. An egocentric vision based assistive co-robot.

    PubMed

    Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang

    2013-06-01

    We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input for requesting the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object-finding task within a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object, after which the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design in engaging the human in the loop.

  1. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. The human brain has been found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our understanding of the brain, from neural networks to "cortical software". Symbols, predicates, and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. That allows the creation of new intelligent computer vision systems for the robotics and defense industries.
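
    The abstract's claim that mid-level processes such as clustering and perceptual grouping are graph/network transformations can be made concrete with a toy example: treat pixels as graph nodes, join 4-neighbours with similar values, and let connected components become the grouped regions. The tolerance and connectivity below are illustrative choices, not the paper's model.

```python
import numpy as np

def group_regions(img, tol=0.1):
    """Perceptual grouping as a graph transformation: pixels are nodes,
    edges join 4-neighbours whose values differ by at most tol, and
    depth-first flood fill labels each connected component as a region
    (figure/ground candidates)."""
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] >= 0:
                continue
            stack = [(sy, sx)]            # DFS over the similarity graph
            labels[sy, sx] = next_label
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] < 0
                            and abs(img[ny, nx] - img[y, x]) <= tol):
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels
```

    On an image with two flat patches, the output is exactly two labels, one per patch: the low-level pixel grid has been transformed into a small set of abstract region nodes, the kind of structure the abstract says higher-level knowledge models operate on.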

  2. Background staining of visualization systems in immunohistochemistry: comparison of the Avidin-Biotin Complex system and the EnVision+ system.

    PubMed

    Vosse, Bettine A H; Seelentag, Walter; Bachmann, Astrid; Bosman, Fred T; Yan, Pu

    2007-03-01

    The aim of this study was to evaluate specific immunostaining and background staining in formalin-fixed, paraffin-embedded human tissues with the 2 most frequently used immunohistochemical detection systems, Avidin-Biotin-Peroxidase (ABC) and EnVision+. A series of fixed tissues, including breast, colon, kidney, larynx, liver, lung, ovary, pancreas, prostate, stomach, and tonsil, was used in the study. Three monoclonal antibodies, 1 against a nuclear antigen (Ki-67), 1 against a cytoplasmic antigen (cytokeratin), and 1 against a cytoplasmic and membrane-associated antigen, as well as a polyclonal antibody against a nuclear and cytoplasmic antigen (S-100), were selected for these studies. When the ABC system was applied, immunostaining was performed with and without blocking of endogenous avidin-binding activity. The intensity of specific immunostaining and the percentage of stained cells were comparable for the 2 detection systems. The use of ABC caused widespread cytoplasmic and rare nuclear background staining in a variety of normal and tumor cells. Very strong background staining was observed in colon, gastric mucosa, liver, and kidney. Blocking avidin-binding capacity reduced background staining, but complete blocking was difficult to attain. With the EnVision+ system, no background staining occurred. Given the efficiency of the detection, equal for both systems or higher with EnVision+, and the significant background problem with ABC, we advocate the routine use of the EnVision+ system.

  3. Active vision in satellite scene analysis

    NASA Technical Reports Server (NTRS)

    Naillon, Martine

    1994-01-01

    In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic, high-level processing (the decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.

  4. 78 FR 16756 - Twenty-Second Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  5. 78 FR 55774 - Twenty Fourth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-11

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October...

  6. 75 FR 17202 - Eighth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  7. 75 FR 44306 - Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ... Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The...

  8. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY...: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will...

  9. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain has been found to emulate similar graph/network models, which means a very important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  10. A modeled economic analysis of a digital tele-ophthalmology system as used by three federal health care agencies for detecting proliferative diabetic retinopathy.

    PubMed

    Whited, John D; Datta, Santanu K; Aiello, Lloyd M; Aiello, Lloyd P; Cavallerano, Jerry D; Conlin, Paul R; Horton, Mark B; Vigersky, Robert A; Poropatich, Ronald K; Challa, Pratap; Darkins, Adam W; Bursell, Sven-Erik

    2005-12-01

    The objective of this study was to compare, using a 12-month time frame, the cost-effectiveness of a non-mydriatic digital tele-ophthalmology system (Joslin Vision Network) versus traditional clinic-based ophthalmoscopy examinations with pupil dilation to detect proliferative diabetic retinopathy and its consequences. Decision analysis techniques, including Monte Carlo simulation, were used to model the use of the Joslin Vision Network versus conventional clinic-based ophthalmoscopy among the entire diabetic populations served by the Indian Health Service, the Department of Veterans Affairs, and the active duty Department of Defense. The economic perspective analyzed was that of each federal agency. Data sources for costs and outcomes included the published literature, epidemiologic data, administrative data, market prices, and expert opinion. Outcome measures included the number of true positive cases of proliferative diabetic retinopathy detected, the number of patients treated with panretinal laser photocoagulation, and the number of cases of severe vision loss averted. In the base-case analyses, the Joslin Vision Network was the dominant strategy in all but two of the nine modeled scenarios, meaning that it was both less costly and more effective. In the active duty Department of Defense population, the Joslin Vision Network would be more effective but would cost an extra $1,618 per additional patient treated with panretinal laser photocoagulation and an additional $13,748 per severe vision loss event averted. Based on our economic model, the Joslin Vision Network has the potential to be more effective than clinic-based ophthalmoscopy for detecting proliferative diabetic retinopathy and averting cases of severe vision loss, and may do so at lower cost.
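
    The decision-analytic structure described here (Monte Carlo simulation over a screened population, yielding costs and cases detected per strategy, with "dominance" meaning cheaper and more effective) can be sketched generically. Every number below — prevalence, sensitivities, exam and treatment costs — is a made-up placeholder, not a figure from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                       # simulated diabetic patients

def simulate(sensitivity, cost_per_exam):
    """Hypothetical one-year screening model: each patient may have
    proliferative retinopathy; screening detects it with the given
    sensitivity; detection triggers laser treatment. Returns total
    cost and the number of true-positive cases detected."""
    has_pdr = rng.random(N) < 0.05                     # assumed 5% prevalence
    detected = has_pdr & (rng.random(N) < sensitivity)
    cost = N * cost_per_exam + detected.sum() * 800    # assumed $800 treatment
    return cost, int(detected.sum())

tele_cost, tele_hits = simulate(sensitivity=0.90, cost_per_exam=25)
clinic_cost, clinic_hits = simulate(sensitivity=0.80, cost_per_exam=60)

# A strategy that detects more cases at lower cost is "dominant";
# otherwise report the incremental cost per extra case detected (an ICER).
if tele_cost < clinic_cost and tele_hits > clinic_hits:
    verdict = "tele-ophthalmology dominates"
else:
    verdict = f"ICER = {(tele_cost - clinic_cost) / (tele_hits - clinic_hits):.0f}"
```

    Under these placeholder inputs the cheaper, more sensitive strategy dominates, mirroring the shape (though not the numbers) of the study's base-case result.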

  11. KSC-04PD-2634

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. Adm. Craig Steidle, associate administrator for Exploration Systems, speaks to attendees of the One NASA Leader-Led Workshop about the Agency's plan for achieving the Vision for Space Exploration. The workshop included senior leadership in the Agency, who talked about ongoing Transformation activities and Kennedy's role in the Vision for Space Exploration.

  12. Active Voodoo Dolls: A Vision Based Input Device for Nonrigid Control.

    DTIC Science & Technology

    1998-08-01

    A vision based technique for nonrigid control is presented that can be used for animation and video game applications. The user grasps a soft ... allowing the user to control it interactively. Our use of texture mapping hardware in tracking makes the system responsive enough for interactive animation and video game character control.

  13. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state of the art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC-135 reduced-gravity aircraft.

  14. 76 FR 11847 - Thirteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-03

    ... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  15. 76 FR 20437 - Fourteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-12

    ... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  16. Robotic vision. [process control applications

    NASA Technical Reports Server (NTRS)

    Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.

    1979-01-01

    Robotic vision, involving the use of a vision system to control a process, is discussed. The design and selection of active sensors, which emit radio waves, sound waves, or laser light to illuminate otherwise unobservable features in the scene, are considered, as are the design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into collections of contiguous picture elements sharing common characteristics such as color, brightness, or texture, is examined, with emphasis on edge detection. The IMFEX (image feature extractor) system, which performs edge detection and thresholding at 30 frames/sec television frame rates, is described. The template matching and discrimination approaches to recognizing objects are noted. Applications of robotic vision in industry, for tasks too monotonous or too dangerous for human workers, are mentioned.
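
    The edge-detection-plus-thresholding step that IMFEX performs in hardware can be sketched in a few lines of software. Sobel kernels and a relative threshold are a common textbook choice used here for illustration, not a description of the IMFEX circuitry.

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel kernels for horizontal and vertical intensity gradients
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def edge_map(img, rel_thresh=0.5):
    """Edge detection followed by thresholding: compute the gradient
    magnitude from the two Sobel responses, then binarize it at a
    fraction of its maximum to obtain a segmentation-ready edge image."""
    gx = convolve(img, KX)
    gy = convolve(img, KY)
    mag = np.hypot(gx, gy)
    return mag > rel_thresh * mag.max()
```

    On a synthetic step edge, only the pixels straddling the boundary survive the threshold, which is exactly the sparse edge image that segmentation and template matching consume downstream.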

  17. Comparison of occlusion break responses and vacuum rise times of phacoemulsification systems.

    PubMed

    Sharif-Kashani, Pooria; Fanney, Douglas; Injev, Val

    2014-07-30

    Occlusion break surge during phacoemulsification cataract surgery can lead to potential surgical complications. The purpose of this study was to quantify occlusion break surge and vacuum rise time of current phacoemulsification systems used in cataract surgery. Occlusion break surge at vacuum pressures between 200 and 600 mmHg was assessed with the Infiniti® Vision System, the WhiteStar Signature® Phacoemulsification System, and the Centurion® Vision System using gravity-fed fluidics. Centurion Active FluidicsTM were also tested at multiple intraoperative pressure target settings. Vacuum rise time was evaluated for Infiniti, WhiteStar Signature, Centurion, and Stellaris® Vision Enhancement systems. Rise time to vacuum limits of 400 and 600 mmHg was assessed at flow rates of 30 and 60 cc/minute. Occlusion break surge was analyzed by 2-way analysis of variance. The Centurion system exhibited substantially less occlusion break surge than the other systems tested. Surge area with Centurion Active Fluidics was similar to gravity fluidics at an equivalent bottle height. At all Centurion Active Fluidics intraoperative pressure target settings tested, surge was smaller than with Infiniti and WhiteStar Signature. Infiniti had the fastest vacuum rise time and Stellaris had the slowest. No system tested reached the 600-mmHg vacuum limit. In this laboratory study, Centurion had the least occlusion break surge and similar vacuum rise times compared with the other systems tested. Reducing occlusion break surge may increase safety of phacoemulsification cataract surgery.

  18. Comparison of occlusion break responses and vacuum rise times of phacoemulsification systems

    PubMed Central

    2014-01-01

    Background Occlusion break surge during phacoemulsification cataract surgery can lead to potential surgical complications. The purpose of this study was to quantify occlusion break surge and vacuum rise time of current phacoemulsification systems used in cataract surgery. Methods Occlusion break surge at vacuum pressures between 200 and 600 mmHg was assessed with the Infiniti® Vision System, the WhiteStar Signature® Phacoemulsification System, and the Centurion® Vision System using gravity-fed fluidics. Centurion Active FluidicsTM were also tested at multiple intraoperative pressure target settings. Vacuum rise time was evaluated for Infiniti, WhiteStar Signature, Centurion, and Stellaris® Vision Enhancement systems. Rise time to vacuum limits of 400 and 600 mmHg was assessed at flow rates of 30 and 60 cc/minute. Occlusion break surge was analyzed by 2-way analysis of variance. Results The Centurion system exhibited substantially less occlusion break surge than the other systems tested. Surge area with Centurion Active Fluidics was similar to gravity fluidics at an equivalent bottle height. At all Centurion Active Fluidics intraoperative pressure target settings tested, surge was smaller than with Infiniti and WhiteStar Signature. Infiniti had the fastest vacuum rise time and Stellaris had the slowest. No system tested reached the 600-mmHg vacuum limit. Conclusions In this laboratory study, Centurion had the least occlusion break surge and similar vacuum rise times compared with the other systems tested. Reducing occlusion break surge may increase safety of phacoemulsification cataract surgery. PMID:25074069

  19. Managing the Organizational Vision, Mission, and Planning: Five Steps toward a Successful Leadership Strategy

    ERIC Educational Resources Information Center

    Ricci, Frederick A.

    2011-01-01

    The vision of academic and business leaders often sets a pattern for all activities to follow. Organizational objectives, tasks, performance, and uses of resources must be aligned with the current and future societal trends that influence organizational goals. This paper identifies five steps leaders need to create a systems approach toward…

  20. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
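The pixel-averaging step described above can be sketched in software terms. The following block-averaging routine is an illustrative analogue of what the averaging circuit produces for a multi-resolution window (the function name and the k x k non-overlapping block scheme are our assumptions; the patent's circuit-level details are not reproduced here):

```python
# Illustrative sketch (not the patented circuit): downsample a window of
# a pixel array by averaging non-overlapping k x k blocks, yielding a
# lower-resolution view of a tracking target.

def average_window(pixels, k):
    """Average k x k blocks of a 2D list of pixel values."""
    rows, cols = len(pixels), len(pixels[0])
    out = []
    for r in range(0, rows - rows % k, k):
        row = []
        for c in range(0, cols - cols % k, k):
            block = [pixels[r + i][c + j] for i in range(k) for j in range(k)]
            row.append(sum(block) / (k * k))
        out.append(row)
    return out

img = [[10, 20, 30, 40],
       [10, 20, 30, 40],
       [50, 60, 70, 80],
       [50, 60, 70, 80]]
low_res = average_window(img, 2)  # 4x4 window reduced to 2x2
```

Averaging on-chip in this way lets a tracker read out a coarse, low-bandwidth view of each window while the full-resolution array is reserved for the targets themselves.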

  1. Pixel-wise deblurring imaging system based on active vision for structural health monitoring at a speed of 100 km/h

    NASA Astrophysics Data System (ADS)

    Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi

    2018-04-01

    In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision to compensate for the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for the motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually identifying abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system onto a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.
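To give a sense of the mirror rates involved, here is a back-of-envelope sketch. It is our own assumption, not the authors' control law: to freeze a wall moving at speed v at distance d, the line of sight must rotate at v/d rad/s, and since a mirror deflects the optical axis by twice its mechanical angle, the galvanometer itself sweeps at half that rate.

```python
import math

# Back-of-envelope estimate (our assumption, not the paper's formula) of
# the galvanometer sweep rate needed to cancel apparent wall motion.

def mirror_rate_deg_per_s(speed_kmh, distance_m):
    v = speed_kmh / 3.6            # km/h -> m/s
    los_rate = v / distance_m      # line-of-sight angular rate, rad/s
    return math.degrees(los_rate / 2.0)  # mirror angle = half of deflection

rate = mirror_rate_deg_per_s(100, 5.0)  # 100 km/h, 5 m to tunnel wall
```

Even under these rough assumptions the mirror must sweep on the order of a hundred degrees per second during each exposure, which is why a galvanometer (rather than a slower gimbal) is the natural actuator here.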

  2. Mobility and orientation aid for blind persons using artificial vision

    NASA Astrophysics Data System (ADS)

    Costa, Gustavo; Gusberti, Adrián; Graffigna, Juan Pablo; Guzzo, Martín; Nasisi, Oscar

    2007-11-01

    Blind or vision-impaired persons are limited in their normal life activities. Mobility and orientation of blind persons is an ever-present research subject because no complete solution has yet been found for these activities, which pose certain risks for the affected persons. The current work presents the design and development of a device conceived to capture environment information through stereoscopic vision. The images captured by a pair of video cameras are transferred to and processed by configurable and sequential FPGA and DSP devices that issue action signals to a tactile feedback system. Optimized processing algorithms are implemented to perform this feedback in real time. The selected components make the device portable, so that users can readily become accustomed to wearing it.

  3. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation..., Enhanced Flight Vision/ Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...

  4. Potato Operation: automatic detection of potato diseases

    NASA Astrophysics Data System (ADS)

    Lefebvre, Marc; Zimmerman, Thierry; Baur, Charles; Guegerli, Paul; Pun, Thierry

    1995-01-01

    The Potato Operation is a collaborative, multidisciplinary project in the domain of destructive testing of agricultural products. It aims at automating pulp sampling of potatoes in order to detect possible viral diseases. Such viruses can decrease field productivity by a factor of up to ten. A machine composed of three conveyor belts, a vision system, and a robotic arm, all controlled by a PC, has been built. Potatoes are brought one by one from a bulk to the vision system, where they are seized by a rotating holding device. The sprouts, where the viral activity is maximal, are then detected by an active vision process operating on multiple views. The 3D coordinates of the sampling point are communicated to the robot arm holding a drill. Some flesh is sampled by the drill and deposited into an ELISA plate. After sampling, the robot arm washes the drill in order to prevent any contamination. The PC simultaneously controls these processes: the conveying of the potatoes, the vision algorithms, and the sampling procedure. The master process, that is, the vision procedure, uses three methods to detect the sprouts. A profile analysis first locates the sprouts as protuberances. Two frontal analyses, based respectively on fluorescence and local variance, confirm the previous detection and provide the 3D coordinates of the sampling zone. The other two processes work by interruption of the master process.
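The "local variance" frontal analysis mentioned above can be illustrated with a minimal sketch. This is our own reconstruction of the general technique, not the project's code; the neighbourhood size and the interpretation (sprouts as regions whose texture differs from the surrounding skin) are assumptions for illustration:

```python
# Illustrative local-variance map, a common texture cue: compute the
# variance of pixel intensities in a small neighbourhood around a point.
# Regions whose variance differs sharply from the surrounding skin can
# flag candidate sprout locations.

def local_variance(pixels, r, c, k):
    """Variance of the (2k+1) x (2k+1) neighbourhood centred at (r, c)."""
    vals = [pixels[i][j]
            for i in range(r - k, r + k + 1)
            for j in range(c - k, c + k + 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

flat  = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]   # smooth patch
bumpy = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]   # textured patch
v_flat  = local_variance(flat, 1, 1, 1)     # low variance
v_bumpy = local_variance(bumpy, 1, 1, 1)    # high variance
```

Combining such a texture cue with the profile (protuberance) and fluorescence analyses, as the abstract describes, gives three independent confirmations before a sampling point is committed to the robot arm.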

  5. Using a Curricular Vision to Define Entrustable Professional Activities for Medical Student Assessment.

    PubMed

    Hauer, Karen E; Boscardin, Christy; Fulton, Tracy B; Lucey, Catherine; Oza, Sandra; Teherani, Arianne

    2015-09-01

    The new UCSF Bridges Curriculum aims to prepare students to succeed in today's health care system while simultaneously improving it. Curriculum redesign requires assessment strategies that ensure that graduates achieve competence in enduring and emerging skills for clinical practice. To design entrustable professional activities (EPAs) for assessment in a new curriculum and gather evidence of content validity. University of California, San Francisco, School of Medicine. Nineteen medical educators participated; 14 completed both rounds of a Delphi survey. The authors describe 5 steps for defining EPAs that encompass a curricular vision: refining the vision, defining draft EPAs, developing EPAs and assessment strategies, defining competencies and milestones, and mapping milestones to EPAs. A Q-sort activity and Delphi survey involving local medical educators created consensus and prioritization of milestones for each EPA. For 4 EPAs, most milestones had content validity indices (CVIs) of at least 78%. For 2 EPAs, 2 to 4 milestones did not achieve CVIs of 78%. We demonstrate a stepwise procedure for developing EPAs that capture essential physician work activities defined by a curricular vision. Structured procedures for soliciting faculty feedback and mapping milestones to EPAs provide content validity.
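The content validity index used above has a simple standard form: the proportion of panelists who rate an item as relevant. The sketch below applies that definition with the 78% cutoff cited in the abstract; the rating scale and the individual ratings are invented for illustration and are not the study's data.

```python
# Illustrative content validity index (CVI): fraction of panelists rating
# a milestone as relevant (here, 3 or 4 on an assumed 4-point scale).
# Ratings below are invented; only the 78% threshold comes from the text.

def cvi(ratings, relevant=(3, 4)):
    return sum(1 for r in ratings if r in relevant) / len(ratings)

ratings = [4, 3, 4, 4, 2, 3, 4, 3, 4, 4, 3, 4, 2, 4]  # 14 panelists
milestone_ok = cvi(ratings) >= 0.78
```

With 12 of 14 hypothetical panelists rating the item relevant, the CVI is about 0.86, so this milestone would clear the 78% bar.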

  6. 75 FR 38863 - Tenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-06

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight...

  7. Research and Development at NASA

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Vision for Space Exploration marks the next segment of NASA's continuing journey to find answers to compelling questions about the origins of the solar system, the existence of life beyond Earth, and the ability of humankind to live on other worlds. The success of the Vision relies upon the ongoing research and development activities conducted at each of NASA's 10 field centers. In an effort to promote synergy across NASA as it works to meet its long-term goals, the Agency restructured its Strategic Enterprises into four Mission Directorates that align with the Vision. Consisting of Exploration Systems, Space Operations, Science, and Aeronautics Research, these directorates provide NASA Headquarters and the field centers with a streamlined approach to continue exploration both in space and on Earth.

  8. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  9. Photopic transduction implicated in human circadian entrainment

    NASA Technical Reports Server (NTRS)

    Zeitzer, J. M.; Kronauer, R. E.; Czeisler, C. A.

    1997-01-01

    Despite the preeminence of light as the synchronizer of the circadian timing system, the phototransductive machinery in mammals which transmits photic information from the retina to the hypothalamic circadian pacemaker remains largely undefined. To determine the class of photopigments which this phototransductive system uses, we exposed a group (n = 7) of human subjects to red light below the sensitivity threshold of a scotopic (i.e. rhodopsin/rod-based) system, yet of sufficient strength to activate a photopic (i.e. cone-based) system. Exposure to this light stimulus was sufficient to reset significantly the human circadian pacemaker, indicating that the cone pigments which mediate color vision can also mediate circadian vision.

  10. INTRODUCTION TO THE MOVEMENT SYSTEM AS THE FOUNDATION FOR PHYSICAL THERAPIST PRACTICE EDUCATION AND RESEARCH.

    PubMed

    Saladin, Lisa; Voight, Michael

    2017-11-01

    In 2013, the American Physical Therapy Association (APTA) adopted an inspiring new vision, "Transforming society by optimizing movement to improve the human experience." This new vision for our profession calls us to action as physical therapists to transform society by using our skills, knowledge, and expertise related to the movement system in order to optimize movement, promote health and wellness, mitigate the progression of impairments, and prevent the development of (additional) disability. The guiding principle of the new vision is "identity," which can be summarized as "The physical therapy profession will define and promote the movement system as the foundation for optimizing movement to improve the health of society." Recognition and validation of the movement system is essential to understand the structure, function, and potential of the human body. As currently defined, the "movement system" represents the collection of systems (cardiovascular, pulmonary, endocrine, integumentary, nervous, and musculoskeletal) that interact to move the body or its component parts. By better characterizing physical therapists as movement system experts, we seek to solidify our professional identity within the medical community and society. The physical therapist will be responsible for evaluating and managing an individual's movement system across the lifespan to promote optimal development; diagnose impairments, activity limitations, and participation restrictions; and provide interventions targeted at preventing or ameliorating activity limitations and participation restrictions.

  11. Capacity building in e-health and health informatics: a review of the global vision and informatics educational initiatives of the American Medical Informatics Association.

    PubMed

    Detmer, D E

    2010-01-01

    Substantial global and national commitment will be required for current healthcare systems and health professional practices to become learning care systems utilizing information and communications technology (ICT) empowered by informatics. To engage this multifaceted challenge, a vision is required that shifts the emphasis from silos of activities toward integrated systems. Successful systems will include a set of essential elements, e.g., a sufficient ICT infrastructure, evolving health care processes based on evidence and harmonized to local cultures, a fresh view toward educational preparation, sound and sustained policy support, and ongoing applied research and development. Increasingly, leaders are aware that ICT empowered by informatics must be an integral part of their national and regional visions. This paper sketches out the elements of what is needed in terms of objectives and some steps toward achieving them. It summarizes some of the progress that has been made to date by the American and International Medical Informatics Associations working separately as well as collaborating to conceptualize informatics capacity building in order to bring this vision to reality in low resource nations in particular.

  12. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    PubMed

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
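Requirements 4 through 6 above (convergence of top-down and bottom-up information, a saccade threshold, and task relevance as an excitation/inhibition balance) can be condensed into a minimal sketch. This is our own toy illustration of a priority map, not the authors' implementation; the maps, weights, and threshold values are invented:

```python
# Toy priority-map sketch: combine a bottom-up salience map with a
# top-down task-relevance map into one map, then trigger a saccade only
# when the peak crosses a threshold. Weights and values are illustrative.

def priority_map(salience, relevance, w_bu=0.5, w_td=0.5):
    return [[w_bu * s + w_td * t for s, t in zip(srow, trow)]
            for srow, trow in zip(salience, relevance)]

def saccade_target(pmap, threshold):
    """Return (row, col) of the peak if it crosses threshold, else None."""
    peak = max((v, r, c) for r, row in enumerate(pmap)
               for c, v in enumerate(row))
    return (peak[1], peak[2]) if peak[0] >= threshold else None

salience  = [[0.1, 0.9], [0.2, 0.3]]   # bottom-up feature contrast
relevance = [[0.8, 0.1], [0.1, 0.1]]   # top-down task weighting
target = saccade_target(priority_map(salience, relevance), threshold=0.45)
```

The winner-take-all peak here plays the role the paper ascribes to the lateral intraparietal "priority map": a single consolidated location that directs the next saccade.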

  13. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis

    NASA Astrophysics Data System (ADS)

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Objective. Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. Approach. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against a major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee subject. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. Main results. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but resulted in reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. Significance. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.
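The semi-autonomous yet user-overridable behaviour described above can be caricatured in a few lines. This is an illustrative rule of our own devising, not the CASP implementation: vision-derived object size proposes a grasp, while a sufficiently strong myoelectric command always takes precedence, preserving full user control. The size brackets and the override threshold are invented.

```python
# Illustrative context-aware grasp selection (not the CASP controller):
# the automatic proposal comes from an estimated object width, but a
# strong myoelectric command overrides it, keeping the user in charge.

def select_grasp(object_width_cm, emg_level, emg_override=0.6):
    if emg_level >= emg_override:     # user command wins outright
        return "user-commanded"
    if object_width_cm < 3:
        return "pinch"
    if object_width_cm < 8:
        return "tripod"
    return "power"

grasp = select_grasp(object_width_cm=6.5, emg_level=0.2)
```

The design point mirrored here is the one the abstract emphasizes: automation reduces effort for routine preshaping, but it never removes the user's ability to command the hand directly.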

  14. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis.

    PubMed

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against a major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee subject. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but resulted in reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  15. The role of the positive emotional attractor in vision and shared vision: toward effective leadership, relationships, and engagement

    PubMed Central

    Boyatzis, Richard E.; Rochford, Kylie; Taylor, Scott N.

    2015-01-01

    Personal and shared vision have a long history in management and organizational practices yet only recently have we begun to build a systematic body of empirical knowledge about the role of personal and shared vision in organizations. As the introductory paper for this special topic in Frontiers in Psychology, we present a theoretical argument as to the existence and critical role of two states in which a person, dyad, team, or organization may find themselves when engaging in the creation of a personal or shared vision: the positive emotional attractor (PEA) and the negative emotional attractor (NEA). These two primary states are strange attractors, each characterized by three dimensions: (1) positive versus negative emotional arousal; (2) endocrine arousal of the parasympathetic nervous system versus sympathetic nervous system; and (3) neurological activation of the default mode network versus the task positive network. We argue that arousing the PEA is critical when creating or affirming a personal vision (i.e., sense of one’s purpose and ideal self). We begin our paper by reviewing the underpinnings of our PEA–NEA theory, briefly review each of the papers in this special issue, and conclude by discussing the practical implications of the theory. PMID:26052300

  16. [Effectiveness of magnetotherapy in optic nerve atrophy. A preliminary study].

    PubMed

    Zobina, L V; Orlovskaia, L S; Sokov, S L; Sabaeva, G F; Kondé, L A; Iakovlev, A A

    1990-01-01

    The effects of magnetotherapy on visual functions (visual acuity and visual field), on retinal bioelectric activity, on conduction in the visual pathways, and on intraocular circulation were studied in 88 patients (160 eyes) with optic nerve atrophy. A Soviet Polyus-1 low-frequency magnetotherapy apparatus was employed, with magnetic induction of about 10 mT, exposure of 7-10 minutes, and 10-15 sessions per course. In patients with low visual acuity (below 0.04), acuity improved in 50 percent of cases. The number of patients with visual acuity of 0.2 increased from 46 before treatment to 75. Magnetotherapy improved ocular hemodynamics in patients with optic nerve atrophy, reduced conduction time along the visual pathways, and stimulated the retinal ganglion cells. The maximal effect was achieved after 10 magnetotherapy sessions. A repeated course carried out after 6-8 months promoted stabilization of the process.

  17. GPS Usage in a Population of Low-Vision Drivers.

    PubMed

    Cucuras, Maria; Chun, Robert; Lee, Patrick; Jay, Walter M; Pusateri, Gregg

    2017-01-01

    We surveyed bioptic and non-bioptic low-vision drivers in Illinois, USA, to determine their usage of global positioning system (GPS) devices. Low-vision patients completed an IRB-approved phone survey regarding driving demographics and usage of GPS while driving. Participants were required to be active drivers with an Illinois driver's license, and met one of the following criteria: best-corrected visual acuity (BCVA) less than or equal to 20/40, central or significant peripheral visual field defects, or a combination of both. Of 27 low-vision drivers, 10 (37%) used GPS while driving. The average age for GPS users was 54.3 and for non-users was 77.6. All 10 drivers who used GPS while driving reported increased comfort or safety level. Since non-GPS users were significantly older than GPS users, it is likely that older participants would benefit from GPS technology training from their low-vision eye care professionals.

  18. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma.

    PubMed

    Murphy, Matthew C; Conner, Ian P; Teng, Cindy Y; Lawrence, Jesse D; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S; Chan, Kevin C

    2016-08-11

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. These results may prove useful for identifying early glaucoma mechanisms, for detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and for guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease.

  19. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma

    PubMed Central

    Murphy, Matthew C.; Conner, Ian P.; Teng, Cindy Y.; Lawrence, Jesse D.; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A.; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S.; Chan, Kevin C.

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. These results may prove useful for identifying early glaucoma mechanisms, for detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and for guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  20. Leisure Activity Participation of Elderly Individuals with Low Vision.

    ERIC Educational Resources Information Center

    Heinemann, Allen W.

    1988-01-01

    Studied low vision elderly clinic patients (N=63) who reported participation in six categories of leisure activities currently and at onset of vision loss. Found subjects reported significant declines in five of six activity categories. Found prior activity participation was related to current participation only for active crafts, participatory…

  1. Low Gravity Materials Science Research for Space Exploration

    NASA Technical Reports Server (NTRS)

    Clinton, R. G., Jr.; Semmes, Edmund B.; Schlagheck, Ronald A.; Bassler, Julie A.; Cook, Mary Beth; Wargo, Michael J.; Sanders, Gerald B.; Marzwell, Neville I.

    2004-01-01

    On January 14, 2004, the President of the United States announced a new vision for the United States civil space program. The Administrator of the National Aeronautics and Space Administration (NASA) has the responsibility to implement this new vision. The President also created a Presidential Commission 'to obtain recommendations concerning implementation of the new vision for space exploration.' The President's Commission recognized that achieving the exploration objectives would require significant technical innovation, research, and development in focal areas defined as 'enabling technologies.' Among the 17 enabling technologies identified for initial focus were advanced structures; advanced power and propulsion; closed-loop life support and habitability; extravehicular activity system; autonomous systems and robotics; scientific data collection and analysis; biomedical risk mitigation; and planetary in situ resource utilization. The Commission also recommended realignment of NASA Headquarters organizations to support the vision for space exploration. NASA has aggressively responded in its planning to support the vision for space exploration and with the current considerations of the findings and recommendations from the Presidential Commission. This presentation will examine the transformation and realignment activities to support the vision for space exploration that are underway in the microgravity materials science program. The heritage of the microgravity materials science program, in the context of residence within the organizational structure of the Office of Biological and Physical Research, and thematic and sub-discipline based research content areas, will be briefly examined as the starting point for the ongoing transformation. Overviews of future research directions will be presented and the status of organizational restructuring at NASA Headquarters, with respect to influences on the microgravity materials science program, will be discussed. 
Additional information is included in the original extended abstract.

  2. Portable electronic vision enhancement systems in comparison with optical magnifiers for near vision activities: an economic evaluation alongside a randomized crossover trial.

    PubMed

    Bray, Nathan; Brand, Andrew; Taylor, John; Hoare, Zoe; Dickinson, Christine; Edwards, Rhiannon T

    2017-08-01

    To determine the incremental cost-effectiveness of portable electronic vision enhancement system (p-EVES) devices compared with optical low vision aids (LVAs), for improving near vision visual function, quality of life and well-being of people with a visual impairment. An AB/BA randomized crossover trial design was used. Eighty-two participants completed the study. Participants were current users of optical LVAs who had not tried a p-EVES device before and had a stable visual impairment. The trial intervention was the addition of a p-EVES device to the participant's existing optical LVA(s) for 2 months, and the control intervention was optical LVA use only, for 2 months. Cost-effectiveness and cost-utility analyses were conducted from a societal perspective. The mean cost of the p-EVES intervention was £448. Carer costs were £30 (4.46 hr) less for the p-EVES intervention compared with the LVA only control. The mean difference in total costs was £417. Bootstrapping gave an incremental cost-effectiveness ratio (ICER) of £736 (95% CI £481 to £1525) for a 7% improvement in near vision visual function. Cost per quality-adjusted life year (QALY) ranged from £56 991 (lower 95% CI = £19 801) to £66 490 (lower 95% CI = £23 055). Sensitivity analysis varying the commercial price of the p-EVES device reduced ICERs by up to 75%, with cost per QALYs falling below £30 000. Portable electronic vision enhancement system (p-EVES) devices are likely to be a cost-effective use of healthcare resources for improving near vision visual function, but this does not translate into cost-effective improvements in quality of life, capability or well-being. © 2016 The Authors. Acta Ophthalmologica published by John Wiley & Sons Ltd on behalf of Acta Ophthalmologica Scandinavica Foundation and European Association for Vision & Eye Research.
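The incremental cost-effectiveness ratio (ICER) driving the conclusions above is simply the difference in mean costs divided by the difference in mean effects, compared against a willingness-to-pay threshold. The sketch below applies that standard definition; the £417 cost difference comes from the abstract, but the QALY gain and the £30,000 threshold comparison are illustrative assumptions, not a recomputation of the trial's results.

```python
# Standard ICER definition: incremental cost divided by incremental
# effect. The 0.01 QALY gain is an invented illustration; only the £417
# mean cost difference is taken from the abstract.

def icer(delta_cost, delta_effect):
    return delta_cost / delta_effect

cost_per_qaly = icer(delta_cost=417.0, delta_effect=0.01)
acceptable = cost_per_qaly <= 30_000  # common UK willingness-to-pay bound
```

Under these toy numbers the cost per QALY (£41,700) exceeds the £30,000 benchmark, which parallels the study's finding that p-EVES devices are cost-effective for near vision function but not, at list price, for quality-of-life gains.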

  3. Exploration EVA System

    NASA Technical Reports Server (NTRS)

    Kearney, Lara

    2004-01-01

In January 2004, the President announced a new Vision for Space Exploration. NASA's Office of Exploration Systems has identified Extravehicular Activity (EVA) as a critical capability for supporting the Vision for Space Exploration. EVA is required for all phases of the Vision, both in-space and planetary. Supporting the human outside the protective environment of the vehicle or habitat and allowing him/her to perform efficient and effective work requires an integrated EVA "System of systems." The EVA System includes EVA suits, airlocks, tools and mobility aids, and human rovers. At the core of the EVA System is the highly technical EVA suit, which consists mainly of a life support system and a pressure/environmental protection garment. The EVA suit, in essence, is a miniature spacecraft, which combines many different sub-systems, such as life support, power, communications, avionics, robotics, pressure systems and thermal systems, into a single autonomous unit. Development of a new EVA suit requires technology advancements similar to those required in the development of a new space vehicle. A majority of the technologies necessary to develop advanced EVA systems are currently at a low Technology Readiness Level of 1-3. This is particularly true for the long-pole technologies of the life support system.


  4. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  5. Visual tracking strategies for intelligent vehicle highway systems

    NASA Astrophysics Data System (ADS)

    Smith, Christopher E.; Papanikolopoulos, Nikolaos P.; Brandt, Scott A.; Richards, Charles

    1995-01-01

    The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we present robust techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the controlled active vision framework can be utilized to provide a visual sensing modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given situation.

  6. A Practical Solution Using A New Approach To Robot Vision

    NASA Astrophysics Data System (ADS)

    Hudson, David L.

    1984-01-01

Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens to robot vision more industrial applications that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another and lens and light sources from yet others.
The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.

  7. Vision Restoration in Glaucoma by Activating Residual Vision with a Holistic, Clinical Approach: A Review.

    PubMed

    Sabel, Bernhard A; Cárdenas-Morales, Lizbeth; Gao, Ying

    2018-01-01

    How to cite this article: Sabel BA, Cárdenas-Morales L, Gao Y. Vision Restoration in Glaucoma by activating Residual Vision with a Holistic, Clinical Approach: A Review. J Curr Glaucoma Pract 2018;12(1):1-9.

  8. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

In recent years, advances in neonatal care have become increasingly important as the rate of low-birth-weight births rises. Respiration in low-birth-weight babies is particularly unstable because their central nervous and respiratory functions are immature, so these infants often develop respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using a cardio-respiratory monitor and pulse oximeter. These contact-type sensors can measure respiratory rate and SpO2 (Saturation of Peripheral Oxygen); however, because a contact-type sensor can damage the newborn's skin, it imposes a real burden on neonatal respiratory monitoring. We therefore developed a respiratory monitoring system for newborns using an FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that enables non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal region during respiration. We conducted a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using an FG vision sensor enables a minimally invasive procedure.
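The abstract does not give the waveform-processing details, but the step from a measured chest-displacement signal to a respiratory rate can be sketched as mean-crossing counting (the function name, sampling rate, and synthetic signal below are illustrative assumptions, not taken from the paper):

```python
import math

def respiratory_rate_bpm(displacement, fs):
    """Estimate breaths per minute by counting positive-going crossings
    of the signal mean in a chest-displacement waveform sampled at fs Hz."""
    mean = sum(displacement) / len(displacement)
    x = [d - mean for d in displacement]
    # one positive-going mean crossing per breath cycle
    crossings = sum(1 for a, b in zip(x, x[1:]) if a < 0 <= b)
    minutes = len(displacement) / fs / 60.0
    return crossings / minutes

# Synthetic 0.5 Hz breathing motion sampled at 10 Hz for 60 s: ~30 breaths/min
fs = 10.0
signal = [math.sin(2 * math.pi * 0.5 * t / fs + 0.3) for t in range(int(60 * fs))]
print(respiratory_rate_bpm(signal, fs))
```

A real system would first low-pass filter the 3D range data to suppress body movement artifacts before counting cycles.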

  9. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified to implement them in vision systems. The subject far exceeds the limits of a journal paper, so only the most important aspects are discussed, along with an overview of the major areas of application for colorimetric vision systems. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
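One colorimetric principle the record alludes to is that camera RGB is device-dependent: before colors can be measured, responses must be mapped into a standard space such as CIE XYZ. A minimal sketch, assuming the camera happens to be characterized by the standard sRGB/D65 transform (a real colorimetric vision system would calibrate its own 3×3 matrix against reference patches):

```python
def srgb_to_xyz(r, g, b):
    """Convert sRGB values in [0, 1] to CIE XYZ (D65 white point):
    undo the sRGB transfer curve, then apply the standard 3x3 matrix."""
    def linearize(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

# sRGB white maps to luminance Y = 1.0 by construction of the matrix
print(srgb_to_xyz(1.0, 1.0, 1.0))
```

Skipping the linearization step, or applying the matrix to gamma-encoded values, is exactly the kind of colorimetry error the paper warns vision-system designers about.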

  10. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  11. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor is developed, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, so that minimal data processing is required to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision problems.
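The hyperacuity claim rests on a general principle that can be sketched independently of the Wyoming hardware: two sensors with overlapping Gaussian sensitivity profiles localize a point target far more finely than their spacing, because the ratio of their responses varies smoothly with position. A hypothetical one-dimensional illustration (the profile width and sensor positions are invented):

```python
import math

def response(target, center, sigma=0.5):
    """Gaussian sensitivity profile of one photosensor facet."""
    return math.exp(-((target - center) ** 2) / (2 * sigma ** 2))

def locate(r1, r2, c1, c2, sigma=0.5):
    """Recover target position from two overlapping Gaussian responses.
    ln(r1/r2) is linear in the target position, giving a closed form."""
    return (c1 + c2) / 2 + sigma ** 2 * math.log(r1 / r2) / (c1 - c2)

# Two sensors 1 unit apart pin down a target between them far more
# precisely than the inter-sensor spacing (noise-free case).
x = 0.3
r1, r2 = response(x, 0.0), response(x, 1.0)
print(locate(r1, r2, 0.0, 1.0))  # recovers ~0.3
```

With sensor noise the achievable precision degrades gracefully, but it remains a fraction of the spacing, which is the essence of the hyperacuity the fly-eye array exploits.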

  12. Vision and Vestibular System Dysfunction Predicts Prolonged Concussion Recovery in Children.

    PubMed

    Master, Christina L; Master, Stephen R; Wiebe, Douglas J; Storey, Eileen P; Lockyer, Julia E; Podolak, Olivia E; Grady, Matthew F

    2018-03-01

    Up to one-third of children with concussion have prolonged symptoms lasting beyond 4 weeks. Vision and vestibular dysfunction is common after concussion. It is unknown whether such dysfunction predicts prolonged recovery. We sought to determine which vision or vestibular problems predict prolonged recovery in children. A retrospective cohort of pediatric patients with concussion. A subspecialty pediatric concussion program. Four hundred thirty-two patient records were abstracted. Presence of vision or vestibular dysfunction upon presentation to the subspecialty concussion program. The main outcome of interest was time to clinical recovery, defined by discharge from clinical follow-up, including resolution of acute symptoms, resumption of normal physical and cognitive activity, and normalization of physical examination findings to functional levels. Study subjects were 5 to 18 years (median = 14). A total of 378 of 432 subjects (88%) presented with vision or vestibular problems. A history of motion sickness was associated with vestibular dysfunction. Younger age, public insurance, and presence of headache were associated with later presentation for subspecialty concussion care. Vision and vestibular problems were associated within distinct clusters. Provocable symptoms with vestibulo-ocular reflex (VOR) and smooth pursuits and abnormal balance and accommodative amplitude (AA) predicted prolonged recovery time. Vision and vestibular problems predict prolonged concussion recovery in children. A history of motion sickness may be an important premorbid factor. Public insurance status may represent problems with disparities in access to concussion care. Vision assessments in concussion must include smooth pursuits, saccades, near point of convergence (NPC), and accommodative amplitude (AA). 
A comprehensive, multidomain assessment is essential to predict prolonged recovery time and enable active intervention with specific school accommodations and targeted rehabilitation.

  13. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.

  14. Vision impairment and corrective considerations of civil airmen.

    PubMed

    Nakagawara, V B; Wood, K J; Montgomery, R W

    1995-08-01

Civil aviation is a major commercial and technological industry in the United States. The Federal Aviation Administration (FAA) is responsible for the regulation and promotion of aviation safety in the National Airspace System. To guide FAA policy changes and educational programs for aviation personnel about vision impairment and the use of corrective ophthalmic devices, the demographics of the civil airman population were reviewed. Demographic data from 1971-1991 were extracted from FAA publications and databases. Approximately 48 percent of the civil airman population is 40 years of age or older (average age = 39.8 years). Many of these aviators are becoming presbyopic and will need corrective devices for near and intermediate vision. In fact, there has been approximately a 12 percent increase in the number of aviators with near vision restrictions during the past decade. Ophthalmic considerations for prescribing and dispensing eyewear for civil aviators are discussed. The correction of near and intermediate vision conditions for older pilots will be a major challenge for eye care practitioners in the next decade. Knowledge of the unique vision and environmental requirements of the civilian airman can assist clinicians in suggesting alternative vision corrective devices better suited for a particular aviation activity.

  15. The effect of gender and level of vision on the physical activity level of children and adolescents with visual impairment.

    PubMed

    Aslan, Ummuhan Bas; Calik, Bilge Basakcı; Kitiş, Ali

    2012-01-01

This study was planned to determine the physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level. A total of 30 visually impaired children and adolescents (16 with low vision and 14 blind) aged between 8 and 16 years participated in the study. Physical activity level was evaluated with a physical activity diary (PAD) and a one-mile run/walk test (OMR-WT). No difference was found between the PAD and OMR-WT results of low vision and blind children and adolescents. The visually impaired children and adolescents were found not to participate in vigorous physical activity. A difference was found in favor of low vision boys in terms of mild and moderate activities and OMR-WT durations. However, no difference was found between the physical activity levels of blind girls and boys. The results of our study suggest that the physical activity level of visually impaired children and adolescents was low, and that gender affected physical activity in low vision children and adolescents. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. The hydrogen technology assessment, phase 1

    NASA Technical Reports Server (NTRS)

    Bain, Addison

    1991-01-01

The purpose of this phase 1 report is to begin to form the information base of the economics and energy uses of hydrogen-related technologies on which the members of the National Hydrogen Association (NHA) can build a hydrogen vision of the future. The secondary goal of this report is the development of NHA positions on national research, development, and demonstration opportunities. The third goal, with the aid of the established hydrogen vision and NHA positions, is to evaluate ongoing federal research goals and activities. The evaluations will be performed in a manner that compares the costs associated with using systems that achieve those goals against the cost of performing those tasks today with fossil fuels. From this ongoing activity should emerge an NHA information base, one or more hydrogen visions of the future, and cost and performance targets for hydrogen applications to compete in the marketplace.

  17. Functional preservation and variation in the cone opsin genes of nocturnal tarsiers

    PubMed Central

    Ong, Perry S.; Perry, George H.

    2017-01-01

    The short-wavelength sensitive (S-) opsin gene OPN1SW is pseudogenized in some nocturnal primates and retained in others, enabling dichromatic colour vision. Debate on the functional significance of this variation has focused on dark conditions, yet many nocturnal species initiate activity under dim (mesopic) light levels that can support colour vision. Tarsiers are nocturnal, twilight-active primates and exemplary visual predators; they also express different colour vision phenotypes, raising the possibility of discrete adaptations to mesopic conditions. To explore this premise, we conducted a field study in two stages. First, to estimate the level of functional constraint on colour vision, we sequenced OPN1SW in 12 wild-caught Philippine tarsiers (Tarsius syrichta). Second, to explore whether the dichromatic visual systems of Philippine and Bornean (Tarsius bancanus) tarsiers—which express alternate versions of the medium/long-wavelength sensitive (M/L-) opsin gene OPN1MW/OPN1LW—confer differential advantages specific to their respective habitats, we used twilight and moonlight conditions to model the visual contrasts of invertebrate prey. We detected a signature of purifying selection for OPN1SW, indicating that colour vision confers an adaptive advantage to tarsiers. However, this advantage extends to a relatively small proportion of prey–background contrasts, and mostly brown arthropod prey amid leaf litter. We also found that the colour vision of T. bancanus is advantageous for discriminating prey under twilight that is enriched in shorter (bluer) wavelengths, a plausible idiosyncrasy of understorey habitats in Borneo. This article is part of the themed issue ‘Vision in dim light’. PMID:28193820

  18. Wearable Improved Vision System for Color Vision Deficiency Correction

    PubMed Central

    Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria

    2017-01-01

Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in a subject with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the vision color test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827

  19. Impact of age-related macular degeneration in patients with glaucoma: understanding the patients' perspective.

    PubMed

    Skalicky, Simon E; Fenwick, Eva; Martin, Keith R; Crowston, Jonathan; Goldberg, Ivan; McCluskey, Peter

    2016-07-01

The aim of the study was to measure the impact of age-related macular degeneration on vision-related activity limitation and preference-based status for glaucoma patients. This was a cross-sectional study. Two hundred glaucoma patients, of whom 73 had age-related macular degeneration, were included in the research. Sociodemographic information, visual field parameters and visual acuity were collected. Age-related macular degeneration was scored using the Age-Related Eye Disease Study system. The Rasch-analysed Glaucoma Activity Limitation-9 and the Visual Function Questionnaire Utility Index measured vision-related activity limitation and preference-based status, respectively. Regression models determined factors predictive of vision-related activity limitation and preference-based status. Differential item functioning compared Glaucoma Activity Limitation-9 item difficulty for those with and without age-related macular degeneration. Mean age was 73.7 (±10.1) years. Lower better eye mean deviation (β: 1.42, 95% confidence interval: 1.24-1.63, P < 0.001) and age-related macular degeneration (β: 1.26, 95% confidence interval: 1.10-1.44, P = 0.001) were independently associated with worse vision-related activity limitation. Worse eye visual acuity (β: 0.978, 95% confidence interval: 0.961-0.996, P = 0.018), high-risk age-related macular degeneration (β: 0.981, 95% confidence interval: 0.965-0.998, P = 0.028) and severe glaucoma (β: 0.982, 95% confidence interval: 0.966-0.998, P = 0.032) were independently associated with worse preference-based status. Glaucoma patients with age-related macular degeneration found using stairs, walking on uneven ground and judging distances of foot to step/curb significantly more difficult than those without age-related macular degeneration. Vision-related activity limitation and preference-based status are negatively impacted by severe glaucoma and age-related macular degeneration.
Patients with both conditions perceive increased difficulty walking safely compared with patients with glaucoma alone. © 2015 Royal Australian and New Zealand College of Ophthalmologists.

  20. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The light source contains a thermoelectric cooler (TEC), can be synchronized with camera focusing, and has automatic light intensity adjustment, which together ensure image quality. The composition of the system is described in detail; on this basis, beam collimation, the LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, blind-spot information (BLIS), parking assistance and alarm functions, in both day and night conditions.
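The record does not specify the LD temperature-control law; a common choice for holding a laser diode at a set point with a TEC is a discrete PI loop. A hypothetical sketch (the gains, the toy thermal plant, and all temperatures are invented for illustration):

```python
def make_pi_controller(setpoint, kp=5.0, ki=1.0):
    """Discrete PI controller: returns a step(measured, dt) -> drive closure.
    Positive drive heats, negative drive cools (a TEC is bidirectional)."""
    integral = 0.0
    def step(measured, dt):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt
        return kp * error + ki * integral
    return step

# Toy first-order thermal plant: the diode drifts toward 40 C from self-heating
# unless the TEC drives it; the PI loop pulls it to the 25 C set point.
temp, ambient, tau, dt = 40.0, 40.0, 2.0, 0.1
control = make_pi_controller(25.0)
for _ in range(500):
    drive = control(temp, dt)
    temp += (drive - (temp - ambient)) / tau * dt
print(round(temp, 2))  # settles near the 25 C set point
```

The integral term is what removes the steady-state offset here: at equilibrium the proportional term is zero, and the accumulated integral alone supplies the constant cooling drive.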

  1. Comparison of vision through surface modulated and spatial light modulated multifocal optics.

    PubMed

    Vinas, Maria; Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; Benedi-Garcia, Clara; LaVilla, Edward Anthony; Schwiegerling, Jim; Marcos, Susana

    2017-04-01

Spatial light modulators (SLMs) are increasingly used as active elements in adaptive optics (AO) systems to simulate optical corrections, in particular multifocal presbyopic corrections. In this study, we compared vision through lathe-manufactured multi-zone (2-4) multifocal surfaces, segmented angularly and radially, with vision through the same corrections simulated with an SLM in a custom-developed two-active-element AO visual simulator. We found that perceived visual quality measured through the real manufactured surfaces and the SLM-simulated phase maps corresponded closely. Optical simulations predicted the differences in perceived visual quality across designs at far distance, but showed some discrepancies at intermediate and near.
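An angularly segmented multifocal correction of the kind this record describes can be written down compactly: each angular sector of the pupil carries a different defocus power, and the SLM displays the resulting wrapped phase. A hedged sketch (the wavelength, powers, and pure-defocus form are illustrative assumptions; the actual designs also segment zones radially):

```python
import math

def multifocal_phase(x, y, powers, wavelength=550e-9):
    """Wrapped phase (radians) at pupil point (x, y), in metres, for an
    angularly segmented multifocal: sector i adds defocus powers[i] dioptres.
    Defocus OPD is P * r^2 / 2, so the phase is pi * P * r^2 / wavelength."""
    n = len(powers)
    theta = math.atan2(y, x) % (2 * math.pi)   # angular position in the pupil
    zone = int(theta / (2 * math.pi / n))      # which pie slice we are in
    r2 = x * x + y * y
    return (math.pi * powers[zone] * r2 / wavelength) % (2 * math.pi)

# Two sectors: the upper half-pupil is plano, the lower half adds +2 D for near.
print(multifocal_phase(1e-3, 1e-3, [0.0, 2.0]))  # upper sector → 0.0 phase
```

Evaluating this function over the SLM's pixel grid and quantizing to its phase levels would yield the displayed map; the manufactured surfaces realize the same sag profile without wrapping.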

  2. Comparison of vision through surface modulated and spatial light modulated multifocal optics

    PubMed Central

    Vinas, Maria; Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; Benedi-Garcia, Clara; LaVilla, Edward Anthony; Schwiegerling, Jim; Marcos, Susana

    2017-01-01

Spatial light modulators (SLMs) are increasingly used as active elements in adaptive optics (AO) systems to simulate optical corrections, in particular multifocal presbyopic corrections. In this study, we compared vision through lathe-manufactured multi-zone (2-4) multifocal surfaces, segmented angularly and radially, with vision through the same corrections simulated with an SLM in a custom-developed two-active-element AO visual simulator. We found that perceived visual quality measured through the real manufactured surfaces and the SLM-simulated phase maps corresponded closely. Optical simulations predicted the differences in perceived visual quality across designs at far distance, but showed some discrepancies at intermediate and near. PMID:28736655

  3. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  4. Functional improvements following the use of the NVT Vision Rehabilitation program for patients with hemianopia following stroke.

    PubMed

    Hayes, Allison; Chen, Celia S; Clarke, Gayle; Thompson, Annette

    2012-01-01

The incidence of visual deficits following stroke ranges from 20%-68% and has significant impact on activities of daily living. The NVT system is a compensatory visual scanning training program that consists of combined static and mobility training and transfer to activities of daily living. The study aims to evaluate functional changes following the NVT program for people who have homonymous hemianopia (HH) following stroke. Interventional case series of 13 consecutive participants with HH undergoing NVT vision rehabilitation. The primary outcome measure was the number of targets missed on a standardised Mobility Assessment Course (MAC). Other outcome measures included assessment of visual scanning, vision-specific quality of life questionnaires and reading performance. The average percentage of targets (SD) missed on the MAC was 39.6 ± 20.9% before intervention, 27.5 ± 16.3% immediately post intervention and 20.8 ± 15.5% at 3 months post rehabilitation. The study showed a statistically significant improvement in the mobility-related subscales of the National Eye Institute Visual Function Questionnaire (NEI VFQ-25; p = 0.003) and the Veterans Affairs Low Vision Visual Function Questionnaire (VA LVFQ-48; p = 0.036) at 3 months post rehabilitation. The NVT intervention resulted in functional improvements in mobility post rehabilitation, and the training showed improvement in vision-specific quality of life. There is a need for standardised vision therapy intervention, in conjunction with existing rehabilitation services, for patients with stroke and traumatic brain injury.

  5. Machine Vision Within The Framework Of Collective Neural Assemblies

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1990-03-01

    The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions are involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model for a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity and edge enhancement will be discussed for a one-dimensional tissue model.
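
    The population dynamics sketched above can be illustrated numerically. Below is a minimal simulation of a single population modelled as a damped second-order system driven through a saturating (sigmoidal) nonlinearity; the parameter values, the specific coupling, and the function names are illustrative assumptions, not taken from the paper:

```python
import math

def simulate_population(steps=500, dt=0.01, tau=1.0, gain=1.5, stimulus=1.0):
    """Integrate one neural population modelled as a damped second-order
    system, tau^2 * x'' + 2*x' + x = drive, which is critically damped
    for tau = 1 (all values here are illustrative, not from the paper)."""
    x, v = 0.0, 0.0  # population activity and its rate of change
    trace = []
    for _ in range(steps):
        # Sigmoid emulates firing-rate saturation of the aggregate response.
        drive = gain / (1.0 + math.exp(-(stimulus - x)))
        # Second-order dynamics via forward Euler integration.
        a = (drive - 2.0 * v - x) / (tau * tau)
        v += a * dt
        x += v * dt
        trace.append(x)
    return trace

trace = simulate_population()  # activity rises smoothly toward saturation
```

Because the sigmoid bounds the drive, the activity settles at a stable fixed point rather than diverging, which is the qualitative behavior (bounded, noise-tolerant responses) the tissue model is meant to capture.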

  6. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.

  7. Research Opportunities Supporting the Vision for Space Exploration from the Transformation of the Former Microgravity Materials Science Program

    NASA Technical Reports Server (NTRS)

    Clinton, R. G., Jr.; Szofran, Frank; Bassler, Julie A.; Schlagheck, Ronald A.; Cook, Mary Beth

    2005-01-01

    The Microgravity Materials Science Program established a strong research capability through partnerships between NASA and the scientific research community. With the announcement of the vision for space exploration, additional emphasis in strategic materials science areas was necessary. The President's Commission recognized that achieving its exploration objectives would require significant technical innovation, research, and development in focal areas defined as "enabling technologies." Among the 17 enabling technologies identified for initial focus were: advanced structures; advanced power and propulsion; closed-loop life support and habitability; extravehicular activity systems; autonomous systems and robotics; scientific data collection and analysis; biomedical risk mitigation; and planetary in situ resource utilization. Mission success may depend upon use of local resources to fabricate a replacement part to repair a critical system. Future propulsion systems will require materials with a wide range of mechanical, thermophysical, and thermochemical properties, many of them well beyond capabilities of today's materials systems. Materials challenges have also been identified by experts working to develop advanced life support systems. In responding to the vision for space exploration, the Microgravity Materials Science Program aggressively transformed its research portfolio and focused materials science areas of emphasis to include space radiation shielding; in situ fabrication and repair for life support systems; in situ resource utilization for life support consumables; and advanced materials for exploration, including materials science for space propulsion systems and for life support systems. The purpose of this paper is to inform the scientific community of these new research directions and opportunities to utilize their materials science expertise and capabilities to support the vision for space exploration.

  8. Is My World Getting Smaller? The Challenges of Living with Vision Loss

    ERIC Educational Resources Information Center

    Berger, Sue

    2012-01-01

    Introduction: Vision loss influences both basic and instrumental activities of daily living. There is limited information, however, on the relationship between vision loss and leisure activities. The research presented here was part of a larger study that aimed to understand the importance of participation in leisure activities for those with…

  9. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.
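
    As a toy illustration of one such network transformation, the sketch below groups like-valued pixels of a grid into connected regions, converting a raw pixel array into a small relational description (figure vs. ground). The flood-fill grouping is our own stand-in for the mid-level processes the paper describes, not the author's algorithm:

```python
def group_regions(grid):
    """Label 4-connected regions of equal-valued pixels with a flood fill,
    a toy version of converting primary image structure into a more
    abstract relational one (illustrative, not the paper's method)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[None] * cols for _ in range(rows)]
    regions = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is None:
                regions += 1  # start a new region at this seed pixel
                stack = [(r, c)]
                labels[r][c] = regions
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and labels[ny][nx] is None
                                and grid[ny][nx] == grid[y][x]):
                            labels[ny][nx] = regions
                            stack.append((ny, nx))
    return regions, labels

# A bright square (figure) on a dark background (ground): two regions.
image = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
count, labels = group_regions(image)
```

The output is no longer a pixel grid but a set of labeled entities, which is the kind of structure higher-level knowledge models can reason over.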

  10. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate the advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
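
    The "combination of classifiers" advantage the authors discuss can be illustrated with the simplest fusion rule, a majority vote over independent classifier outputs. The labels and data below are hypothetical:

```python
def majority_vote(predictions):
    """Combine labels from several classifiers by per-sample majority vote.
    `predictions` is a list of per-classifier label lists."""
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = [p[i] for p in predictions]
        # Pick the label with the highest vote count for this sample.
        fused.append(max(set(votes), key=votes.count))
    return fused

# Three illustrative classifiers that disagree on a few samples.
clf_a = ["cat", "dog", "dog", "cat"]
clf_b = ["cat", "cat", "dog", "cat"]
clf_c = ["dog", "dog", "dog", "cat"]
fused = majority_vote([clf_a, clf_b, clf_c])
```

When the individual classifiers make independent errors, the fused decision can be correct even where any single classifier is wrong, which is the intuition behind combining classifiers.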

  11. Development of a micromachined epiretinal vision prosthesis

    NASA Astrophysics Data System (ADS)

    Stieglitz, Thomas

    2009-12-01

    Microsystems engineering offers the tools to develop highly sophisticated miniaturized implants to interface with the nervous system. One challenging application field is the development of neural prostheses to restore vision in persons who have become blind through photoreceptor degeneration due to retinitis pigmentosa. The fundamental work that has been done in one approach is presented here. An epiretinal vision prosthesis has been developed that allows hybrid integration of electronics on one part of a thin and flexible substrate. Polyimide as a substrate material was proven to be non-cytotoxic. Non-hermetic encapsulation with parylene C was stable for at least 3 months in vivo. Chronic animal experiments proved spatially selective cortical activation after epiretinal stimulation with a 25-channel implant. Research results have been transferred successfully to companies that currently work on medical device approval of these retinal vision prostheses in Europe and the USA.

  12. Vision Based Localization in Urban Environments

    NASA Technical Reports Server (NTRS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-01-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we derived the a priori map manually from non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
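
    The particle-filter localization component can be sketched in one dimension as a generic predict/reweight/resample cycle. This is a textbook sketch under assumed Gaussian noise, with made-up motion and measurement data, not the JPL implementation:

```python
import math
import random

random.seed(0)  # deterministic run for this sketch

def pf_step(particles, weights, motion, z, meas_sigma=0.5, jitter=0.1):
    """One predict/reweight/resample cycle of a 1-D particle filter."""
    # Predict: apply the odometry estimate plus process noise.
    particles = [p + motion + random.gauss(0.0, jitter) for p in particles]
    # Update: weight each particle by a Gaussian measurement likelihood.
    weights = [w * math.exp(-((p - z) ** 2) / (2 * meas_sigma ** 2))
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

n = 500
particles = [random.uniform(0.0, 10.0) for _ in range(n)]  # unknown start
weights = [1.0 / n] * n
# The robot advances 1.0 per step; range fixes report positions 3, 4, 5.
for z in (3.0, 4.0, 5.0):
    particles, weights = pf_step(particles, weights, 1.0, z)
estimate = sum(p * w for p, w in zip(particles, weights))
```

Starting from a uniform prior (multiple possible locations at once), the particle cloud collapses onto the trajectory consistent with the measurements, which is exactly the multi-hypothesis behavior the abstract describes.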

  13. Hybrid vision activities at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
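
    A typical example of such a non-Cartesian geometry is the log-polar transform, in which scaling and rotation about the image center become plain row and column shifts. The nearest-neighbor resampler below is our own minimal sketch of that idea, not JSC code; the ring image and grid sizes are illustrative:

```python
import math

def log_polar_sample(image, center, n_radii=8, n_angles=16):
    """Resample a square image onto a log-polar grid with nearest-neighbor
    sampling. Scaling about the center becomes a row shift, rotation a
    column shift (illustrative sketch only)."""
    h, w = len(image), len(image[0])
    cy, cx = center
    max_r = min(cy, cx, h - 1 - cy, w - 1 - cx)
    out = []
    for i in range(n_radii):
        # Radii spaced exponentially so that uniform scaling is a row shift.
        r = math.exp(math.log(max_r) * (i + 1) / n_radii)
        row = []
        for j in range(n_angles):
            a = 2 * math.pi * j / n_angles
            y = min(max(int(round(cy + r * math.sin(a))), 0), h - 1)
            x = min(max(int(round(cx + r * math.cos(a))), 0), w - 1)
            row.append(image[y][x])
        out.append(row)
    return out

# A 9x9 image with a bright ring of radius 3 around the center: in
# log-polar coordinates the ring maps to a single bright row.
img = [[1 if abs(math.hypot(y - 4, x - 4) - 3) < 0.5 else 0
        for x in range(9)] for y in range(9)]
lp = log_polar_sample(img, (4, 4))
```

Because magnification and rotation become shifts in this representation, a shift-invariant matcher (such as an optical correlator) gains invariance to those distortions, which is the shift-invariance idea described above.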

  14. Going Below Minimums: The Efficacy of Display Enhanced/Synthetic Vision Fusion for Go-Around Decisions during Non-Normal Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.

    2007-01-01

    The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.

  15. Use of a sensitive EnVision +-based detection system for Western blotting: avoidance of streptavidin binding to endogenous biotin and biotin-containing proteins in kidney and other tissues.

    PubMed

    Banks, Rosamonde E; Craven, Rachel A; Harnden, Patricia A; Selby, Peter J

    2003-04-01

    Western blotting remains a central technique in confirming identities of proteins, their quantitation and analysis of various isoforms. The biotin-avidin/streptavidin system is often used as an amplification step to increase sensitivity but in some tissues such as kidney, "nonspecific" interactions may be a problem due to high levels of endogenous biotin-containing proteins. The EnVision system, developed for immunohistochemical applications, relies on binding of a polymeric conjugate consisting of up to 100 peroxidase molecules and 20 secondary antibody molecules linked directly to an activated dextran backbone, to the primary antibody. This study demonstrates that it is also a viable and sensitive alternative detection system in Western blotting applications.

  16. Real-time object tracking based on scale-invariant features employing bio-inspired hardware.

    PubMed

    Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya

    2016-09-01

    We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.
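
    The multi-scale band-pass filtering at the heart of SIFT, which the MOS-based resistive network approximates in analog, can be sketched in one dimension as a difference-of-Gaussians (DoG) followed by extremum detection. Kernel sizes, the threshold, and the test signal below are illustrative choices, not the paper's parameters:

```python
import math

def gaussian_blur_1d(signal, sigma):
    """Blur a 1-D signal with a sampled, normalized Gaussian kernel
    (edge samples clamped)."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc / norm)
    return out

def dog_extrema(signal, sigma1=1.0, sigma2=2.0, thresh=0.05):
    """Difference-of-Gaussians response and its local extrema; the
    band-pass step that scale-space feature detection builds on."""
    dog = [a - b for a, b in zip(gaussian_blur_1d(signal, sigma1),
                                 gaussian_blur_1d(signal, sigma2))]
    peaks = [i for i in range(1, len(dog) - 1)
             if abs(dog[i]) > thresh
             and abs(dog[i]) >= abs(dog[i - 1])
             and abs(dog[i]) >= abs(dog[i + 1])]
    return dog, peaks

# A step edge at index 10 produces DoG extrema flanking the edge.
signal = [0.0] * 10 + [1.0] * 10
dog, peaks = dog_extrema(signal)
```

The whole-image parallelism the authors exploit comes from the fact that every output sample of such a filter is independent, so a resistive network or an FPGA pipeline can compute them all at once.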

  17. VLSI chips for vision-based vehicle guidance

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1994-02-01

    Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
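
    The core operation such a stereo-vision chip parallelizes is block matching between the two camera images. A minimal sum-of-absolute-differences (SAD) version on synthetic 1-D scanlines is sketched below; window and disparity parameters are illustrative, not the chip's design values:

```python
def sad_disparity(left, right, window=1, max_disp=4):
    """Sum-of-absolute-differences block matching along 1-D scanlines:
    for each left-image pixel, find the disparity whose right-image
    window matches best (illustrative sketch of the SAD principle)."""
    width = len(left)
    disparities = []
    for x in range(window, width - window):
        best_d, best_cost = 0, float("inf")
        # Only consider disparities that keep the window inside the line.
        for d in range(min(max_disp + 1, x - window + 1)):
            cost = sum(abs(left[x + k] - right[x + k - d])
                       for k in range(-window, window + 1))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# The right scanline is the left one shifted by 2 pixels (disparity = 2).
left_line = [0, 0, 10, 20, 30, 20, 10, 0, 0, 0]
right_line = [10, 20, 30, 20, 10, 0, 0, 0, 0, 0]
disp = sad_disparity(left_line, right_line)
```

Each pixel's search is independent of the others, which is why this computation maps so naturally onto parallel VLSI hardware.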

  18. Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System

    PubMed Central

    Ajina, Sara; Bridge, Holly

    2017-01-01

    Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337

  19. Retinal transcriptome sequencing sheds light on the adaptation to nocturnal and diurnal lifestyles in raptors.

    PubMed

    Wu, Yonghua; Hadly, Elizabeth A; Teng, Wenjia; Hao, Yuyang; Liang, Wei; Liu, Yu; Wang, Haitao

    2016-09-20

    Owls (Strigiformes) represent a fascinating group of birds that are the ecological night-time counterparts to diurnal raptors (Accipitriformes). The nocturnality of owls, unusual within birds, has favored an exceptional visual system that is highly tuned for hunting at night, yet the molecular basis for this adaptation is lacking. Here, using a comparative evolutionary analysis of 120 vision genes obtained by retinal transcriptome sequencing, we found strong positive selection for low-light vision genes in owls, which contributes to their remarkable nocturnal vision. Not surprisingly, we detected gene loss of the violet/ultraviolet-sensitive opsin (SWS1) in all owls we studied, but two other color vision genes, the red-sensitive LWS and the blue-sensitive SWS2, were found to be under strong positive selection, which may be linked to the spectral tunings of these genes toward maximizing photon absorption in crepuscular conditions. We also detected the only other positively selected genes associated with motion detection in falcons and positively selected genes associated with bright-light vision and eye protection in other diurnal raptors (Accipitriformes). Our results suggest that the adaptive evolution of vision genes reflects differentiated activity times and distinct hunting behaviors.

  20. Retinal transcriptome sequencing sheds light on the adaptation to nocturnal and diurnal lifestyles in raptors

    PubMed Central

    Wu, Yonghua; Hadly, Elizabeth A.; Teng, Wenjia; Hao, Yuyang; Liang, Wei; Liu, Yu; Wang, Haitao

    2016-01-01

    Owls (Strigiformes) represent a fascinating group of birds that are the ecological night-time counterparts to diurnal raptors (Accipitriformes). The nocturnality of owls, unusual within birds, has favored an exceptional visual system that is highly tuned for hunting at night, yet the molecular basis for this adaptation is lacking. Here, using a comparative evolutionary analysis of 120 vision genes obtained by retinal transcriptome sequencing, we found strong positive selection for low-light vision genes in owls, which contributes to their remarkable nocturnal vision. Not surprisingly, we detected gene loss of the violet/ultraviolet-sensitive opsin (SWS1) in all owls we studied, but two other color vision genes, the red-sensitive LWS and the blue-sensitive SWS2, were found to be under strong positive selection, which may be linked to the spectral tunings of these genes toward maximizing photon absorption in crepuscular conditions. We also detected the only other positively selected genes associated with motion detection in falcons and positively selected genes associated with bright-light vision and eye protection in other diurnal raptors (Accipitriformes). Our results suggest that the adaptive evolution of vision genes reflects differentiated activity times and distinct hunting behaviors. PMID:27645106

  1. Human Factors Engineering as a System in the Vision for Exploration

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Smith, Danielle; Holden, Kritina

    2006-01-01

    In order to accomplish NASA's Vision for Exploration, while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering processes. That project identified the key phases of the current NASA design lifecycle, and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps).
For example, medical operations scenarios have been generated for lunar habitation in order to identify HSI requirements for the lunar communications architecture. Throughout these ConOps exercises, HFE personnel are testing various tools and methodologies that have been identified in the literature. A key part of the effort is the identification of optimal processes, methods, and tools for these early development phase activities, such as ConOps, requirements development, and early conceptual design. An overview of the activities completed thus far, as well as the tools and methods investigated will be presented.

  2. Testing and evaluation of a wearable augmented reality system for natural outdoor environments

    NASA Astrophysics Data System (ADS)

    Roberts, David; Menozzi, Alberico; Cook, James; Sherrill, Todd; Snarski, Stephen; Russler, Pat; Clipp, Brian; Karl, Robert; Wenger, Eric; Bennett, Matthew; Mauger, Jennifer; Church, William; Towles, Herman; MacCabe, Stephen; Webb, Jeffrey; Lupo, Jasper; Frahm, Jan-Michael; Dunn, Enrique; Leslie, Christopher; Welch, Greg

    2013-05-01

    This paper describes performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive `heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier's view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10mrad) using these vision-based methods.
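
    The inertial/vision fusion described above can be illustrated with the simplest such estimator, a complementary filter that high-passes the integrated gyro rates and low-passes the absolute vision-based heading fixes. The gain, time step, and sensor traces below are illustrative assumptions, not the ULTRA-Vis design:

```python
def complementary_filter(gyro_rates, vision_headings, dt=0.1, alpha=0.98):
    """Fuse a drifting gyro heading with absolute vision-based heading
    fixes (all units in degrees; parameters are illustrative)."""
    heading = vision_headings[0]  # initialize from the absolute sensor
    history = []
    for rate, vision in zip(gyro_rates, vision_headings):
        # High-pass the gyro integration, low-pass the absolute fix.
        heading = alpha * (heading + rate * dt) + (1 - alpha) * vision
        history.append(heading)
    return history

# A gyro with a +0.5 deg/s bias while the true heading stays at 90 deg.
gyro = [0.5] * 300           # measured rate (pure bias), deg/s
vision = [90.0] * 300        # terrain-based absolute heading fixes, deg
trace = complementary_filter(gyro, vision)
```

Integrating the biased gyro alone would drift to 105 degrees over these 300 steps; with the absolute fixes blended in, the error stays bounded at a few degrees, which is why absolute orientation measurements from terrain horizon geometry matter so much for geo-registration accuracy.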

  3. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  4. Intelligent surgical laser system configuration and software implementation

    NASA Astrophysics Data System (ADS)

    Hsueh, Chi-Fu T.; Bille, Josef F.

    1992-06-01

    An intelligent surgical laser system, which can help the ophthalmologist achieve higher precision and control during procedures, has been developed by ISL as model CLS 4001. In addition to the laser and laser delivery system, the system is equipped with a vision system (IPU), robotics motion control (MCU), and a tracking closed-loop system (ETS) that tracks the eye in three dimensions (X, Y and Z). The initial patient setup is computer controlled with guidance from the vision system. The tracking system is automatically engaged when the target is in position. A multi-level tracking system was developed by integrating the vision and tracking systems, which has been able to maintain the laser beam precisely on target. The capabilities of automatic eye setup and tracking in three dimensions provide improved accuracy and measurement repeatability. The system is operated through the Surgical Control Unit (SCU). The SCU communicates with the IPU and the MCU through both Ethernet and RS232. Various scanning patterns (e.g., line, curve, circle, spiral) can be selected with given parameters. When a warning is activated, a voice message is played that normally requires a panel touch acknowledgement. The reliability of the system is ensured at three levels: (1) hardware, (2) software real-time monitoring, and (3) user. The system is currently under clinical validation.
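
    The selectable scanning patterns (line, circle, spiral) are simple parametric curves. The generator below is a hypothetical sketch of how such patterns can be produced from a few parameters; it is unrelated to the CLS 4001 internals, and the function and parameter names are our own:

```python
import math

def scan_pattern(kind, points=100, radius=1.0, turns=3.0):
    """Generate (x, y) scan coordinates for a few parametric pattern
    types (a hypothetical sketch, not the commercial system's code)."""
    coords = []
    for i in range(points):
        t = i / (points - 1)  # normalized parameter in [0, 1]
        if kind == "line":
            coords.append((t * radius, 0.0))
        elif kind == "circle":
            a = 2 * math.pi * t
            coords.append((radius * math.cos(a), radius * math.sin(a)))
        elif kind == "spiral":
            a = 2 * math.pi * turns * t
            r = radius * t  # radius grows linearly with the angle
            coords.append((r * math.cos(a), r * math.sin(a)))
        else:
            raise ValueError("unknown pattern: " + kind)
    return coords

circle = scan_pattern("circle")
spiral = scan_pattern("spiral")
```

In a real system each coordinate pair would be handed to the motion control loop, with the tracker's eye-position estimate added so the pattern stays registered to the moving eye.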

  5. Effects of visibility and types of the ground surface on the muscle activities of the vastus medialis oblique and vastus lateralis

    PubMed Central

    Park, Jeong-ki; Lee, Dong-yeop; Kim, Jin-Seop; Hong, Ji-Heon; You, Jae-Ho; Park, In-mo

    2015-01-01

    [Purpose] The purpose of this study was to compare the effects of visibility and types of ground surface (stable and unstable) during the performance of squats on the muscle activities of the vastus medialis oblique (VMO) and vastus lateralis (VL). [Subjects and Methods] The subjects were 25 healthy adults in their 20s. They performed squats under four conditions: stable ground surface (SGS) with vision-allowed; unstable ground surface (UGS) with vision-allowed; SGS with vision-blocked; and UGS with vision-blocked. The different conditions were performed on different days. Surface electromyogram (EMG) values were recorded. [Results] The most significant difference in the activity of the VMO and VL was observed when the subjects performed squats on the UGS, with their vision blocked. [Conclusion] For the selective activation of the VMO, performing squats on an UGS was effective, and it was more effective when subjects’ vision was blocked. PMID:26356407

  6. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  7. A Framework for People Re-Identification in Multi-Camera Surveillance Systems

    ERIC Educational Resources Information Center

    Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud

    2017-01-01

    People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…

  8. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
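
The core of spike-based stereo correspondence, as described above, is matching events from two event cameras by temporal coincidence. The following is a hypothetical sketch of that idea (not the authors' network): two events occurring nearly simultaneously at horizontally shifted pixels are treated as a stereo match, and their pixel offset is the disparity. All parameter values are illustrative.

```python
def match_events(left_events, right_events, max_dt=1e-3, max_disparity=10):
    """Pair left/right events (x, t) whose timestamps coincide within max_dt."""
    matches = []
    for xl, tl in left_events:
        # candidate right events: close in time and within the disparity range
        candidates = [(xr, tr) for xr, tr in right_events
                      if abs(tl - tr) <= max_dt and 0 <= xl - xr <= max_disparity]
        if candidates:
            # pick the temporally closest candidate (winner-take-all)
            xr, tr = min(candidates, key=lambda e: abs(tl - e[1]))
            matches.append((xl, xr, xl - xr))  # (left x, right x, disparity)
    return matches

left = [(12, 0.0100), (20, 0.0200)]
right = [(9, 0.0101), (18, 0.0202)]
print(match_events(left, right))  # [(12, 9, 3), (20, 18, 2)]
```

In the spiking-network formulation, this winner-take-all coincidence detection is carried out by neurons whose membrane dynamics implement the temporal window implicitly, rather than by an explicit search.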

  9. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  10. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  11. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  12. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats.

    PubMed

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-10-12

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal's retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.

  13. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats

    PubMed Central

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-01-01

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal’s retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals. PMID:27731346

  14. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats

    NASA Astrophysics Data System (ADS)

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-10-01

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal’s retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.

  15. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Compared with a conventional binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can accurately reconstruct the 3-D morphology of objects in continuous motion. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, which are matched and reconstructed in 3-D. Because the linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, this work is of significance for measuring the 3-D morphology of moving objects.
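
The reconstruction step in any calibrated binocular system ultimately reduces to triangulation. As an illustrative sketch (details assumed, not taken from the paper): with focal length f in pixels, baseline B, and measured disparity d in pixels, depth is Z = f · B / d.

```python
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    """Triangulate depth (mm) from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_mm / disparity_px

# e.g. f = 1200 px, baseline = 60 mm, disparity = 24 px
print(depth_from_disparity(1200, 60, 24))  # 3000.0 mm
```

Note the reciprocal relationship: halving the disparity doubles the estimated depth, which is why disparity measurement accuracy dominates the depth error at long range.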

  16. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  17. Visioning in the brain: an fMRI study of inspirational coaching and mentoring.

    PubMed

    Jack, Anthony I; Boyatzis, Richard E; Khawaja, Masud S; Passarelli, Angela M; Leckie, Regina L

    2013-01-01

    Effective coaching and mentoring is crucial to the success of individuals and organizations, yet relatively little is known about its neural underpinnings. Coaching and mentoring to the Positive Emotional Attractor (PEA) emphasizes compassion for the individual's hopes and dreams and has been shown to enhance a behavioral change. In contrast, coaching to the Negative Emotional Attractor (NEA), by focusing on externally defined criteria for success and the individual's weaknesses in relation to them, does not show sustained change. We used fMRI to measure BOLD responses associated with these two coaching styles. We hypothesized that PEA coaching would be associated with increased global visual processing and with engagement of the parasympathetic nervous system (PNS), while the NEA coaching would involve greater engagement of the sympathetic nervous system (SNS). Regions showing more activity in PEA conditions included the lateral occipital cortex, superior temporal cortex, medial parietal, subgenual cingulate, nucleus accumbens, and left lateral prefrontal cortex. We relate these activations to visioning, PNS activity, and positive affect. Regions showing more activity in NEA conditions included medial prefrontal regions and right lateral prefrontal cortex. We relate these activations to SNS activity, self-trait attribution and negative affect.

  18. ROVER: A prototype active vision system

    NASA Astrophysics Data System (ADS)

    Coombs, David J.; Marsh, Brian D.

    1987-08-01

    The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
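
Rover's executive organization, as described, can be sketched with a priority queue of jobs: modules submit work, and the executive repeatedly runs whichever job is judged most effective. A minimal illustrative sketch (module names and priorities are invented for the example):

```python
import heapq

class Executive:
    """Toy executive: dispatches submitted jobs in priority order."""
    def __init__(self):
        self._queue = []   # min-heap of (priority, sequence, job, args)
        self._seq = 0      # tie-breaker so equal priorities stay FIFO

    def submit(self, priority, job, *args):
        heapq.heappush(self._queue, (priority, self._seq, job, args))
        self._seq += 1

    def run(self):
        results = []
        while self._queue:
            _, _, job, args = heapq.heappop(self._queue)
            results.append(job(*args))
        return results

ex = Executive()
ex.submit(2, lambda: "predict ball position")
ex.submit(1, lambda: "grab CCD frame")       # lower number runs first
ex.submit(3, lambda: "update color model")
print(ex.run())  # ['grab CCD frame', 'predict ball position', 'update color model']
```

The extensibility claim in the abstract maps directly onto this structure: adding a capability means adding one more module that submits jobs, plus an executive rule for when to prioritize them.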

  19. Mapping Research Activities and Technologies for Sustainability and Environmental Studies--A Case Study at University Level

    ERIC Educational Resources Information Center

    Hara, Keishiro; Uwasu, Michinori; Kurimoto, Shuji; Yamanaka, Shinsuke; Umeda, Yasushi; Shimoda, Yoshiyuki

    2013-01-01

    Systemic understanding of potential research activities and available technology seeds at university level is an essential condition to promote interdisciplinary and vision-driven collaboration in an attempt to cope with complex sustainability and environmental problems. Nonetheless, any such practices have been hardly conducted at universities…

  20. Biomechanical versus Inertial Information: Stable Individual Differences in Perception of Self-Rotation

    ERIC Educational Resources Information Center

    Bruggeman, Hugo; Piuneu, Vadzim S.; Rieser, John J.; Pick, Herbert L., Jr.

    2009-01-01

    When turning without vision or audition, people tend to perceive their locomotion as a change in heading relative to objects in the remembered surroundings. Such perception of self-rotation depends on sensitivity to information for movement from biomechanical activity of the locomotor system or from inertial activation of the vestibular and…

  1. An active role for machine learning in drug development

    PubMed Central

    Murphy, Robert F.

    2014-01-01

    Due to the complexity of biological systems, cutting-edge machine-learning methods will be critical for future drug development. In particular, machine-vision methods to extract detailed information from imaging assays and active-learning methods to guide experimentation will be required to overcome the dimensionality problem in drug development. PMID:21587249

  2. Latency Requirements for Head-Worn Display S/EVS Applications

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Trey Arthur, J. J., III; Williams, Steven P.

    2004-01-01

    NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas (flight control, flight simulation, and virtual reality) are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.

  3. Latency requirements for head-worn display S/EVS applications

    NASA Astrophysics Data System (ADS)

    Bailey, Randall E.; Arthur, Jarvis J., III; Williams, Steven P.

    2004-08-01

    NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas - flight control, flight simulation, and virtual reality - are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.

  4. Discovery regarding visual neuron adaptation applicable to robot use

    NASA Astrophysics Data System (ADS)

    Korepanov, S.

    1985-06-01

    Scientists of the USSR Academy of Sciences' Institute of Higher Nervous Activity and Neurophysiology discovered a mechanism of light adaptation by organs of vision to changes in the brightness of light. Studies of the reaction of the visual center of the cerebral cortex showed that neurons in it are arranged in different ways: some, which are called classic neurons, have a fairly stable spatial orientation, while that of others is variable. It was found that vision operates chiefly on the basis of classic neurons in all conditions of illumination. Neurons of the second type are activated during sharp fluctuations of illumination. These neurons momentarily assume the orientation of the classic ones, thus serving as a kind of back-up for the primary system of the brain's visual center. Results of these studies will aid medical specialists in their practical work, as well as developers of image-recognition systems for new-generation robots.

  5. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary and fixed wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  6. Evaluation of 5 different labeled polymer immunohistochemical detection systems.

    PubMed

    Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A

    2010-01-01

    Immunohistochemical staining is important for diagnosis and therapeutic decision making but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA) were tested using 12 different, widely used mouse and rabbit primary antibodies, detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde fixed paraffin embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies, but showed low sensitivity for rabbit antibodies. Due to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained false negatively. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time in the immunohistochemical detection systems currently in use.
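
The digital image analysis used for sensitivity comparison can be illustrated in miniature: staining strength is scored as the difference in mean gray-scale value between a stained section and its negative control. The function and all pixel values below are invented for illustration and are not data from the study.

```python
def mean_gray(pixels):
    """Mean gray-scale value of a list of pixel intensities."""
    return sum(pixels) / len(pixels)

def staining_delta(stained, control):
    """Staining score: mean gray-value difference vs. the negative control
    (here, higher value = stronger stain, for simplicity)."""
    return mean_gray(stained) - mean_gray(control)

envision_flex_plus = [180, 175, 190, 185]   # illustrative pixel values
real_envision      = [172, 168, 178, 174]
control            = [150, 152, 148, 150]

print(staining_delta(envision_flex_plus, control))  # 32.5
print(staining_delta(real_envision, control))       # 23.0
```

Differences like the 8.3 to 41.8 gray-scale values reported in the abstract are of exactly this kind: per-system mean intensities compared at a fixed primary antibody dilution.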

  7. Real Time Target Tracking Using Dedicated Vision Hardware

    NASA Astrophysics Data System (ADS)

    Kambies, Keith; Walsh, Peter

    1988-03-01

    This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL) which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. The vision system is inside of the loop closure of the robot tracking system, therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State of the art VME based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so there is special emphasis placed on this topic in the paper.
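
Closing a position loop around a robot with the vision sensor inside the loop can be sketched as a simple discrete-time controller (this is an illustrative toy, not the RADL implementation): each frame, the measured target position drives a proportional correction, which is why vision throughput and latency directly limit the usable loop gain.

```python
def track(target_positions, kp=0.5):
    """Simulate proportional pursuit of a vision-measured target position.

    One entry in target_positions corresponds to one vision frame; the
    robot moves a fraction kp of the measured error each cycle.
    """
    robot = 0.0
    trace = []
    for measured in target_positions:
        error = measured - robot
        robot += kp * error          # proportional command
        trace.append(round(robot, 3))
    return trace

print(track([10.0, 10.0, 10.0, 10.0]))  # [5.0, 7.5, 8.75, 9.375]
```

Any added frames of delay between measurement and command make this loop act on stale error, forcing kp down and slowing convergence, which is the motivation for the dedicated frame-rate vision hardware described above.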

  8. Final Report for Geometric Observers and Particle Filtering for Controlled Active Vision

    DTIC Science & Technology

    2016-12-15

    Final report (covering 1 Sep 2006 to 9 May 2011; report dated 15 Dec 2016) for "Geometric Observers and Particle Filtering for Controlled Active Vision" (49414-NS.1), by Allen R. Tannenbaum, School of Electrical and Computer Engineering, Georgia Institute of Technology. Recovered table-of-contents fragments include sections on conformal area minimizing flows and particle filters.

  9. The Effect of Gender and Level of Vision on the Physical Activity Level of Children and Adolescents with Visual Impairment

    ERIC Educational Resources Information Center

    Aslan, Ummuhan Bas; Calik, Bilge Basakci; Kitis, Ali

    2012-01-01

    This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between…

  10. Stereo and photometric image sequence interpretation for detecting negative obstacles using active gaze control and performing an autonomous jink

    NASA Astrophysics Data System (ADS)

    Hofmann, Ulrich; Siedersberger, Karl-Heinz

    2003-09-01

    Driving cross-country, the detection and state estimation relative to negative obstacles like ditches and creeks is mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision System (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and framerates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately for calculating a safe driving trajectory. Especially, ditches are often very extended, so due to the restricted field of vision of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurements of image features the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission and the capabilities available in the system and of their limitations. The central decision unit reacts dependent on the result of situation assessment by starting, parameterizing or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results will be shown for driving in a typical off-road scenario.
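
One of the geometric cues named in the abstract, a range (disparity) discontinuity at a ditch, can be sketched in a few lines. This is a hedged illustration, not the EMS-Vision algorithm: a ditch candidate is flagged wherever the disparity profile along an image row jumps sharply, since the ditch bottom is suddenly farther away than the surrounding ground. Threshold and data are invented.

```python
def find_discontinuities(disparity_row, jump=5):
    """Return indices where adjacent disparities differ by more than `jump`."""
    return [i for i in range(1, len(disparity_row))
            if abs(disparity_row[i] - disparity_row[i - 1]) > jump]

# smooth ground, then a sudden drop in disparity (farther surface: ditch bottom)
row = [40, 39, 39, 38, 22, 21, 21, 37, 38]
print(find_discontinuities(row))  # [4, 7]: near and far edges of the ditch
```

In the full system this geometric test is fused with the photometric cue (soil vs. vegetation gray values) so that either channel can confirm or veto the other's obstacle hypothesis.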

  11. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  12. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
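
The benefit of spatio-temporal summation in dim light follows from Poisson photon statistics: pooling N pixels over T frames raises the signal-to-noise ratio by a factor of sqrt(N·T), at the cost of spatial and temporal resolution. A small worked sketch (numbers are illustrative, not measurements from the study):

```python
import math

def snr_after_summation(mean_photons_per_pixel_per_frame, n_pixels, n_frames):
    """SNR of a Poisson photon count after pooling pixels and frames.

    For a Poisson process, SNR = mean / sqrt(mean) = sqrt(mean).
    """
    total = mean_photons_per_pixel_per_frame * n_pixels * n_frames
    return total / math.sqrt(total)

base = snr_after_summation(4, 1, 1)      # single pixel, single frame
pooled = snr_after_summation(4, 4, 4)    # sum 4 pixels over 4 frames
print(base, pooled)  # 2.0 8.0 -> a sqrt(16) = 4x improvement
```

The supralinear combination reported in the abstract means the measured gain in the hawkmoth exceeds what independent spatial and temporal pooling of this simple kind would predict.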

  13. Active-Vision Control Systems for Complex Adversarial 3-D Environments

    DTIC Science & Technology

    2009-03-01

    Control Systems MURI Final Report 36 51. D. Nain, S. Haker, A. Bobick, A. Tannenbaum, "Multiscale 3D shape representation and segmentation using...Conference, August 2008. 99. L. Zhu, Y. Yang, S. Haker, and A. Tannenbaum, "An image morphing technique based on optimal mass preserving mapping," IEEE

  14. EVA Communications Avionics and Informatics

    NASA Technical Reports Server (NTRS)

    Carek, David Andrew

    2005-01-01

    The Glenn Research Center is investigating and developing technologies for communications, avionics, and information systems that will significantly enhance extravehicular activity (EVA) capabilities to support the Vision for Space Exploration. Several of the ongoing research and development efforts are described within this presentation, including system requirements formulation, technology development efforts, trade studies, and operational concept demonstrations.

  15. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been used in a wide variety of applications. Classification and recognition of a specific object by a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability of a vision system to capture and process images efficiently is very important for any intelligent system, such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that commonly appear on a football field. The focus is on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
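
    The filter-segment-edge-detect chain described above can be sketched in a few lines (a hypothetical minimal pipeline using only NumPy, not the authors' implementation): a box blur suppresses sensor noise, a global threshold separates a bright obstacle from the darker grass background, and a Sobel operator outlines the segmented region.

    ```python
    import numpy as np

    def detect_obstacle(img):
        """Minimal filter -> segment -> edge pipeline on a grayscale image (0..255)."""
        img = img.astype(float)
        h, w = img.shape
        # 1. Filtering: 3x3 box blur to suppress noise.
        k = np.ones((3, 3)) / 9.0
        pad = np.pad(img, 1, mode="edge")
        blur = sum(pad[i:i + h, j:j + w] * k[i, j]
                   for i in range(3) for j in range(3))
        # 2. Segmentation: global threshold separates obstacle from background.
        mask = blur > blur.mean()
        # 3. Edge detection: Sobel gradient magnitude on the binary mask.
        sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
        m = np.pad(mask.astype(float), 1, mode="edge")
        gx = sum(m[i:i + h, j:j + w] * sx[i, j]
                 for i in range(3) for j in range(3))
        gy = sum(m[i:i + h, j:j + w] * sx.T[i, j]
                 for i in range(3) for j in range(3))
        edges = np.hypot(gx, gy) > 0
        return mask, edges

    # Synthetic 32x32 "field" (dark grass) with a bright square "obstacle".
    img = np.full((32, 32), 40.0)
    img[10:20, 10:20] = 200.0
    mask, edges = detect_obstacle(img)
    ```

    A real mower would of course need color, texture, and size cues to tell a ball from a shoe, but the stage ordering (filter, segment, detect edges) mirrors the pipeline the abstract lists.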

  16. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALVs) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system: using a neurally based computing substrate, it completes all necessary visual tasks in real time.

  17. Reading performance with low-vision aids and vision-related quality of life after macular translocation surgery in patients with age-related macular degeneration.

    PubMed

    Nguyen, Nhung X; Besch, Dorothea; Bartz-Schmidt, Karl; Gelisken, Faik; Trauzettel-Klosinski, Susanne

    2007-12-01

    The aim of the present study was to evaluate the power of magnification required, reading performance with low-vision aids and vision-related quality of life with reference to reading ability and ability to carry out day-to-day activities in patients after macular translocation. This study included 15 patients who had undergone macular translocation with 360-degree peripheral retinectomy. The mean length of follow-up was 19.2 +/- 10.8 months (median 11 months). At the final examination, the impact of visual impairment on reading ability and quality of life was assessed according to a modified 9-item questionnaire in conjunction with a comprehensive clinical examination, which included assessment of best corrected visual acuity (BCVA), the magnification power required for reading, use of low-vision aids and reading speed. Patients rated the extent to which low vision restricted their ability to read and participate in other activities that affect quality of life. Responses were scored on a scale of 1.0 (optimum self-evaluation) to 5.0 (very poor). In the operated eye, overall mean postoperative BCVA (distance) was not significantly better than mean preoperative BCVA (0.11 +/- 0.06 and 0.15 +/- 0.08, respectively; p = 0.53). However, 53% of patients reported a subjective increase in visual function after treatment. At the final visit, the mean magnification required was 7.7x +/- 6.7. A total of 60% of patients needed optical magnifiers for reading, and in 40% of patients closed-circuit TV systems were necessary. All patients were able to read newspaper print using adapted low-vision aids at a mean reading speed of 71 +/- 40 words per minute. Mean self-reported scores were 3.2 +/- 1.1 for reading, 2.5 +/- 0.7 for day-to-day activities and 2.7 +/- 3.0 for outdoor walking and using steps or stairs. 
Patients' levels of dependency were significantly correlated with scores for reading (p = 0.01), day-to-day activities (p < 0.001) and outdoor walking and using steps (p = 0.001). The evaluation of self-reported visual function and vision-related quality of life in patients after macular translocation is necessary to obtain detailed information on treatment effects. Our results indicated improvement in patients' subjective evaluations of visual function, without significant improvement in visual acuity. The postoperative clinical benefits of treatment coincide with subjective benefits in terms of reading ability, quality of life and patient satisfaction. Our study confirms the importance and efficiency of visual rehabilitation with aids for low vision after surgery.

  18. Monovision techniques for telerobots

    NASA Technical Reports Server (NTRS)

    Goode, P. W.; Carnils, K.

    1987-01-01

    The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.

  19. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, drive linkages, and motors and transmissions, are treated separately.

  20. Dissemination of health technology assessments: identifying the visions guiding an evolving policy innovation in Canada.

    PubMed

    Lehoux, Pascale; Denis, Jean-Louis; Tailliez, Stéphanie; Hivon, Myriam

    2005-08-01

    Health technology assessment (HTA) has received increasing support over the past twenty years in both North America and Europe. The justification for this field of policy-oriented research is that evidence about the efficacy, safety, and cost-effectiveness of technology should contribute to decision and policy making. However, concerns about the ability of HTA producers to increase the use of their findings by decision makers have been expressed. Although HTA practitioners have recognized that dissemination activities need to be intensified, why and how particular approaches should be adopted is still under debate. Using an institutional theory perspective, this article examines HTA as a means of implementing knowledge-based change within health care systems. It presents the results of a case study on the dissemination strategies of six Canadian HTA agencies. Chief executive officers and executives (n = 11), evaluators (n = 19), and communications staff (n = 10) from these agencies were interviewed. Our results indicate that the target audience of HTA is frequently limited to policy makers, that three conflicting visions of HTA dissemination coexist, that active dissemination strategies have only occasionally been applied, and that little attention has been paid to the management of diverging views about the value of health technology. Our discussion explores the strengths, limitations, and trade-offs associated with the three visions. Further efforts should be deployed within agencies to better articulate a shared vision and to devise dissemination strategies that are consistent with this vision.

  1. Ranibizumab for the treatment of neovascular AMD.

    PubMed

    Kaiser, P K; Do, D V

    2007-03-01

    Age-related macular degeneration (AMD) is the leading cause of adult blindness among individuals aged 50 and older in the Western world, with the neovascular form of AMD responsible for the most severe and rapid visual loss. Although monotherapy with currently available treatments can slow the rate of loss of vision in eyes with neovascular AMD, they do not significantly improve vision. Vascular endothelial growth factor-A (VEGF-A) plays a critical role in the pathogenesis of neovascular AMD, and ranibizumab is a promising new treatment that targets all VEGF-A isoforms and their biologically active degradation products. Clinical trials have reported that ranibizumab treatment resulted in greater proportions of patients achieving a < 15 letter loss of visual acuity and improved vision at 12 and 24 months than control groups. The incidence of serious ocular and systemic adverse events was low in all ranibizumab trials to date. Currently, ranibizumab is the only treatment for neovascular AMD to demonstrate significant improvement in vision for many patients and represents a major advance in treating neovascular AMD.

  2. Design of an efficient framework for fast prototyping of customized human-computer interfaces and virtual environments for rehabilitation.

    PubMed

    Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe

    2013-06-01

    Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services regarding rehabilitation activities. The algorithmic processes involved during gesture recognition activity, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients, during functional recovery. Pilot examples of designed applications and preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Modeling the target acquisition performance of active imaging systems

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Jacobs, Eddie L.; Halford, Carl E.; Vollmerhausen, Richard; Tofsted, David H.

    2007-04-01

    Recent developments in active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.
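
    The coherent speckle the model treats as an additional noise source has a well-known statistical signature (the sketch below illustrates the statistics, not the NVESD model itself): for fully developed speckle, intensity is exponentially distributed, so single-frame speckle contrast (std/mean) is near 1, and averaging N independent speckle realizations reduces the contrast by roughly sqrt(N), one common mitigation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Fully developed speckle: intensity is exponentially distributed,
    # so speckle contrast C = std/mean is ~1 for a single frame.
    n_pix, n_frames = 100_000, 16
    frames = rng.exponential(scale=1.0, size=(n_frames, n_pix))

    def contrast(x):
        return x.std() / x.mean()

    c_single = contrast(frames[0])          # ~1.0
    c_avg = contrast(frames.mean(axis=0))   # ~1/sqrt(16) = 0.25
    ```

    This sqrt(N) reduction is why frame averaging appears in trade studies like those the abstract mentions: it buys contrast at the cost of integration time.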

  4. Modeling the target acquisition performance of active imaging systems.

    PubMed

    Espinola, Richard L; Jacobs, Eddie L; Halford, Carl E; Vollmerhausen, Richard; Tofsted, David H

    2007-04-02

    Recent developments in active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.

  5. Eyes Wide Shut: the impact of dim-light vision on neural investment in marine teleosts.

    PubMed

    Iglesias, Teresa L; Dornburg, Alex; Warren, Dan L; Wainwright, Peter C; Schmitz, Lars; Economo, Evan P

    2018-05-28

    Understanding how organismal design evolves in response to environmental challenges is a central goal of evolutionary biology. In particular, assessing the extent to which environmental requirements drive general design features among distantly related groups is a major research question. The visual system is a critical sensory apparatus that evolves in response to changing light regimes. In vertebrates, the optic tectum is the primary visual processing centre of the brain, and yet it is unclear how or whether this structure evolves while lineages adapt to changes in photic environment. On one hand, dim-light adaptation is associated with larger eyes and enhanced light-gathering power that could require larger information processing capacity. On the other hand, dim-light vision may evolve to maximize light sensitivity at the cost of acuity and colour sensitivity, which could require less processing power. Here, we use X-ray microtomography and phylogenetic comparative methods to examine the relationships between diel activity pattern, optic morphology, trophic guild and investment in the optic tectum across the largest radiation of vertebrates, teleost fishes. We find that despite driving the evolution of larger eyes, enhancement of the capacity for dim-light vision generally is accompanied by a decrease in investment in the optic tectum. These findings underscore the importance of considering diel activity patterns in comparative studies and demonstrate how vision plays a role in brain evolution, illuminating common design principles of the vertebrate visual system. © 2018 European Society For Evolutionary Biology.

  6. Relationship between functional vision and balance and mobility performance in community-dwelling older adults.

    PubMed

    Aartolahti, Eeva; Häkkinen, Arja; Lönnroos, Eija; Kautiainen, Hannu; Sulkava, Raimo; Hartikainen, Sirpa

    2013-10-01

    Vision is an important prerequisite for balance control and mobility. The role of objectively measured visual functions has been studied previously, but less is known about the associations of functional vision, which refers to the self-perceived, vision-based ability to perform daily activities. The aim of the study was to investigate the relationship between functional vision and balance and mobility performance in a community-based sample of older adults. This study is part of the Geriatric Multidisciplinary Strategy for the Good Care of the Elderly (GeMS) project. Participants (n = 576) aged 76-100 years (mean age 81 years, 70% women) were interviewed using a seven-item functional vision questionnaire (VF-7). Balance and mobility were measured by the Berg balance scale (BBS), timed up and go (TUG), chair stand test, and maximal walking speed. In addition, self-reported fear of falling, depressive symptoms (15-item Geriatric Depression Scale), cognition (Mini-Mental State Examination) and physical activity (Grimby) were assessed. In the analysis, participants were classified into poor, moderate, or good functional vision groups. The poor functional vision group (n = 95) had more comorbidities, depressed mood, cognitive decline, fear of falling, and reduced physical activity compared to participants with moderate (n = 222) or good functional vision (n = 259). Participants with poor functional vision performed worse on all balance and mobility tests. After adjusting for gender, age, chronic conditions, and cognition, the linearity remained statistically significant between functional vision and BBS (p = 0.013), TUG (p = 0.010), and maximal walking speed (p = 0.008), but not between functional vision and chair stand (p = 0.069). Poor functional vision is related to weaker balance and mobility performance in community-dwelling older adults. 
This highlights the importance of widespread assessment of health, including functional vision, to prevent balance impairment and maintain independent mobility among the older population.

  7. Barriers to accessing low vision services.

    PubMed

    Pollard, Tamara L; Simpson, John A; Lamoureux, Ecosse L; Keeffe, Jill E

    2003-07-01

    To investigate barriers to accessing low vision services in Australia. Adults with a vision impairment (<6/12 in the better eye and/or significant visual field defect), who were current patients at the Royal Victorian Eye and Ear Hospital (RVEEH), were interviewed. The questions investigated self-perceived vision difficulties, duration of vision loss and satisfaction with vision and also examined issues of awareness of low vision services and referral to services. Focus groups were also conducted with vision impaired (<6/12 in the better eye) patients from the RVEEH, listeners of the Radio for the Print Handicapped and peer workers at Vision Australia Foundation. The discussions were recorded and transcribed. The questionnaire revealed that referral to low vision services was associated with a greater degree of vision loss (p = 0.002) and a greater self-perception of low vision (p = 0.005) but that referral was not associated with satisfaction (p = 0.144) or difficulties related to vision (p = 0.169). Participants with mild and moderate vision impairment each reported similar levels of difficulties with daily activities and satisfaction with their vision (p > 0.05). However, there was a significant difference in the level of difficulties experienced with daily activities between those with mild-moderate and severe vision impairment (p < 0.05). The participants of the focus groups identified barriers to accessing low vision services related to awareness of services among the general public and eye care professionals, understanding of low vision and the services available, acceptance of low vision, the referral process, and transport. In addition to the expected difficulties with lack of awareness of services by people with low vision, many people do not understand what the services provide and do not identify themselves as having low vision. 
Knowledge of these barriers, from the perspective of people with low vision, can now be used to guide the development and content of future health-promotion campaigns.

  8. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

    RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica...01 summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica

  9. Vehicle-based vision sensors for intelligent highway systems

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1989-09-01

    This paper describes a vision system, based on an ASIC (Application-Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
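
    The range measurement underlying such a system follows the textbook pinhole-stereo relation Z = f*B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity between corresponding image points; the numbers below are illustrative, not the paper's parameters.

    ```python
    def stereo_distance(focal_px, baseline_m, disparity_px):
        """Textbook pinhole-stereo range: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # Illustrative numbers: a 700 px focal length and 0.5 m baseline put a
    # vehicle showing 10 px of disparity at 35 m ahead.
    z = stereo_distance(700, 0.5, 10)  # -> 35.0
    ```

    Because range scales inversely with disparity, a fixed subpixel matching error translates into a range error that grows quadratically with distance, one motivation for wider baselines and a third camera to reject false matches.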

  10. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  11. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  12. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system, we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data that can enhance the estimation of sex from osteological material.

  13. A Leadership Perspective on a Shared Vision for Healthcare.

    PubMed

    Kitch, Tracy

    2017-01-01

    Our country's recent negotiations for a new Health Accord have shone light on the importance of more accessible and better home care. The direction being taken on health funding investments has sent a strong message about healthcare system redesign. It is time to design a healthcare system that moves us away from a hospital-focused model to one that is more effective, integrated and sustainable and one that places a greater emphasis on primary care, community care and home care. The authors of the lead paper (Sharkey and Lefebre 2017) provide their vision for people-powered care and explore the opportunity for nursing leaders to draw upon the unique expertise and insights of home care nursing as a strategic lever to bring about real health system transformation across all settings. Understanding what really matters at the beginning of the healthcare journey and honouring the tenets of partnership and empowerment as a universal starting point to optimize health outcomes along the continuum of care present a very important opportunity. However, as nursing leaders in the health system change, it is important that we extend the conversation beyond one setting. It is essential that as leaders, we seek to design models of care delivery that achieve a shared vision, focused on seamless coordinated care across the continuum that is person-centred. Bringing about real system change requires us to think differently and consider the role of nursing across all settings, collaboratively co-designing so that our collective skills and knowledge can work within a complementary framework. Focusing our leadership efforts on enhancing integration across healthcare settings will ensure that nurses can be important leaders and active decision-makers in health system change. A shared vision for healthcare requires all of us to look beyond the usual practices and structures, hospitals and institutional walls.

  14. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF stage is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
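
    The two-stage split described above can be sketched as a toy inspector (a hypothetical illustration, not the SMV system): an offline learning stage builds a per-pixel template and tolerance from known-good boards, and an online stage flags boards whose deviation from the template exceeds that tolerance.

    ```python
    import numpy as np

    class TemplateInspector:
        """Toy two-stage split: learn a golden template offline, flag deviations online."""

        def learn(self, good_samples, k=3.0):
            # Offline stage: per-pixel mean and tolerance from known-good boards.
            stack = np.stack(good_samples).astype(float)
            self.mean = stack.mean(axis=0)
            self.tol = k * stack.std(axis=0) + 1.0
            return self

        def inspect(self, image, max_bad_frac=0.05):
            # Online stage: pass if few pixels fall outside the learned tolerance.
            bad = np.abs(image.astype(float) - self.mean) > self.tol
            return bool(bad.mean() <= max_bad_frac)

    rng = np.random.default_rng(2)
    good = [rng.normal(128, 2, (16, 16)) for _ in range(20)]
    insp = TemplateInspector().learn(good)

    ok_board = rng.normal(128, 2, (16, 16))
    bad_board = ok_board.copy()
    bad_board[4:8, 4:8] = 255.0          # simulated defect blob
    ```

    Real systems learn far richer features (component shapes, solder profiles, display patterns), but the division of labor is the same: expensive learning once, cheap comparison per board.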

  15. 1996 Andrew Pattullo lecture. A vision of the role of health administration education in the transformation of the American health system.

    PubMed

    Sigmond, R M

    1997-01-01

    In summary, it is my conviction that each of the AUPHA programs would be well advised to re-discover a shared vision of health care as public service, caring for communities as well as for patients and enrolled populations. I am also convinced that each program should be shaping a shared vision of the role of the academic program in providing intellectual leadership in this respect. These processes can be designed to have an impact on all of the activities of the program, starting with low-hanging fruit and moving higher with growing confidence and commitment. The key task for AUPHA as an organization right now is to re-examine its own vision as a basis for providing strong leadership to the field. This involves promoting visioning as a management tool, helping to sharpen the accreditation requirements in this respect, and carrying out the recommendation of the Pew Health Professions Commission to bring the academic and practitioner worlds into closer synch. The talent and the zeal are evident. What is required now is the will to make changes. Continued transformation of the American health system and of the academic programs in health administration are both inevitable. Managing the transformation is more exciting, more productive, more professionally satisfying and more fun than just surviving or not surviving at all. Managing a transformation is not easy, especially in academia. Just watching it happen is not nearly as satisfying or as much fun.

  16. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

    A low-power high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in an SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second of computing power, a two-orders-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  17. Dichromatic vision in a fruit bat with diurnal proclivities: the Samoan flying fox (Pteropus samoensis).

    PubMed

    Melin, Amanda D; Danosi, Christina F; McCracken, Gary F; Dominy, Nathaniel J

    2014-12-01

    A nocturnal bottleneck during mammalian evolution left a majority of species with two cone opsins, or dichromatic color vision. Primate trichromatic vision arose from the duplication and divergence of an X-linked opsin gene, and has long been attributed to tandem shifts from nocturnality to diurnality and from insectivory to frugivory. Opsin gene variation and at least one duplication event exist in the order Chiroptera, suggesting that trichromatic vision could evolve under favorable ecological conditions. The natural history of the Samoan flying fox (Pteropus samoensis) meets these conditions: it is a large bat that consumes nectar and fruit and demonstrates strong diurnal proclivities. It also possesses a visual system that is strikingly similar to that of primates. To explore the potential for opsin gene duplication and divergence in this species, we sequenced the opsin genes of 11 individuals (19 X-chromosomes) from three South Pacific islands. Our results indicate the uniform presence of two opsins with predicted peak sensitivities of ca. 360 and 553 nm. This result fails to support a causal link between diurnal frugivory and trichromatic vision, although it remains plausible that the diurnal activities of P. samoensis have insufficient antiquity to favor opsin gene renovation.

  18. Visual Sensor Based Abnormal Event Detection with Moving Shadow Removal in Home Healthcare Applications

    PubMed Central

    Lee, Young-Sook; Chung, Wan-Young

    2012-01-01

    Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Visual sensor-based abnormal event detection, using shape feature variation and 3-D trajectory, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and falling aside, from normal activities. PMID:22368486
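
    A common family of shadow-removal rules, of the kind this abstract alludes to, classifies a foreground pixel as cast shadow when its brightness drops relative to the background model while its chromaticity stays nearly constant. The sketch below is a generic illustration with hypothetical thresholds, not the authors' specific algorithm:

```python
def is_shadow(pixel, background, lum_lo=0.4, lum_hi=0.95, chroma_tol=0.05):
    """Classify an RGB foreground pixel as cast shadow: brightness drops
    relative to the background while chromaticity is nearly unchanged."""
    def lum(p):
        return (p[0] + p[1] + p[2]) / 3.0

    def chroma(p):
        s = sum(p) or 1  # avoid division by zero for a black pixel
        return (p[0] / s, p[1] / s)

    ratio = lum(pixel) / max(lum(background), 1e-6)
    dc = max(abs(a - b) for a, b in zip(chroma(pixel), chroma(background)))
    # Shadow: darker than background (but not too dark) with matching color.
    return lum_lo <= ratio <= lum_hi and dc <= chroma_tol

print(is_shadow((70, 70, 70), (120, 120, 120)))   # → True  (darkened, same hue)
print(is_shadow((200, 40, 40), (120, 120, 120)))  # → False (different hue: object)
```

    Pixels flagged as shadow would be removed from the foreground mask before the shape-feature and trajectory analysis described above.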

  19. Application of aircraft navigation sensors to enhanced vision systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.

    1993-01-01

    In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.

  20. Artificial Intelligence: Underlying Assumptions and Basic Objectives.

    ERIC Educational Resources Information Center

    Cercone, Nick; McCalla, Gordon

    1984-01-01

    Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…

  1. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users

    PubMed Central

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2014-01-01

    Background Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population in order to evaluate the impact of new vision restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional vision ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. Methods The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional vision tasks for observation of performance, and a case narrative summary. Results were analyzed to determine whether the interview questions and functional vision tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, twenty-six subjects were assessed with the FLORA. Seven different evaluators administered the assessment. Results All 14 interview questions were asked. All 35 functional vision tasks were selected for evaluation at least once, with an average of 20 subjects evaluated for each test item. All four rating options -- impossible (33%), difficult (23%), moderate (24%), and easy (19%) -- were used by the evaluators. Evaluators also judged how much vision the subjects used to complete the various tasks; on average, subjects were observed using vision alone 75% of the time with the System ON and 29% with the System OFF. Conclusion The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as a functional vision and well-being assessment tool. PMID:25675964

  2. The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home.

    PubMed

    Mihailidis, Alex; Carmichael, Brent; Boger, Jennifer

    2004-09-01

    This paper discusses the use of computer vision in pervasive healthcare systems, specifically in the design of a sensing agent for an intelligent environment that assists older adults with dementia during an activity of daily living. An overview of the techniques applied in this particular example is provided, along with results from preliminary trials completed using the new sensing agent. A discussion of the results obtained to date is presented, including technical and social issues that remain for the advancement and acceptance of this type of technology within pervasive healthcare.

  3. Comparative system identification of flower tracking performance in three hawkmoth species reveals adaptations for dim light vision.

    PubMed

    Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon

    2017-04-05

    Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also has consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. However, much less explored are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model. However, in several cases, they could be explained through the addition of a second model parameter, a simple scaling term that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).
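
    The delay-plus-scaling model described above can be illustrated with a toy system-identification loop: synthesize a tracking response as a delayed, scaled copy of the flower stimulus, then recover the delay as the lag that maximizes cross-correlation. The gain and delay values below are illustrative, not the paper's fitted parameters:

```python
import math

def simulate_tracking(stimulus, delay_steps, gain):
    """Tracking response = gain * stimulus, shifted by a (luminance-dependent) delay."""
    return [gain * (stimulus[t - delay_steps] if t >= delay_steps else 0.0)
            for t in range(len(stimulus))]

def estimate_delay(stimulus, response, max_lag=20):
    """Recover the tracking delay as the lag maximizing cross-correlation."""
    def xcorr(lag):
        return sum(stimulus[t - lag] * response[t]
                   for t in range(lag, len(response)))
    return max(range(max_lag + 1), key=xcorr)

# A sinusoidal 'flower' stimulus tracked with a 7-step delay and 0.6 gain.
stim = [math.sin(2 * math.pi * 0.05 * t) for t in range(200)]
resp = simulate_tracking(stim, delay_steps=7, gain=0.6)
print(estimate_delay(stim, resp))  # → 7
```

    In the paper's framing, the delay would grow as luminance falls, while the gain term differs between species.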

  4. Night vision: changing the way we drive

    NASA Astrophysics Data System (ADS)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.

  5. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  6. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Treesearch

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  7. Reservoir Maintenance and Development Task Report for the DOE Geothermal Technologies Office GeoVision Study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowry, Thomas Stephen; Finger, John T.; Carrigan, Charles R.

    This report documents the key findings from the Reservoir Maintenance and Development (RM&D) Task of the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) Geothermal Vision Study (GeoVision Study). The GeoVision Study had the objective of conducting analyses of future geothermal growth based on sets of current and future geothermal technology developments. The RM&D Task is one of seven tasks within the GeoVision Study, the others being Exploration and Confirmation, Potential to Penetration, Institutional Market Barriers, Environmental and Social Impacts, Thermal Applications, and Hybrid Systems. The full set of findings and the details of the GeoVision Study can be found in the final GeoVision Study report on the DOE-GTO website. As applied here, RM&D refers to the activities associated with developing, exploiting, and maintaining a known geothermal resource. It assumes that the site has already been vetted and that the resource has been evaluated to be of sufficient quality to move towards full-scale development. It also assumes that the resource is to be developed for power generation, as opposed to low-temperature or direct-use applications. This document presents the key factors influencing RM&D from both a technological and operational standpoint and provides a baseline of its current state. It also looks forward to describe areas of research and development that must be pursued if the development of geothermal energy is to reach its full potential.

  8. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. The analysis of rice seed reflectance curves showed that the optimal wavelength of the light source for discriminating diseased seeds from normal rice seeds in the monochromatic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed using a computer vision system, an adjustable color machine vision system was developed. The machine vision system with a 20 mm to 25 mm lens extender produced close-up images that made it easy to recognize characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithms yielded better results under optimal conditions for quality inspection of rice seed; specifically, the image processing can resolve details such as fine fissures with the machine vision system.

  9. Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects

    NASA Technical Reports Server (NTRS)

    Montes, Leticia; Bowers, David; Lumia, Ron

    1998-01-01

    This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, un-modeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision system and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment which provides remote simulation and control of equipment.
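
    The two-process split described above (the vision system determines object size and location; the robotics side configures the hand and executes the pick) can be sketched as below. The binary-image centroid/extent features and the gripper-aperture rule are hypothetical simplifications, not the actual Cognex/Puma560 interface:

```python
def locate_object(image):
    """Stand-in for the vision stage: centroid and extent of nonzero pixels
    in a binary image (list of rows)."""
    pts = [(r, c) for r, row in enumerate(image)
           for c, v in enumerate(row) if v]
    rows = [p[0] for p in pts]
    cols = [p[1] for p in pts]
    centroid = (sum(rows) / len(pts), sum(cols) / len(pts))
    size = (max(rows) - min(rows) + 1, max(cols) - min(cols) + 1)
    return centroid, size

def plan_pick(centroid, size, margin=2):
    """Robotics stage: choose gripper aperture from object size, approach at centroid."""
    aperture = max(size) + margin
    return {"approach": centroid, "aperture": aperture}

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
centroid, size = locate_object(img)
print(plan_pick(centroid, size))  # → {'approach': (1.5, 1.5), 'aperture': 4}
```

    A real system would additionally calibrate image coordinates to the robot frame before commanding the arm.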

  10. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    PubMed Central

    Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to confront the sharp increase in poaching activity in the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  11. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers.

    PubMed

    Olivares-Mendez, Miguel A; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-12-12

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to confront the sharp increase in poaching activity in the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing.

  12. Factors Affecting Readiness for Low Vision Interventions in Older Adults.

    PubMed

    Mohler, Amanda Jean; Neufeld, Peggy; Perlmutter, Monica S

    2015-01-01

    We sought to identify factors that facilitate and inhibit readiness for low vision interventions in people with vision loss, conceptualized as readiness for change in the way they perform daily activities. We conducted 10 semistructured interviews with older adults with low vision and analyzed the results using grounded theory concepts. Themes involving factors that facilitated change included desire to maintain or regain independence, positive attitude, and presence of formal social support. Themes related to barriers to change included limited knowledge of options and activity not a priority. Themes that acted as both barriers and facilitators were informal social support and community resources. This study provides insight into readiness to make changes in behavior and environment in older adults with vision loss. Study findings can help occupational therapy practitioners practice client-centered care more effectively and promote safe and satisfying daily living activity performance in this population. Copyright © 2015 by the American Occupational Therapy Association, Inc.

  13. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    PubMed

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  14. An Inquiry-Based Vision Science Activity for Graduate Students and Postdoctoral Research Scientists

    NASA Astrophysics Data System (ADS)

    Putnam, N. M.; Maness, H. L.; Rossi, E. A.; Hunter, J. J.

    2010-12-01

    The vision science activity was originally designed for the 2007 Center for Adaptive Optics (CfAO) Summer School. Participants were graduate students, postdoctoral researchers, and professionals studying the basics of adaptive optics. The majority were working in fields outside vision science, mainly astronomy and engineering. The primary goal of the activity was to give participants first-hand experience with the use of a wavefront sensor designed for clinical measurement of the aberrations of the human eye and to demonstrate how the resulting wavefront data generated from these measurements can be used to assess optical quality. A secondary goal was to examine the role wavefront measurements play in the investigation of vision-related scientific questions. In 2008, the activity was expanded to include a new section emphasizing defocus and astigmatism and vision testing/correction in a broad sense. As many of the participants were future post-secondary educators, a final goal of the activity was to highlight the inquiry-based approach as a distinct and effective alternative to traditional laboratory exercises. Participants worked in groups throughout the activity and formative assessment by a facilitator (instructor) was used to ensure that participants made progress toward the content goals. At the close of the activity, participants gave short presentations about their work to the whole group, the major points of which were referenced in a facilitator-led synthesis lecture. We discuss highlights and limitations of the vision science activity in its current format (2008 and 2009 summer schools) and make recommendations for its improvement and adaptation to different audiences.
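
    The link between wavefront measurements and optical quality that the activity demonstrates can be illustrated numerically: the RMS wavefront error summarizes a set of Zernike coefficients, and the defocus coefficient converts to a spherical-equivalent refractive error via the standard relation M = -4*sqrt(3)*c / r^2 (coefficient in micrometers, pupil radius in millimeters). The coefficient values below are made up for illustration:

```python
import math

def rms_wavefront_error(zernike_coeffs_um):
    """RMS wavefront error (microns) from ANSI-ordered Zernike coefficients,
    excluding piston and the two tilt modes (first three entries)."""
    higher = zernike_coeffs_um[3:]
    return math.sqrt(sum(c * c for c in higher))

def defocus_to_diopters(c20_um, pupil_radius_mm):
    """Spherical equivalent from the Zernike defocus coefficient:
    M = -4*sqrt(3)*c / r^2 (c in microns, r in mm, result in diopters)."""
    return -4.0 * math.sqrt(3) * c20_um / (pupil_radius_mm ** 2)

# Hypothetical measurement: defocus 3 um, astigmatism 4 um over a 3 mm pupil radius.
print(rms_wavefront_error([0.0, 0.0, 0.0, 3.0, 4.0]))  # → 5.0
print(round(defocus_to_diopters(3.0, 3.0), 2))         # → -2.31
```
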

  15. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, in computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has had no effective solution within the frameworks of either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filters, context encoding of visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.

  16. Design and Validation of Exoskeleton Actuated by Soft Modules toward Neurorehabilitation-Vision-Based Control for Precise Reaching Motion of Upper Limb.

    PubMed

    Oguntosin, Victoria W; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J; Kawamura, Sadao; Hayashi, Yoshikatsu

    2017-01-01

    We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by Soft Modules (EAsoftM). Integrating the 3D-printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that can assist planar reaching motion using a vision-based control law. The EAsoftM can support reaching motion with compliance realized by the soft materials and pneumatic actuation. In addition, a vision-based control law is proposed for precise control over the target reaching motion at the millimeter scale. Soft actuators for rehabilitation exercise have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM presents one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. The proposed integrated system would be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems are required to be customized for individuals with specific motor impairments.
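
    A vision-based control law for millimeter-scale reaching, of the kind described above, can be sketched as a proportional loop on the camera-measured position error. The gain, tolerance, and coordinates below are illustrative; the actual EAsoftM controller must also account for the pneumatic actuation dynamics:

```python
def reach_with_visual_feedback(start, target, kp=0.5, tol_mm=1.0, max_steps=50):
    """Close the loop on camera-measured position error: each step moves the
    end-effector a fraction (kp) of the remaining error until within tol_mm."""
    pos = list(start)
    for step in range(max_steps):
        err = [t - p for t, p in zip(target, pos)]
        if max(abs(e) for e in err) < tol_mm:
            return pos, step  # reached target within tolerance
        pos = [p + kp * e for p, e in zip(pos, err)]
    return pos, max_steps

# Reach from the origin to a target 100 mm away in x and 50 mm in y.
final, steps = reach_with_visual_feedback([0.0, 0.0], [100.0, 50.0])
print(steps)  # → 7 (error halves each step until under 1 mm)
```
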

  17. Design and Validation of Exoskeleton Actuated by Soft Modules toward Neurorehabilitation—Vision-Based Control for Precise Reaching Motion of Upper Limb

    PubMed Central

    Oguntosin, Victoria W.; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J.; Kawamura, Sadao; Hayashi, Yoshikatsu

    2017-01-01

    We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by Soft Modules (EAsoftM). Integrating the 3D-printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that can assist planar reaching motion using a vision-based control law. The EAsoftM can support reaching motion with compliance realized by the soft materials and pneumatic actuation. In addition, a vision-based control law is proposed for precise control over the target reaching motion at the millimeter scale. Soft actuators for rehabilitation exercise have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM presents one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. The proposed integrated system would be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems are required to be customized for individuals with specific motor impairments. PMID:28736514

  18. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  19. Low Vision Aids in Glaucoma

    PubMed Central

    Khanna, Anjani

    2012-01-01

    A large number of glaucoma patients suffer from vision impairments that qualify as low vision. Additional difficulties associated with low vision include problems with glare, lighting, and contrast, which can make daily activities extremely challenging. This article elaborates on how low vision aids can help with various tasks that visually impaired glaucoma patients need to do each day, to take care of themselves and to lead an independent life. PMID:27990068

  20. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will recognize targets reliably.

  1. A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment

    NASA Astrophysics Data System (ADS)

    Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella

In this paper, we propose a novel approach that uses interactive virtual environment technology in Vision Restoration Therapy for visual field loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvement is seen in patients. A highly immersive and interactive virtual environment will allow the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye, and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.

  2. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  3. Vision Problems: How Teachers Can Help.

    ERIC Educational Resources Information Center

    Desrochers, Joyce

    1999-01-01

    Describes common vision problems in young children such as myopia, strabismus, and amblyopia. Presents suggestions for helping children with vision problems in the early childhood classroom and in outdoor activities. Lists related resources and children's books. (KB)

  4. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3-dimension. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  5. Intensity measurement of automotive headlamps using a photometric vision system

    NASA Astrophysics Data System (ADS)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive head lamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  6. Nanosatellite Maneuver Planning for Point Cloud Generation With a Rangefinder

    DTIC Science & Technology

    2015-06-05

aided active vision systems [11], dense stereo [12], and TriDAR [13]. However, these systems are unsuitable for a nanosatellite system from power, size...command profiles as well as improving the fidelity of gap detection with better filtering methods for background objects. For example, attitude...application of a single beam laser rangefinder (LRF) to point cloud generation, shape detection, and shape reconstruction for a space-based space

  7. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

Stereo vision technology uses two or more cameras to recover 3D information about the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge the pavement conditions within the field of view and measure the obstacles on the road. In this paper, stereo vision techniques for obstacle avoidance measurement in autonomous vehicles are studied, and the key techniques are analyzed and discussed. The system hardware is built and the software is debugged, and the measurement performance is illustrated with measured data. Experiments show that the 3D structure within the field of view can be reconstructed effectively by stereo vision and provide a basis for judging pavement conditions. Compared with the navigation radar used in unmanned vehicle measurement systems, the stereo vision system has the advantages of low cost, long working distance, and so on, and it has good application prospects.
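
    The depth recovery at the core of such a system reduces to triangulation on a rectified stereo pair; the following minimal sketch uses illustrative focal length, baseline, and disparity values, not figures from the paper.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 35 px disparity.
z = depth_from_disparity(700.0, 0.12, 35.0)
```

    Depth resolution degrades quadratically with distance, which is one reason such systems are benchmarked against radar at longer ranges.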

  8. Self calibration of the stereo vision system of the Chang'e-3 lunar rover based on the bundle block adjustment

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan

    2017-06-01

The Chang'e-3 was the first lunar soft-landing probe of China, composed of a lander and a lunar rover. The Chang'e-3 successfully landed in the northwest of Mare Imbrium on December 14, 2013. After landing, the lunar rover carried out movement, imaging, and geological survey tasks. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism, and an inertial measurement unit (IMU). The Navcam system was composed of two cameras with fixed focal lengths. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEMs) of the surrounding region, and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field can be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system change after the launch, the orbital maneuvers, the braking, and the landing. Therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on bundle block adjustment is proposed in this paper. Bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. With the proposed method, the stereo vision system can be self-calibrated in the unknown lunar environment and all parameters can be estimated simultaneously. An experiment was conducted in a ground lunar-simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method, and the weighted least-squares method. The results showed that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was used in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.
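
    The adjustment idea can be illustrated with a toy reprojection least-squares fit. In this sketch only a pinhole camera's focal length and principal point are estimated from known 3-D points and their observed projections; in that toy model u = f*X/Z + cx, v = f*Y/Z + cy is linear in (f, cx, cy), so a single linear solve suffices, whereas the rover's real bundle block adjustment is nonlinear and also estimates poses and mast joint parameters. All numbers below are synthetic.

```python
import numpy as np

# Known 3-D points (camera frame) and a "true" camera used to synthesize
# observations; the adjustment then recovers the camera from those rays.
pts3d = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 3.0],
                  [0.2, -0.2, 2.5], [0.0, 0.3, 4.0]])
f_true, cx_true, cy_true = 800.0, 320.0, 240.0

u = f_true * pts3d[:, 0] / pts3d[:, 2] + cx_true   # observed pixel columns
v = f_true * pts3d[:, 1] / pts3d[:, 2] + cy_true   # observed pixel rows

# One linear equation per observed coordinate, unknowns [f, cx, cy].
A_u = np.column_stack((pts3d[:, 0] / pts3d[:, 2], np.ones(4), np.zeros(4)))
A_v = np.column_stack((pts3d[:, 1] / pts3d[:, 2], np.zeros(4), np.ones(4)))
A = np.vstack((A_u, A_v))
b = np.concatenate((u, v))
params, *_ = np.linalg.lstsq(A, b, rcond=None)     # recovered [f, cx, cy]
```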

  9. An embedded vision system for an unmanned four-rotor helicopter

    NASA Astrophysics Data System (ADS)

    Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James

    2006-10-01

In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughter board, attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has given very good results in its performance of a number of real-time robotic vision algorithms.

  10. Predicting Visual Disability in Glaucoma With Combinations of Vision Measures.

    PubMed

    Lin, Stephanie; Mihailovic, Aleksandra; West, Sheila K; Johnson, Chris A; Friedman, David S; Kong, Xiangrong; Ramulu, Pradeep Y

    2018-04-01

We characterized vision in glaucoma using seven visual measures, with the goals of determining the dimensionality of vision, and how many and which visual measures best model activity limitation. We analyzed cross-sectional data from 150 older adults with glaucoma, collecting seven visual measures: integrated visual field (VF) sensitivity, visual acuity, contrast sensitivity (CS), area under the log CS function, color vision, stereoacuity, and visual acuity with noise. Principal component analysis was used to examine the dimensionality of vision. Multivariable regression models using one, two, or three vision tests (and nonvisual predictors) were compared to determine which was best associated with Rasch-analyzed Glaucoma Quality of Life-15 (GQL-15) person measure scores. The participants had a mean age of 70.2 and IVF sensitivity of 26.6 dB, suggesting mild-to-moderate glaucoma. All seven vision measures loaded similarly onto the first principal component (eigenvectors, 0.220-0.442), which explained 56.9% of the variance in vision scores. In models for GQL scores, the maximum adjusted R² values obtained were 0.263, 0.296, and 0.301 when using one, two, and three vision tests in the models, respectively, though several models in each category had similar adjusted R² values. All three of the best-performing models contained CS. Vision in glaucoma is a multidimensional construct that can be described by several variably correlated vision measures. Measuring more than two vision tests does not substantially improve models for activity limitation. A sufficient description of disability in glaucoma can be obtained using one to two vision tests, especially VF and CS.
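
    The dimensionality analysis here is a standard PCA on standardized test scores. A minimal sketch on simulated data (not the study's measurements) shows how the share of variance on the first principal component quantifies how one-dimensional a battery of seven tests is:

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(150, 1))                 # shared "vision" factor
loadings = rng.normal(size=(1, 7))                 # how each test reflects it
scores = latent @ loadings + 0.5 * rng.normal(size=(150, 7))

# Standardize each test, then eigendecompose the covariance matrix.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(z, rowvar=False))[::-1]   # descending
explained = eigvals[0] / eigvals.sum()             # PC1's share of variance
```

    A first component explaining well over 1/7 of the variance, as in the study's 56.9%, indicates the seven measures largely track a single underlying construct.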

  11. Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft

    DTIC Science & Technology

    2017-06-01

International Journal of Computer Science and Network Security 7 no. 3: 112–117. Accessed April 7, 2017. http://www.sciencedirect.com/science/article/pii...the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system. This thesis examines the...integration into an autonomous aircraft control system. 14. SUBJECT TERMS autonomous systems, auto-land, computer vision, image processing

  12. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensor systems. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outdoor test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.
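
    The line-following step can be sketched as proportional steering from the tracked blob coordinates; the gain and pixel values below are hypothetical, not the Bearcat's actual tracker output or controller constants.

```python
def steering_correction(left_blob_x, right_blob_x, image_width, gain=0.01):
    """Proportional steering from lane-marker blob x-coordinates.

    Positive output steers right, negative steers left.
    """
    lane_center = (left_blob_x + right_blob_x) / 2.0
    error = lane_center - image_width / 2.0   # lane offset from camera axis
    return gain * error

centered = steering_correction(270, 370, 640)  # lane centered: no correction
drifted = steering_correction(300, 400, 640)   # lane 30 px right of center
```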

  13. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, together with a mechanism for building systems from this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
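
    A software reference for the kind of window-based operation such a module library would implement might look like the 3×3 mean filter below. This is only an illustrative software model; the paper's modules are FPGA hardware.

```python
import numpy as np

def window_mean_3x3(img):
    """Slide a 3x3 window over the image and emit each window's mean."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = img[y:y + 3, x:x + 3].mean()
    return out

img = np.arange(25, dtype=float).reshape(5, 5)   # 5x5 intensity ramp
res = window_mean_3x3(img)                       # valid region shrinks to 3x3
```

    In hardware the same neighborhood access pattern is typically realized with line buffers and a shift-register window, which is why a common generic interface across window operators is feasible.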

  14. Multi-arm multilateral haptics-based immersive tele-robotic system (HITS) for improvised explosive device disposal

    NASA Astrophysics Data System (ADS)

    Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir

    2014-06-01

    This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.

  15. Public health nurses' vision of their future reflects changing paradigms.

    PubMed

    Clarke, H F; Beddome, G; Whyte, N B

    1993-01-01

    Health care over the past decade has undergone important changes that have implications for public health nursing. The focus of public health has expanded, as a result of the World Health Organization establishing the goal of "Health for All by the Year 2000," with its strategy of primary health care. To be active participants in this expansion, public health nurses must be more explicit about their current contributions to health care systems; develop nursing frameworks consistent with the systems' changing goals; and articulate their visions of the future. It is clear that the medical paradigm of health care services needs to change to one of primary health care. Based on results of a recent public health nursing research study, a conceptual framework for the future practice of public health nursing was developed.

  16. Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots.

    PubMed

    Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar

    2004-07-01

In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast dynamic game that demands an efficient and robust vision system. The vision system is generally applicable to other robot applications such as mobile transport robots in production and warehouses, attendant robots, fast visual tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes. At the same time, a segmentation algorithm is used to find the corresponding regions belonging to each of the classes. In the second step, all the regions are examined, and those that are part of an observed object are selected by means of simple logic procedures. The novelty lies in optimizing the processing time needed to estimate possible object positions. Better results are achieved by implementing camera calibration and a shading correction algorithm. The former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
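
    The two-step scheme, pixel classification followed by region grouping, can be sketched as below. The reference colors, nearest-color classifier, and 4-connectivity are assumptions for illustration, not the authors' parameters.

```python
import numpy as np
from collections import deque

def classify(img, refs):
    """Label each pixel with the index of the nearest reference color."""
    d = np.linalg.norm(img[:, :, None, :] - refs[None, None, :, :], axis=3)
    return d.argmin(axis=2)

def regions(labels, cls):
    """Collect 4-connected regions of pixels whose class equals cls."""
    mask = labels == cls
    seen = np.zeros_like(mask)
    found = []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        comp, queue = [], deque([(y, x)])
        seen[y, x] = True
        while queue:                        # breadth-first region growing
            cy, cx = queue.popleft()
            comp.append((cy, cx))
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        found.append(comp)
    return found

refs = np.array([[255.0, 0.0, 0.0], [0.0, 0.0, 255.0]])  # red, blue classes
img = np.zeros((4, 4, 3))
img[:, :, 2] = 255.0                                     # blue background
img[1:3, 1:3] = [255.0, 0.0, 0.0]                        # red 2x2 marker
regs = regions(classify(img, refs), 0)                   # red-class regions
```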

  17. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  18. KSC-04PD-2641

    NASA Technical Reports Server (NTRS)

    2004-01-01

KENNEDY SPACE CENTER, FLA. Activities at the One NASA Leader-Led Workshop included a panel to answer questions from the audience. Seated here are Lynn Cline, deputy associate administrator for Space Operations; Adm. Craig Steidle, associate administrator for Exploration Systems; and Woodrow Whitlow Jr., Kennedy deputy director. The workshop included senior leadership in the Agency, who talked about ongoing Transformation activities and Kennedy's role in the Vision for Space Exploration.

  19. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  20. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
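
    The ray-casting step can be sketched over a 2-D occupancy grid: rays fanned out from the robot report how far each candidate direction is clear. The grid, start pose, and ranges below are illustrative, not the robot's fused camera/laser data structures.

```python
import math

def cast_ray(grid, x0, y0, angle, max_range):
    """Step cell-by-cell along a ray; return the clear distance in cells."""
    for r in range(1, max_range + 1):
        x = int(round(x0 + r * math.cos(angle)))
        y = int(round(y0 + r * math.sin(angle)))
        if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
            return r - 1              # ray left the grid: open to the edge
        if grid[y][x]:
            return r - 1              # hit an occupied cell
    return max_range

grid = [[0] * 10 for _ in range(10)]
grid[5][8] = 1                                 # obstacle ahead of (5, 5)
ahead = cast_ray(grid, 5, 5, 0.0, 10)          # blocked: clear for 2 cells
left = cast_ray(grid, 5, 5, math.pi / 2, 10)   # open until the grid edge
```

    A path selector would then prefer the direction whose ray reports the longest clear distance, optionally smoothed across neighboring rays.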

  1. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so hardware implementation poses intriguing questions involving vision science.

  2. Federal regulation of vision enhancement devices for normal and abnormal vision

    NASA Astrophysics Data System (ADS)

    Drum, Bruce

    2006-09-01

    The Food and Drug Administration (FDA) evaluates the safety and effectiveness of medical devices and biological products as well as food and drugs. The FDA defines a device as a product that is intended, by physical means, to diagnose, treat, or prevent disease, or to affect the structure or function of the body. All vision enhancement devices fulfill this definition because they are intended to affect a function (vision) of the body. In practice, however, FDA historically has drawn a distinction between devices that are intended to enhance low vision as opposed to normal vision. Most low vision aids are therapeutic devices intended to compensate for visual impairment, and are actively regulated according to their level of risk to the patient. The risk level is usually low (e.g. Class I, exempt from 510(k) submission requirements for magnifiers that do not touch the eye), but can be as high as Class III (requiring a clinical trial and Premarket Approval (PMA) application) for certain implanted and prosthetic devices (e.g. intraocular telescopes and prosthetic retinal implants). In contrast, the FDA usually does not actively enforce its regulations for devices that are intended to enhance normal vision, are low risk, and do not have a medical intended use. However, if an implanted or prosthetic device were developed for enhancing normal vision, the FDA would likely decide to regulate it actively, because its intended use would entail a substantial medical risk to the user. Companies developing such devices should contact the FDA at an early stage to clarify their regulatory status.

  3. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    NASA Astrophysics Data System (ADS)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. There are a number of factors that make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; in fact, there can be significant differences in visual appearance among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Secondly, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes. The products range from hardwood flooring to fancy hardwood furniture, from simple millwork to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper describes the vision system that has been developed, assesses its current capabilities, and discusses directions for future research. It is argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  4. Audible vision for the blind and visually impaired in indoor open spaces.

    PubMed

    Yu, Xunyi; Ganz, Aura

    2012-01-01

In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in a real-life, ever-changing environment crowded with people.

  5. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
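
    One such relation can be computed directly: the angular size of a display pixel versus the roughly 1-arcminute detail that 20/20 acuity resolves. The field-of-view and resolution below are example values, and the Snellen conversion is the usual rule-of-thumb approximation.

```python
def arcmin_per_pixel(fov_deg, pixels):
    """Angular subtense of one pixel across a channel's field of view."""
    return fov_deg * 60.0 / pixels

def snellen_denominator(arcmin):
    """Approximate Snellen 20/x equivalent of a given angular pixel size."""
    return 20.0 * arcmin

a = arcmin_per_pixel(60.0, 1800)   # 60 deg channel at 1800 px wide
```

    At 2 arcmin per pixel this channel supports roughly 20/40 equivalent acuity, i.e. the display, not the pilot's eye, is the limiting factor.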

  6. Proposal of Screening Method of Sleep Disordered Breathing Using Fiber Grating Vision Sensor

    NASA Astrophysics Data System (ADS)

    Aoki, Hirooki; Nakamura, Hidetoshi; Nakajima, Masato

Every conventional respiration monitoring technique requires at least one sensor to be attached to the subject's body during measurement, imposing a sense of restraint that results in aversion to measurements lasting over consecutive days. To solve this problem, we developed a respiration monitoring system for sleepers that uses a fiber-grating vision sensor, a type of active image sensor, to achieve non-contact respiration monitoring. In this paper, we verify the effectiveness of the system and propose a screening method for sleep-disordered breathing. We show that our system can measure respiration equivalently to a thermistor and an accelerograph. Moreover, the respiratory condition of sleepers can be grasped at a glance with our screening method, which appears useful for supporting the screening of sleep-disordered breathing.
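
    A non-contact respiration estimate of this kind can be sketched as a dominant-frequency analysis of the chest-motion waveform; the sampling rate, window length, and breathing band below are assumed values, not the system's parameters.

```python
import numpy as np

def breaths_per_minute(signal, fs, lo=0.1, hi=0.7):
    """Dominant frequency in the breathing band, in breaths per minute."""
    sig = np.asarray(signal) - np.mean(signal)     # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][spectrum[band].argmax()]

fs = 10.0                              # assumed sensor frame rate, Hz
t = np.arange(0, 60, 1 / fs)           # one minute of chest-motion signal
sig = np.sin(2 * np.pi * 0.25 * t)     # simulated 0.25 Hz breathing
bpm = breaths_per_minute(sig, fs)
```

    Irregularities in this per-window rate over a night's recording are the kind of summary such a screening view could present at a glance.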

  7. Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis

    PubMed Central

    Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan

    2015-01-01

    Remote monitoring services for elderly persons are important as the aged populations of most developed countries continue to grow. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we make the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for elderly persons; and (2) we design a novel motion history and energy image based algorithm for moving object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real time. Experimental results show that our technique improves data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate reaches 98.6% on average. PMID:25978761
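    The motion history/energy image approach mentioned in the abstract can be sketched in miniature. As a hedged illustration (not the authors' implementation), a motion history image (MHI) keeps, per pixel, how recently motion was seen: pixels flagged as moving are stamped with a duration tau, and all others decay toward zero.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30):
    """One MHI step: pixels flagged as moving are stamped with tau;
    all other pixels decay by one toward zero."""
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0))

# Toy frame: only the top-left 2x2 block moves, then motion stops.
mhi = np.zeros((4, 4), dtype=np.int32)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
mhi = update_mhi(mhi, mask)                  # moving pixels stamped with tau
mhi = update_mhi(mhi, np.zeros_like(mask))   # one decay step follows
```

    Thresholding the MHI at different ages yields a motion energy image; the tracking pipeline in the paper builds on omni-directional imagery and is considerably more involved.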

  8. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.

  9. Three-dimensional vision enhances task performance independently of the surgical method.

    PubMed

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25 % to 30 % longer to complete and more complex tasks took 75 % longer with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  10. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

    In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrated with a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment integrating two machine vision systems was therefore developed. In this equipment, a horizontal vision system measures the position of feature structures in the parts' side view, which the vertical system cannot see. The position measured by the horizontal camera is converted to the vertical vision system using calibration information. With careful calibration, the parts' alignment and positioning in the assembly process can be guaranteed. The developed assembly equipment is easy to implement, modular, and cost-effective. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.

  11. The 3D Recognition, Generation, Fusion, Update and Refinement (RG4) Concept

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Cheeseman, Peter; Smelyanskyi, Vadim N.; Kuehnel, Frank; Morris, Robin D.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes an active (real time) recognition strategy whereby information is inferred iteratively across several viewpoints in descent imagery. We show how we use inverse theory within the context of parametric model generation, namely height and spectral reflection functions, to generate model assertions. Using this strategy in an active context implies that, from every viewpoint, the proposed system must refine its hypotheses, taking into account the image and the effect of uncertainties as well. The proposed system employs probabilistic solutions to the problem of iteratively merging information (images) from several viewpoints. This involves feeding the posterior distribution from all previous images as a prior for the next view. Approaches are developed to accelerate the inversion search using novel statistical implementations and to reduce model complexity using foveated vision. Foveated vision refers to imagery where the resolution varies across the image. In this paper, we allow the model to be foveated, where the highest-resolution region is called the foveation region. Typically, the images will have dynamic control of the location of the foveation region. For descent imagery in the Entry, Descent, and Landing (EDL) process, it is possible to have more than one foveation region. This research initiative is directed towards descent imagery in connection with NASA's EDL applications. Three-Dimensional Model Recognition, Generation, Fusion, Update, and Refinement (RGFUR or RG4) for height and the spectral reflection characteristics are in focus for various reasons, one of which is the prospect that their interpretation will provide for real-time active vision for automated EDL.

  12. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

    Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters, will be needed. In space, the absence of diffused lighting due to the lack of an atmosphere gives rise to: (a) a high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and the presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating these adverse effects and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  13. Determining consequences of retinal membrane guanylyl cyclase (RetGC1) deficiency in human Leber congenital amaurosis en route to therapy: residual cone-photoreceptor vision correlates with biochemical properties of the mutants

    PubMed Central

    Jacobson, Samuel G.; Cideciyan, Artur V.; Peshenko, Igor V.; Sumaroka, Alexander; Olshevskaya, Elena V.; Cao, Lihui; Schwartz, Sharon B.; Roman, Alejandro J.; Olivares, Melani B.; Sadigh, Sam; Yau, King-Wai; Heon, Elise; Stone, Edwin M.; Dizhoor, Alexander M.

    2013-01-01

    The GUCY2D gene encodes retinal membrane guanylyl cyclase (RetGC1), a key component of the phototransduction machinery in photoreceptors. Mutations in GUCY2D cause Leber congenital amaurosis type 1 (LCA1), an autosomal recessive human retinal blinding disease. The effects of RetGC1 deficiency on human rod and cone photoreceptor structure and function are currently unknown. To move LCA1 closer to clinical trials, we characterized a cohort of patients (ages 6 months to 37 years) with GUCY2D mutations. In vivo analyses of retinal architecture indicated intact rod photoreceptors in all patients but abnormalities in foveal cones. By functional phenotype, there were patients with and those without detectable cone vision. Rod vision could be retained and did not correlate with the extent of cone vision or age. In patients without cone vision, rod vision functioned unsaturated under bright ambient illumination. In vitro analyses of the mutant alleles showed that in addition to the major truncation of the essential catalytic domain in RetGC1, some missense mutations in LCA1 patients result in a severe loss of function by inactivating its catalytic activity and/or ability to interact with the activator proteins, GCAPs. The differences in rod sensitivities among patients were not explained by the biochemical properties of the mutants. However, the RetGC1 mutant alleles with remaining biochemical activity in vitro were associated with retained cone vision in vivo. We postulate a relationship between the level of RetGC1 activity and the degree of cone vision abnormality, and argue for cone function being the efficacy outcome in clinical trials of gene augmentation therapy in LCA1. PMID:23035049

  14. Always-on low-power optical system for skin-based touchless machine control.

    PubMed

    Lecca, Michela; Gottardi, Massimo; Farella, Elisabetta; Milosevic, Bojan

    2016-06-01

    Embedded vision systems are smart, energy-efficient devices that capture and process a visual signal in order to extract high-level information about the surrounding world. Thanks to these capabilities, embedded vision systems attract growing interest from research and industry. In this work, we present a novel low-power optical embedded system tailored to detect human skin under various illumination conditions. We employ the presented sensor as a smart switch to activate one or more appliances connected to it. The system is composed of an always-on low-power RGB color sensor, a proximity sensor, and an energy-efficient microcontroller (MCU). The architecture of the color sensor allows hardware preprocessing of the RGB signal, which is converted into the rg space directly on chip, reducing power consumption. The rg signal is delivered to the MCU, where it is classified as skin or non-skin. Each time the signal is classified as skin, the proximity sensor is activated to check the distance of the detected object. If the object appears to be in the desired proximity range, the system detects the interaction and switches the connected appliances on or off. The experimental validation of the proposed system on a prototype shows that processing both distance and color remarkably improves on the performance of the two separate components. This makes the system a promising tool for energy-efficient, touchless control of machines.
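    The on-chip RGB-to-rg conversion and skin/non-skin classification described above can be sketched as follows. The rg thresholds here are illustrative assumptions for the sketch, not the classifier used in the paper.

```python
def rgb_to_rg(r, g, b):
    """Normalize RGB to the rg chromaticity space, which factors out
    overall intensity and so tolerates illumination changes."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0
    return r / s, g / s

def is_skin(r, g, b, r_range=(0.36, 0.55), g_range=(0.28, 0.36)):
    """Box classifier in rg space; the ranges are illustrative only."""
    rn, gn = rgb_to_rg(r, g, b)
    return r_range[0] <= rn <= r_range[1] and g_range[0] <= gn <= g_range[1]
```

    In the actual system the MCU would evaluate such a test on each rg sample and, on a skin hit, query the proximity sensor before toggling the appliance.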

  15. Computer vision for foreign body detection and removal in the food industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...

  16. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  17. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed from a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are constructed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters from laser line imaging, and the approximation networks compute the three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves on the accuracy of traditional microscope calibration, which is accomplished via references external to the microscope system. The capability of the self-calibration based on soft computing algorithms is assessed via the calibration accuracy and the micro-scale measurement error, and the contribution is corroborated by an evaluation against the accuracy of traditional microscope calibration.
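    The Bezier approximation networks mentioned above evaluate calibration parameters as Bezier functions of the laser line position. A minimal sketch of the underlying Bezier evaluation (De Casteljau's algorithm, with made-up scalar control values) is:

```python
def bezier(control_points, t):
    """Evaluate a Bezier curve at t in [0, 1] via De Casteljau's
    algorithm: repeated linear interpolation of the control points."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [(1.0 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]
```

    In a calibration network of this kind, each vision parameter would have its own set of control values, fitted (here, by the genetic algorithm) from observed laser line positions.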

  18. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD is feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  19. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.

  20. Understanding the pusher behavior of some stroke patients with spatial deficits: a pilot study.

    PubMed

    Pérennou, Dominic Alain; Amblard, Bernard; Laassel, El Mostafa; Benaim, Charles; Hérisson, Christian; Pélissier, Jacques

    2002-04-01

    To investigate whether pusher behavior (ie, a tendency among stroke patients with spatial deficits to actively push away from the nonparalyzed side and to resist any attempt to hold a more upright posture) affects only the trunk, for which gravitational feedback is given by somesthetic information, or the head as well, whose gravitational information is mainly given by the vestibular system (without vision). Description and measurement of clinical features. Rehabilitation center research laboratory. Eight healthy subjects age matched to 14 patients with left hemiplegia resulting from right-hemisphere stroke (3 pushers showing a severe spatial neglect, 11 without pusher behavior). All participants were asked to actively maintain an erect posture while sitting for 8 seconds on a rocking, laterally unstable platform. The task was performed with (in light) and without (in darkness) vision. The number of trials needed to succeed in the task was monitored. In successful trials, head, shoulders, thoracolumbar spine, and pelvis orientation in roll were measured by means of an automated, optical television image processor. Compared with other patients and healthy subjects, the 3 pushers missed many more trials and displayed a contralesional tilt of the pelvis but kept a correct head orientation. This tilt was especially pronounced without vision. Spatial neglect was a key factor, explaining 56% of patients' misorientation behavior with vision and 61% without vision. This pilot kinematic analysis shows that pusher behavior does not result from disrupted processing of vestibular information (eg, caused by a lesion involving the vestibular cortex); rather, it results from a high-order disruption in the processing of somesthetic information originating in the left hemibody, which could be graviceptive neglect (extinction). This disruption leads pushers to actively adjust their body posture to a subjective vertical biased to the side opposite the cerebral lesion. 
Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation

  1. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    PubMed

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found HAL unaffected when the DC error was less than 5%; a DC error of less than 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
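    The duty cycle reported above is, by definition, exertion time over total cycle time. As a minimal sketch (not the paper's algorithms), DC can be computed from the per-frame binary exertion signal that a frame-by-frame analysis produces:

```python
def duty_cycle(exertion, fps):
    """Duty cycle (%) = exertion time / total time, computed from a
    per-frame binary exertion signal sampled at fps frames per second."""
    exertion_time = sum(exertion) / fps   # seconds spent exerting
    total_time = len(exertion) / fps      # total cycle duration in seconds
    return 100.0 * exertion_time / total_time
```

    HAL is then read off from DC and exertion frequency using published psychophysical scales, which are not reproduced here.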

  2. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
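    As a hedged sketch of the kind of model-based correction the abstract describes (not the authors' exact formulation): if the observed intensity is modeled as I = J * t + B, with t the transmission through the scattering medium and B the non-uniform backscatter from the active source, the object radiance J is recovered by inverting the model per pixel:

```python
import numpy as np

def descatter(image, backscatter, transmission, t_min=0.1):
    """Recover object radiance J = (I - B) / t from the model
    I = J * t + B; transmission is clamped at t_min to avoid
    amplifying noise where t approaches zero."""
    t = np.maximum(transmission, t_min)
    return np.clip((image - backscatter) / t, 0.0, 1.0)
```

    The descattered left and right images would then be fed to an ordinary stereo matcher to produce the depth map.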

  3. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, from which the corresponding epipolar equation of the two cameras can be defined. Using the trigonometric parallax method, we can measure the spatial position of a point after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration is a simple, low-cost process that simplifies regular maintenance work. It can acquire 3D coordinates with only planar checkerboard calibration, without the need to design a specific standard target or use an electronic theodolite. During the experiments it was found that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, combining active line laser scanning and binocular stereo vision, has the advantages of both and more flexible applicability. Theoretical analysis and experiments show the method is reasonable.
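    For a rectified pair, the trigonometric parallax step above reduces to depth from disparity. A minimal sketch, with illustrative focal length and baseline values in the test case:

```python
def triangulate(xl, xr, y, f, baseline):
    """Rectified stereo triangulation (pinhole model, camera coordinates):
    disparity d = xl - xr (pixels), depth Z = f * baseline / d,
    then back-projection X = xl * Z / f, Y = y * Z / f."""
    d = xl - xr
    if d <= 0:
        raise ValueError("disparity must be positive")
    z = f * baseline / d
    return xl * z / f, y * z / f, z
```

    In the scanner, each laser line point matched across the two rectified images would be triangulated this way; the image coordinates are assumed to be taken relative to the principal point.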

  4. A Machine Vision Quality Control System for Industrial Acrylic Fibre Production

    NASA Astrophysics Data System (ADS)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João

    2002-12-01

    This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under a contract with a leading industrial manufacturer of acrylic fibres. It monitors several parameters of the acrylic production process. This paper presents, after a brief overview of the system, a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some of the results of online operation are also presented.

  5. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  6. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  7. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

    Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.

  8. Technology for Work, Home, and Leisure. Tech Use Guide: Using Computer Technology.

    ERIC Educational Resources Information Center

    Williams, John M.

    This guide provides a brief introduction to several types of technological devices useful to individuals with disabilities and illustrates how some individuals are applying technology in the workplace and at home. Devices described include communication aids, low-vision products, voice-activated systems, environmental controls, and aids for…

  9. Texas K-16 Reform: The El Paso Story.

    ERIC Educational Resources Information Center

    Bristol, Jack

    1999-01-01

    The University of Texas at El Paso has provided leadership and support for several collaborative K-16 reform activities. A closed-loop, K-12, preservice teacher preparation system supported by generous extramural funding has provided the university, community college, and local schools with opportunities for conversation, shared vision, and…

  10. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  11. A Multiple Sensor Machine Vision System Technology for the Hardwood

    Treesearch

    Richard W. Conners; D.Earl Kline; Philip A. Araman

    1995-01-01

    For the last few years the authors have been extolling the virtues of a multiple sensor approach to hardwood defect detection. Since 1989 the authors have actively been trying to develop such a system. This paper details some of the successes and failures that have been experienced to date. It also discusses what remains to be done and gives time lines for the...

  12. The Vision of Readiness of Teacher Training Colleges for Accepting New Educational Technologies and Models on the Way to Europe

    ERIC Educational Resources Information Center

    Tatkovic, Nevenka

    2005-01-01

    On the way to enter the European educational space, the Croatian higher educational system attempts to come to terms with the conclusions of the Bologna Declaration and undertake the reform of the higher education of the Republic of Croatia and introduce the ECTS points- system. Intensive activities in connection with the making of the new…

  13. Vision 21: The NASA strategic plan

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The NASA Strategic Plan, Vision 21, is a living roadmap to the future to guide the men and women of the NASA team as they ensure U.S. leadership in space exploration and aeronautics research. This multiyear plan consists of a set of programs and activities that will retain our leadership in space science and the exploration of the solar system; help rebuild our nation's technology base and strengthen our leadership in aviation and other key industries; encourage commercial applications of space technology; use the unique perspective of space to better understand our home planet; provide the U.S. and its partners with a permanent space based research facility; expand on the legacy of Apollo and initiate precursor activities to establish a lunar base; and allow us to journey into tomorrow, to another planet (Mars), and beyond.

  14. COMPARISON OF RECENTLY USED PHACOEMULSIFICATION SYSTEMS USING A HEALTH TECHNOLOGY ASSESSMENT METHOD.

    PubMed

    Huang, Jiannan; Wang, Qi; Zhao, Caimin; Ying, Xiaohua; Zou, Haidong

    2017-01-01

    To compare the recently used phacoemulsification systems using a health technology assessment (HTA) model. A self-administered questionnaire, which included questions to gauge opinions on the recently used phacoemulsification systems, was distributed to the chief cataract surgeons in the departments of ophthalmology of eighteen tertiary hospitals in Shanghai, China. A series of senile cataract patients undergoing phacoemulsification surgery were enrolled in the study. The surgical results and the average costs related to their surgeries were all recorded and compared for the recently used phacoemulsification systems. The four phacoemulsification systems currently used in Shanghai are the Infiniti Vision, Centurion Vision, WhiteStar Signature, and Stellaris Vision Enhancement systems. All of the doctors confirmed that the systems they used would help cataract patients recover vision. A total of 150 cataract patients who underwent phacoemulsification surgery were enrolled in the present study. A significant difference was found among the four groups in cumulative dissipated energy, with the lowest value found in the Centurion group. No serious complications were observed and a positive trend in visual acuity was found in all four groups after cataract surgery. The highest total cost of surgery was associated with procedures conducted using the Centurion Vision system, and significant differences between systems were mainly because of the cost of the consumables used in the different surgeries. This HTA comparison of four recently used phacoemulsification systems found that each system offers a satisfactory vision recovery outcome but differs in surgical efficacy and cost.

  15. Color vision deficiencies and the child's willingness for visual activity: preliminary research

    NASA Astrophysics Data System (ADS)

    Geniusz, Malwina; Szmigiel, Marta; Geniusz, Maciej

    2017-09-01

    A few weeks after birth, a newborn can recognize high contrasts in colors like black and white. Full color vision is reached at around six months of age. Matching colors is the next milestone; most children can do it by the age of two. Good color vision is one of the factors that indicate proper development of a child. The presented research shows the correlation between color vision and visual activity. The color vision of a group of children aged 3-8 was examined with the saturated Farnsworth D-15 test. The Farnsworth test was performed twice, in a standard version and in a magnetic version, and the time to complete each version was measured. Furthermore, parents of the subjects answered questions assessing their children's visual activity on a 1-10 scale; parents stated whether the child willingly looked at books, colored in coloring books, put together puzzles, liked to play with blocks, etc. The Farnsworth D-15 test, designed for color vision testing, can be used to test children from the age of 3 years. These are preliminary studies which may provide a useful basis for further, more accurate examination of a larger group of subjects.

  16. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

  17. A neural network based artificial vision system for licence plate recognition.

    PubMed

    Draghici, S

    1997-02-01

    This paper presents a neural network based artificial vision system able to analyze the image of a car given by a camera, locate the registration plate and recognize the registration number of the car. The paper describes in detail various practical problems encountered in implementing this particular application and the solutions used to solve them. The main features of the system presented are: controlled stability-plasticity behavior, controlled reliability threshold, both off-line and on-line learning, self-assessment of the output reliability and high reliability based on high level multiple feedback. The system has been designed using a modular approach. Sub-modules can be upgraded and/or substituted independently, thus making the system potentially suitable for a large variety of vision applications. The OCR engine was designed as an interchangeable plug-in module. This allows the user to choose an OCR engine suited to the particular application and to upgrade it easily in the future. At present, there are several versions of this OCR engine. One of them is based on a fully connected feedforward artificial neural network with sigmoidal activation functions. This network can be trained with various training algorithms such as error backpropagation. An alternative OCR engine is based on the constraint based decomposition (CBD) training architecture. The system has shown the following average performance on real-world data: successful plate location and segmentation about 99%, successful character recognition about 98% and successful recognition of complete registration plates about 80%.
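
    A fully connected feedforward network with sigmoidal activations trained by error backpropagation, as the abstract describes, can be sketched as follows. This is an illustrative toy, not the paper's OCR engine: the `TinyOCRNet` name, layer sizes, weight initialization, and learning rate are all assumptions.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class TinyOCRNet:
        """Toy fully connected feedforward net with sigmoidal activations.

        Hypothetical sketch: the input would be a flattened character glyph,
        the output one score per character class.
        """

        def __init__(self, n_in=64, n_hidden=16, n_out=10, seed=0):
            rng = np.random.default_rng(seed)
            self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
            self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

        def forward(self, x):
            self.h = sigmoid(x @ self.W1)      # hidden activations
            self.y = sigmoid(self.h @ self.W2) # output activations
            return self.y

        def backprop(self, x, target, lr=0.5):
            """One gradient step on squared error; returns the pre-step loss."""
            y = self.forward(x)
            err = y - target                      # error signal (factor 2 folded into lr)
            d2 = err * y * (1.0 - y)              # sigmoid derivative at output
            d1 = (d2 @ self.W2.T) * self.h * (1.0 - self.h)
            self.W2 -= lr * np.outer(self.h, d2)
            self.W1 -= lr * np.outer(x, d1)
            return float(((y - target) ** 2).sum())
    ```

    Each `backprop` call performs one forward pass and one weight update, so repeated calls on the same pattern should drive the squared error down.
    
    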

  18. Health system vision of iran in 2025.

    PubMed

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1) and with regard to health in all policies, accountability and innovation". An explanatory context was also compiled to create a complete image of the vision. Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered the major values and ideals in the society of Iran; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system.

  19. 77 FR 36331 - Nineteenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... Document--Draft DO-XXX, Minimum Aviation Performance Standards (MASPS) for an Enhanced Flight Vision System... Discussion (9:00 a.m.-5:00 p.m.) Provide Comment Resolution of Document--Draft DO-XXX, Minimum Aviation.../Approve FRAC Draft for PMC Consideration--Draft DO- XXX, Minimum Aviation Performance Standards (MASPS...

  20. Robust Spatial Autoregressive Modeling for Hardwood Log Inspection

    Treesearch

    Dongping Zhu; A.A. Beex

    1994-01-01

    We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...

  1. Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer

    2005-01-01

    Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.

  2. Industrial Inspection with Open Eyes: Advance with Machine Vision Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Niel, Kurt

    Machine vision systems have evolved significantly with technology advances to tackle the challenges of modern manufacturing industry. A wide range of industrial inspection applications for quality control benefit from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection, and combining multiple technologies makes it possible for inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.

  3. Sensation during Active Behaviors

    PubMed Central

    Cardin, Jessica A.; Chiappe, M. Eugenia; Halassa, Michael M.; McGinley, Matthew J.; Yamashita, Takayuki

    2017-01-01

    A substantial portion of our sensory experience happens during active behaviors such as walking around or paying attention. How do sensory systems work during such behaviors? Neural processing in sensory systems can be shaped by behavior in multiple ways ranging from a modulation of responsiveness or sharpening of tuning to a dynamic change of response properties or functional connectivity. Here, we review recent findings on the modulation of sensory processing during active behaviors in different systems: insect vision, rodent thalamus, and rodent sensory cortices. We discuss the circuit-level mechanisms that might lead to these modulations and their potential role in sensory function. Finally, we highlight the open questions and future perspectives of this exciting new field. PMID:29118211

  4. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  5. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    PubMed

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  6. Impact of low vision rehabilitation on functional vision performance of children with visual impairment.

    PubMed

    Ganesh, Suma; Sethi, Sumita; Srivastav, Sonia; Chaudhary, Amrita; Arora, Priyanka

    2013-09-01

    To evaluate the impact of low vision rehabilitation on functional vision of children with visual impairment. The LV Prasad-Functional Vision Questionnaire, designed specifically to measure functional performance of visually impaired children in developing countries, was used to assess the level of difficulty in performing various tasks before and after visual rehabilitation in children with documented visual impairment. The Chi-square test was used to assess the impact of the rehabilitation intervention on functional vision performance; P < 0.05 was considered significant. LogMAR visual acuity prior to the introduction of low vision devices (LVDs) was 0.90 ± 0.05 for distance and 0.61 ± 0.05 for near. After the intervention, acuities improved significantly for distance (0.2 ± 0.27; P < 0.0001) and near (0.42 ± 0.17; P = 0.001). The most commonly reported difficulties related to academic activities such as copying from the blackboard (80%), reading a textbook at arm's length (77.2%), and writing along a straight line (77.2%). The absolute raw disability score improved from 15.05 pre-LVD to 7.58 post-LVD. An improvement in functional vision after visual rehabilitation was found especially in activities related to studying, such as copying from the blackboard (P < 0.0001), reading a textbook at arm's length (P < 0.0001), and writing along a straight line (P = 0.003). In our study group, there was a significant improvement in functional vision after visual rehabilitation, especially in activities related to academic output. It is important for these children to receive early visual rehabilitation to reduce the impairment associated with decreased visual output and to enhance their learning abilities.

  7. Power Subsystem for Extravehicular Activities for Exploration Missions

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle

    2005-01-01

    The NASA Glenn Research Center has the responsibility to develop the next generation space suit power subsystem to support the Vision for Space Exploration. Various technology challenges exist in achieving extended duration missions as envisioned for future lunar and Mars mission scenarios. This paper presents an overview of ongoing development efforts undertaken at the Glenn Research Center in support of power subsystem development for future extravehicular activity systems.

  8. Real-time image processing of TOF range images using a reconfigurable processor system

    NASA Astrophysics Data System (ADS)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    In recent years, Time-of-Flight sensors have had a significant impact on machine vision research. Compared with stereo vision systems and laser range scanners, they combine the advantages of active sensors, which provide accurate distance measurements, and camera-based systems, which record a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
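
    The 4-phase-shift demodulation mentioned above can be sketched in software as follows. This is the standard textbook formula, not the paper's reconfigurable-hardware implementation; the modulation frequency and the sample-phase convention are assumptions, and real sensors differ in sign conventions.

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(a0, a1, a2, a3, f_mod=20e6):
        """Distance from four correlation samples at 0/90/180/270 degrees.

        Assumes a_i = offset + amp * cos(phi + i*pi/2), so that
        phi = atan2(a3 - a1, a0 - a2). The arctangent is the operation the
        paper accelerates in hardware. Round-trip phase relates to distance
        by phi = 4*pi*f_mod*d / c, unambiguous up to c / (2*f_mod).
        """
        phase = math.atan2(a3 - a1, a0 - a2)  # wrapped to (-pi, pi]
        if phase < 0.0:
            phase += 2.0 * math.pi            # unwrap to [0, 2*pi)
        return C * phase / (4.0 * math.pi * f_mod)
    ```

    For example, samples (500, 400, 500, 600) correspond to a quarter-cycle phase shift, i.e. one eighth of the 20 MHz ambiguity range, roughly 1.87 m.
    
    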

  9. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: excellent computer-vision-based localization and strong identity information provided by the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluating systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time successfully preventing the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
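
    The core idea of the fusion, accurate but anonymous vision detections labelled with identities from coarse radio positions, can be sketched with a greedy nearest-neighbour assignment. This is a stand-in for illustration only; the paper's actual fusion and the `assign_identities` name are not from the source.

    ```python
    from math import hypot

    def assign_identities(vision_pts, radio_pts):
        """Attach radio identities to anonymous vision detections.

        vision_pts: list of accurate (x, y) positions from the cameras.
        radio_pts:  dict mapping person id -> coarse (x, y) radio position.
        Returns a dict id -> matched accurate vision position, pairing each
        identity with its nearest unclaimed detection (greedy, by distance).
        """
        pairs = [(hypot(vx - rx, vy - ry), pid, (vx, vy))
                 for (vx, vy) in vision_pts
                 for pid, (rx, ry) in radio_pts.items()]
        pairs.sort(key=lambda t: t[0])  # closest id/detection pairs first
        out, used = {}, set()
        for _, pid, vpt in pairs:
            if pid not in out and vpt not in used:
                out[pid] = vpt
                used.add(vpt)
        return out
    ```

    A globally optimal assignment (e.g. the Hungarian algorithm) would be the more principled choice; greedy matching keeps the sketch short.
    
    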

  10. Design of a surgical robot with dynamic vision field control for Single Port Endoscopic Surgery.

    PubMed

    Kobayashi, Yo; Sekiguchi, Yuta; Tomono, Yu; Watanabe, Hiroki; Toyoda, Kazutaka; Konishi, Kozo; Tomikawa, Morimasa; Ieiri, Satoshi; Tanoue, Kazuo; Hashizume, Makoto; Fujie, Masakatsu G

    2010-01-01

    Recently, a robotic system was developed to assist Single Port Endoscopic Surgery (SPS). However, the existing system required a manual change of vision field, hindering the surgical task and increasing the degrees of freedom (DOFs) of the manipulator. We proposed a surgical robot for SPS with dynamic vision field control, the endoscope view being manipulated by a master controller. The prototype robot consisted of a positioning and sheath manipulator (6 DOF) for vision field control, and dual tool tissue manipulators (gripping: 5DOF, cautery: 3DOF). Feasibility of the robot was demonstrated in vitro. The "cut and vision field control" (using tool manipulators) is suitable for precise cutting tasks in risky areas while a "cut by vision field control" (using a vision field control manipulator) is effective for rapid macro cutting of tissues. A resection task was accomplished using a combination of both methods.

  11. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  12. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users.

    PubMed

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2015-07-01

    Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population to evaluate the impact of new vision-restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional visual ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional visual tasks for observation of performance and a case narrative summary. Results were analysed to determine whether the interview questions and functional visual tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, 26 subjects were assessed with the FLORA. Seven different evaluators administered the assessment. All 14 interview questions were asked. All 35 tasks for functional vision were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options—impossible (33 per cent), difficult (23 per cent), moderate (24 per cent) and easy (19 per cent)—were used by the evaluators. Evaluators also judged the amount of vision they observed the subjects using to complete the various tasks, with 'vision only' occurring 75 per cent on average with the System ON, and 29 per cent with the System OFF. The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as an assessment tool for functional vision and well-being. © 2015 The Authors. 
Clinical and Experimental Optometry © 2015 Optometry Australia.

  13. The role of vision processing in prosthetic vision.

    PubMed

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
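
    The reduction in resolution and dynamic range described above can be made concrete with a toy "phosphene grid" transform: block-average an image down to a handful of electrodes and quantize brightness to a few levels. This is an illustrative sketch, not the authors' processing pipeline; the grid size, level count, and `to_phosphene_grid` name are assumptions.

    ```python
    def to_phosphene_grid(image, out_h=6, out_w=10, levels=4):
        """Reduce a grayscale image to a coarse, few-level phosphene grid.

        image: list of rows of intensities in [0, 255].
        Each output cell is the block average of its region, quantized to
        `levels` brightness steps (0 .. levels-1). Real prosthetic vision
        processing is more task-driven, e.g. emphasizing edges or obstacles.
        """
        h, w = len(image), len(image[0])
        grid = []
        for gy in range(out_h):
            row = []
            for gx in range(out_w):
                y0, y1 = gy * h // out_h, (gy + 1) * h // out_h
                x0, x1 = gx * w // out_w, (gx + 1) * w // out_w
                block = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
                mean = sum(block) / len(block)
                row.append(round(mean * (levels - 1) / 255))
            grid.append(row)
        return grid
    ```

    The point of vision processing is then to choose what survives this compression, so that task-critical information still fits in the few cells and levels available.
    
    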

  14. Hierarchical Modelling Of Mobile, Seeing Robots

    NASA Astrophysics Data System (ADS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-03-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  15. Hierarchical modelling of mobile, seeing robots

    NASA Technical Reports Server (NTRS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-01-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  16. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  17. Marking parts to aid robot vision

    NASA Technical Reports Server (NTRS)

    Bales, J. W.; Barker, L. K.

    1981-01-01

    The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.
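
    A minimal two-mark variant of such a color-coded scheme can be sketched as follows: locate two differently colored marks, take their midpoint as the object's position and the angle of the axis between them as its in-plane orientation. This is a hypothetical illustration; the paper's marking code is richer (it also encodes identity), and the function and marker names are assumptions.

    ```python
    from math import atan2, degrees

    def pose_from_markers(red_xy, green_xy):
        """2D pose from the centroids of two color-coded marks.

        red_xy, green_xy: image-plane (x, y) centroids of the two marks.
        Returns ((cx, cy), heading_deg): the midpoint of the marks and the
        angle of the red-to-green axis, measured from the x-axis.
        """
        (rx, ry), (gx, gy) = red_xy, green_xy
        cx, cy = (rx + gx) / 2.0, (ry + gy) / 2.0
        heading = degrees(atan2(gy - ry, gx - rx))
        return (cx, cy), heading
    ```

    With the shape-analysis problem reduced to finding two colored blobs, this kind of computation is cheap enough to run in real time, which matches the claim above.
    
    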

  18. Feasibility of a clinical trial of vision therapy for treatment of amblyopia.

    PubMed

    Lyon, Don W; Hopkins, Kristine; Chu, Raymond H; Tamkins, Susanna M; Cotter, Susan A; Melia, B Michele; Holmes, Jonathan M; Repka, Michael X; Wheeler, David T; Sala, Nicholas A; Dumas, Janette; Silbert, David I

    2013-05-01

    We conducted a pilot randomized clinical trial of office-based active vision therapy for the treatment of childhood amblyopia to determine the feasibility of conducting a full-scale randomized clinical trial. A training and certification program and manual of procedures were developed to certify therapists to administer a standardized vision therapy program in ophthalmology and optometry offices consisting of weekly visits for 16 weeks. Nineteen children, aged 7 to less than 13 years, with amblyopia (20/40-20/100) were randomly assigned to receive either 2 hours of daily patching with active vision therapy or 2 hours of daily patching with placebo vision therapy. Therapists in diverse practice settings were successfully trained and certified to perform standardized vision therapy in strict adherence with protocol. Subjects completed 85% of required weekly in-office vision therapy visits. Eligibility criteria based on age, visual acuity, and stereoacuity, designed to identify children able to complete a standardized vision therapy program and judged likely to benefit from this treatment, led to a high proportion of screened subjects being judged ineligible, resulting in insufficient recruitment. There were difficulties in retrieving adherence data for the computerized home therapy procedures. This study demonstrated that a 16-week treatment trial of vision therapy was feasible with respect to maintaining protocol adherence; however, recruitment under the proposed eligibility criteria, necessitated by the standardized approach to vision therapy, was not successful. A randomized clinical trial of in-office vision therapy for the treatment of amblyopia would require broadening of the eligibility criteria and improved methods to gather objective data regarding the home therapy. A more flexible approach that customizes vision therapy based on subject age, visual acuity, and stereopsis might be required to allow enrollment of a broader group of subjects.

  19. Feasibility of a Clinical Trial of Vision Therapy for Treatment of Amblyopia

    PubMed Central

    Lyon, Don W.; Hopkins, Kristine; Chu, Raymond H.; Tamkins, Susanna M.; Cotter, Susan A.; Melia, B. Michele; Holmes, Jonathan M.; Repka, Michael X.; Wheeler, David T.; Sala, Nicholas A.; Dumas, Janette; Silbert, David I.

    2013-01-01

    Purpose We conducted a pilot randomized clinical trial of office-based active vision therapy for the treatment of childhood amblyopia to determine the feasibility of conducting a full-scale randomized clinical trial. Methods A training and certification program and manual of procedures were developed to certify therapists to administer a standardized vision therapy program in ophthalmology and optometry offices consisting of weekly visits for 16 weeks. Nineteen children, 7 to less than 13 years of age, with amblyopia (20/40–20/100) were randomly assigned to receive either 2 hours of daily patching with active vision therapy or 2 hours of daily patching with placebo vision therapy. Results Therapists in diverse practice settings were successfully trained and certified to perform standardized vision therapy in strict adherence with protocol. Subjects completed 85% of required weekly in-office vision therapy visits. Eligibility criteria based on age, visual acuity, and stereoacuity, designed to identify children able to complete a standardized vision therapy program and judged likely to benefit from this treatment, led to a high proportion of screened subjects being judged ineligible, resulting in insufficient recruitment. There were difficulties in retrieving adherence data for the computerized home therapy procedures. Conclusions This study demonstrated that a 16-week treatment trial of vision therapy was feasible with respect to maintaining protocol adherence; however, recruitment under the proposed eligibility criteria, necessitated by the standardized approach to vision therapy, was not successful. A randomized clinical trial of in-office vision therapy for the treatment of amblyopia would require broadening of the eligibility criteria and improved methods to gather objective data regarding the home therapy. 
A more flexible approach that customizes vision therapy based on subject age, visual acuity, and stereopsis, might be required to allow enrollment of a broader group of subjects. PMID:23563444

  20. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing a higher-level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance, and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray-scale images and can be viewed as a model-based system. It includes general-purpose image analysis modules as well as special-purpose, task-dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  1. Virtual Reality System Offers a Wide Perspective

    NASA Technical Reports Server (NTRS)

    2008-01-01

Robot Systems Technology Branch engineers at Johnson Space Center created the remotely controlled Robonaut for use as an additional "set of hands" in extravehicular activities (EVAs) and to allow exploration of environments that would be too dangerous or difficult for humans. One of the problems Robonaut developers encountered was that the robot's interface offered an extremely limited field of vision. Johnson robotics engineer Darby Magruder explained that the 40-degree field-of-view (FOV) in initial robotic prototypes provided very narrow tunnel vision, which posed difficulties for Robonaut operators trying to see the robot's surroundings. Because of the narrow FOV, NASA decided to reach out to the private sector for assistance. In addition to a wider FOV, NASA also desired higher resolution in a head-mounted display (HMD) with the added ability to capture and display video.

  2. Classification of Normal and Pathological Gait in Young Children Based on Foot Pressure Data.

    PubMed

    Guo, Guodong; Guffey, Keegan; Chen, Wenbin; Pergami, Paola

    2017-01-01

Human gait recognition, an active research topic in computer vision, is generally based on data obtained from images or videos. We applied computer vision technology to classify pathology-related changes in gait in young children using a foot-pressure database collected with the GAITRite walkway system. As foot positioning changes with children's development, we also investigated the possibility of age estimation based on these data. Our results demonstrate that the data collected by the GAITRite system can be used for normal/pathological gait classification. Combining age information with normal/pathological gait classification increases the accuracy of the classifier. This novel approach could support the development of an accurate, real-time, and economic measure of gait abnormalities in children, able to provide important feedback to clinicians regarding the effect of rehabilitation interventions, and to support targeted treatment modifications.

  3. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    PubMed Central

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using usability measures (time-to-start (Ts) and time-to-complete (Tc)). A two-tailed paired-samples t-test confirmed that, in the case of disabilities, the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09 (P < 0.001) and Tc = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. These results recommend that developers and researchers adopt this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310

  4. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.
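
The abstract names the pipeline stages but not their details; a minimal frame-differencing sketch of the detection stage might look like the following, with a synthetic frame pair and the camera motion assumed already estimated (the paper's first algorithm would supply it):

```python
import numpy as np

def detect_moving_objects(prev, curr, shift=(0, 0), thresh=30):
    """Frame-differencing detection after compensating a known global shift.

    prev, curr: 2-D uint8 grayscale frames.
    shift: (dy, dx) camera motion of curr relative to prev; assumed
           given here, whereas a real system would estimate it.
    Returns a boolean motion mask and the centroid of moving pixels.
    """
    dy, dx = shift
    comp = np.roll(prev, (dy, dx), axis=(0, 1))  # undo camera motion
    diff = np.abs(curr.astype(np.int16) - comp.astype(np.int16))
    mask = diff > thresh
    if mask.any():
        ys, xs = np.nonzero(mask)
        centroid = (float(ys.mean()), float(xs.mean()))
    else:
        centroid = None
    return mask, centroid

# Synthetic frame pair: a 5x5 bright object moves 3 pixels to the right.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[20:25, 10:15] = 200
curr[20:25, 13:18] = 200
mask, centroid = detect_moving_objects(prev, curr)
print(mask.sum(), centroid)  # changed pixels, centred between the positions
```

A tracker would then feed such centroids into a motion model (e.g. a Kalman filter) to predict the object's next position.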

  5. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  6. Space construction activities

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Center for Space Construction at the University of Colorado at Boulder was established in 1988 as a University Space Engineering Research Center. The mission of the Center is to conduct interdisciplinary engineering research which is critical to the construction of future space structures and systems and to educate students who will have the vision and technical skills to successfully lead future space construction activities. The research activities are currently organized around two central projects: Orbital Construction and Lunar Construction. Summaries of the research projects are included.

  7. The Efficacy of Optometric Vision Therapy.

    ERIC Educational Resources Information Center

    Journal of the American Optometric Association, 1988

    1988-01-01

    This review aims to document the efficacy and validity of vision therapy for modifying and improving vision functioning. The paper describes the essential components of the visual system and disorders which can be physiologically and clinically identified. Vision therapy is defined as a clinical approach for correcting and ameliorating the effects…

  8. Functional vision and cognition in infants with congenital disorders of the peripheral visual system.

    PubMed

    Dale, Naomi; Sakkalou, Elena; O'Reilly, Michelle; Springall, Clare; De Haan, Michelle; Salt, Alison

    2017-07-01

To investigate how vision relates to early development by studying vision and cognition in a national cohort of 1-year-old infants with congenital disorders of the peripheral visual system and visual impairment. This was a cross-sectional observational investigation of a nationally recruited cohort of infants with 'simple' and 'complex' congenital disorders of the peripheral visual system. Entry age was 8 to 16 months. Vision level (Near Detection Scale) and non-verbal cognition (sensorimotor understanding, Reynell Zinkin Scales) were assessed. Parents completed demographic questionnaires. Of 90 infants (49 males, 41 females; mean age 13mo, standard deviation [SD] 2.5mo; range 7-17mo), 25 (28%) had profound visual impairment (light perception at best) and 65 (72%) had severe visual impairment (basic 'form' vision). The Near Detection Scale correlated significantly with sensorimotor understanding developmental quotients in the 'total', 'simple', and 'complex' groups (all p<0.001). Age and vision accounted for 48% of sensorimotor understanding variance. Infants with profound visual impairment, especially in the 'complex' group with congenital disorders of the peripheral visual system with known brain involvement, showed the greatest cognitive delay. Lack of vision is associated with delayed early object-manipulative abilities and concepts; 'form' vision appeared to support early developmental advance. This paper provides baseline characteristics for cross-sectional and longitudinal follow-up investigations in progress. A methodological strength of the study was the representativeness of the cohort according to national epidemiological and population census data. © 2017 Mac Keith Press.

  9. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  10. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  11. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
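
The cartridge layout described above can be sketched as follows; the hexagonal seven-receptor geometry, kernel size, and spacing here are illustrative assumptions, not the model's actual parameters:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian weighting profile, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def photoreceptor_response(image, center, size=7, sigma=1.5):
    """Gaussian-weighted intensity seen by one photoreceptor."""
    r, c = center
    h = size // 2
    patch = image[r - h:r + h + 1, c - h:c + h + 1]
    return float((patch * gaussian_kernel(size, sigma)).sum())

def cartridge_responses(image, center, spacing=4):
    """One central receptor plus six on a hexagon; spacing is variable,
    mirroring the model's adjustable photoreceptor spacing."""
    r, c = center
    angles = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]
    offsets = [(0, 0)] + [(int(round(spacing * np.sin(a))),
                           int(round(spacing * np.cos(a)))) for a in angles]
    return [photoreceptor_response(image, (r + dr, c + dc))
            for dr, dc in offsets]

img = np.zeros((32, 32))
img[16, 16] = 1.0  # a single point feature
resp = cartridge_responses(img, (16, 16))
print(resp[0] > max(resp[1:]))  # central receptor sees the feature most strongly
```

Neighboring cartridges would then compare such response vectors to classify and share the features they see.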

  12. Professional School Counselors as Leaders and Active Participants in School Reform: A Phenomenological Exploratory Study to Examine the Perspectives of System-Level Supervisors of School Counselors

    ERIC Educational Resources Information Center

    Cicero, Gayle M.

    2010-01-01

    Professional school counselors' leadership capacity may well play a pivotal role in educational reform in the twenty-first century. Crucial to the success of this vision, supported by the American School Counseling Association, is the perspective of system-level supervisors of school counselors. This exploratory qualitative study employed in-depth…

  13. A Vision-Based Wayfinding System for Visually Impaired People Using Situation Awareness and Activity-Based Instructions

    PubMed Central

    Kim, Eun Yi

    2017-01-01

A significant challenge faced by visually impaired people is ‘wayfinding’, which is the ability to find one’s way to a destination in an unfamiliar environment. This study develops a novel wayfinding system for smartphones that can automatically recognize the situation and scene objects in real time. Through analyzing streaming images, the proposed system first classifies the current situation of a user in terms of their location. Next, based on the current situation, only the necessary context objects are found and interpreted using computer vision techniques. It estimates the motions of the user with two inertial sensors and records the trajectories of the user toward the destination, which are also used as a guide for the return route after reaching the destination. To efficiently convey the recognized results using an auditory interface, activity-based instructions are generated that guide the user in a series of movements along a route. To assess the effectiveness of the proposed system, experiments were conducted in several indoor environments, in which the situation awareness accuracy was 90% and the object detection false alarm rate was 0.016. In addition, our field test results demonstrate that users can locate their paths with an accuracy of 97%. PMID:28813033

  14. Management of Knowledge Representation Standards Activities

    NASA Technical Reports Server (NTRS)

    Patil, Ramesh S.

    1993-01-01

Ever since the mid-seventies, researchers have recognized that capturing knowledge is the key to building large and powerful AI systems. In the years since, we have also found that representing knowledge is difficult and time consuming. In spite of the tools developed to help with knowledge acquisition, knowledge base construction remains one of the major costs in building an AI system: for almost every system we build, a new knowledge base must be constructed from scratch. As a result, most systems remain small to medium in size. Even if we build several systems within a general area, such as medicine or electronics diagnosis, significant portions of the domain must be represented for every system we create. The cost of this duplication of effort has been high and will become prohibitive as we attempt to build larger and larger systems. To overcome this barrier we must find ways of preserving existing knowledge bases and of sharing, re-using, and building on them. This report describes the efforts undertaken over the last two years to identify the issues underlying the current difficulties in sharing and reuse, and a community-wide initiative to overcome them. First, we discuss four bottlenecks to sharing and reuse, present a vision of a future in which these bottlenecks have been ameliorated, and describe the efforts of the initiative's four working groups to address these bottlenecks. We then address the supporting technology and infrastructure that is critical to enabling the vision of the future. Finally, we consider topics of longer-range interest by reviewing some of the research issues raised by our vision.

  15. [Focus on popular science education of glaucoma and reduce glaucomatous low vision and blindness].

    PubMed

    Sun, X H

    2017-02-11

The prevention of blindness caused by glaucoma is a difficult task. To accomplish it, the participation of the whole society and the popularization of relevant medical knowledge are needed. Popular science education about glaucoma is needed for the general public, and especially for high-risk populations. If people understand glaucoma better and actively join glaucoma screening, the disease can be found and diagnosed earlier, late treatment can be avoided, and glaucomatous visual function impairment can be reduced. Patients who have been diagnosed with glaucoma should be made aware of, and accept, new medical concepts and techniques through systematic popular science education. They should actively participate in the whole procedure of disease management and improve their compliance and confidence. Academic organizations of ophthalmology should participate in and guide patient education, improve individualized comprehensive health care for diagnosis and treatment and the third-order health care system suited to the conditions of our country, and help improve the prognosis of glaucoma and vision-related quality of life for patients with advanced and late glaucoma. (Chin J Ophthalmol, 2017, 53: 81-84).

  16. Health System Vision of Iran in 2025

    PubMed Central

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

Background: Vast changes in disease features and risk factors and the influence of demographic, economic, and social trends on the health system make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. Method: After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. Results: The vision statement in the evolutionary plan of the health system is considered to be: “a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region, with regard to health in all policies, accountability and innovation”. An explanatory text was also compiled to create a complete image of the vision. Conclusion: Social values, leaders’ strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered major values and ideals in the society of Iran; development and excellence in the region are the leaders’ strategic goals; and efficiency and equity, health in all policies, and accountability and innovation are the main orientations of the health system. PMID:23865011

  17. Transformational Spaceport and Range Concept of Operations: A Vision to Transform Ground and Launch Operations

    NASA Technical Reports Server (NTRS)

    2005-01-01

The Transformational Concept of Operations (CONOPS) provides a long-term, sustainable vision for future U.S. space transportation infrastructure and operations. This vision presents an interagency concept, developed cooperatively by the Department of Defense (DoD), the Federal Aviation Administration (FAA), and the National Aeronautics and Space Administration (NASA) for the upgrade, integration, and improved operation of major infrastructure elements of the nation's space access systems. The interagency vision described in the Transformational CONOPS would transform today's space launch infrastructure into a shared system that supports worldwide operations for a variety of users. The system concept is sufficiently flexible and adaptable to support new types of missions for exploration, commercial enterprise, and national security, as well as to endure further into the future when space transportation technology may be sufficiently advanced to enable routine public space travel as part of the global transportation system. The vision for future space transportation operations is based on a system-of-systems architecture that integrates the major elements of the future space transportation system - transportation nodes (spaceports), flight vehicles and payloads, tracking and communications assets, and flight traffic coordination centers - into a transportation network that concurrently accommodates multiple types of mission operators, payloads, and vehicle fleets. This system concept also establishes a common framework for defining a detailed CONOPS for the major elements of the future space transportation system. The resulting set of four CONOPS (see Figure 1 below) describes the common vision for a shared future space transportation system (FSTS) infrastructure from a variety of perspectives.

  18. Enhancement of vision by monocular deprivation in adult mice.

    PubMed

    Prusky, Glen T; Alam, Nazia M; Douglas, Robert M

    2006-11-08

    Plasticity of vision mediated through binocular interactions has been reported in mammals only during a "critical" period in juvenile life, wherein monocular deprivation (MD) causes an enduring loss of visual acuity (amblyopia) selectively through the deprived eye. Here, we report a different form of interocular plasticity of vision in adult mice in which MD leads to an enhancement of the optokinetic response (OKR) selectively through the nondeprived eye. Over 5 d of MD, the spatial frequency sensitivity of the OKR increased gradually, reaching a plateau of approximately 36% above pre-deprivation baseline. Eye opening initiated a gradual decline, but sensitivity was maintained above pre-deprivation baseline for 5-6 d. Enhanced function was restricted to the monocular visual field, notwithstanding the dependence of the plasticity on binocular interactions. Activity in visual cortex ipsilateral to the deprived eye was necessary for the characteristic induction of the enhancement, and activity in visual cortex contralateral to the deprived eye was necessary for its maintenance after MD. The plasticity also displayed distinct learning-like properties: Active testing experience was required to attain maximal enhancement and for enhancement to persist after MD, and the duration of enhanced sensitivity after MD was extended by increasing the length of MD, and by repeating MD. These data show that the adult mouse visual system maintains a form of experience-dependent plasticity in which the visual cortex can modulate the normal function of subcortical visual pathways.

  19. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
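
Recognition by maximizing cross-correlation coefficients, as mentioned above, can be illustrated with a brute-force normalized cross-correlation (NCC) template match; this is a generic sketch, not the surveyed systems' implementation:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_template(image, template):
    """Exhaustively slide the template; return the best score and location."""
    th, tw = template.shape
    best, loc = -1.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(image[i:i + th, j:j + tw], template)
            if s > best:
                best, loc = s, (i, j)
    return best, loc

rng = np.random.default_rng(0)
img = rng.random((20, 20))
tpl = img[5:9, 7:11].copy()     # template cut from the image itself
score, (r, c) = match_template(img, tpl)
print(round(score, 6), (r, c))  # a perfect match at (5, 7)
```

Normalizing by the patch energies makes the score invariant to brightness and contrast shifts, which is why NCC rather than raw correlation is typically maximized in recognition and tracking.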

  20. The genetics of normal and defective color vision

    PubMed Central

    Neitz, Jay; Neitz, Maureen

    2011-01-01

The contributions of genetics research to the science of normal and defective color vision over the previous few decades are reviewed, emphasizing the developments in the 25 years since the last anniversary issue of Vision Research. Understanding of the biology underlying color vision has been vaulted forward through the application of the tools of molecular genetics. For all their complexity, the biological processes responsible for color vision are more accessible than for many other neural systems. This is partly because of the wealth of genetic variations that affect color perception, both within and across species, and because components of the color vision system lend themselves to genetic manipulation. Mutations and rearrangements in the genes encoding the long, middle, and short wavelength sensitive cone pigments are responsible for color vision deficiencies, and mutations have been identified that affect the number of cone types, the absorption spectrum of the pigments, the functionality and viability of the cones, and the topography of the cone mosaic. The addition of an opsin gene, as occurred in the evolution of primate color vision and has been done in experimental animals, can produce expanded color vision capacities, and this has provided insight into the underlying neural circuitry. PMID:21167193

  1. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas; Koppal, Sanjeev

    2015-07-01

A multidisciplinary approach that fuses data from multiple radiological and visual sensors to track the movement of radioactive sources is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detecting and tracking radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors, coupled with position data, in a network capable of detecting and locating a radiation source. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which are prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system.
The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for the computer-vision implementation, depending on interior vs. exterior deployment, desired resolution and other factors. Similarly, the radiation sensors are focused on gamma-ray or neutron detection, due to the long travel length and the ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signal is generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). For a radiation detector, however, the radioactive material is the source itself. The only exception to this is the field of active interrogation, where radiation is beamed into a material to entice new or additional radiation emission beyond what the material would emit spontaneously. The nuclear material being the source itself means that all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector; this can add to the observed count rate. The effect of this scatter is a deviation from the traditional distance dependence of the radiation signal and is a key challenge that requires a combined system calibration solution and algorithms.
Thus, both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for additional specific scenarios will be the subject of ongoing and future work. (authors)
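
The core fusion idea, correlating vision-derived distances with the inverse-square fall-off of the count rate, can be illustrated with a toy least-squares fit; the background level and 1 m source rate here are hypothetical, and the real calibration must additionally model the scatter-induced deviations discussed above:

```python
import numpy as np

def expected_count_rate(distance_m, rate_at_1m, background=5.0):
    """Idealized inverse-square model of the observed count rate (counts/s)."""
    return rate_at_1m / distance_m**2 + background

def fit_source_rate(distances, counts, background=5.0):
    """Least-squares estimate of the 1 m rate from (distance, count) pairs,
    with the distances supplied by the computer-vision tracker."""
    x = 1.0 / np.asarray(distances)**2          # regressor: 1/r^2
    y = np.asarray(counts) - background         # background-subtracted counts
    return float((x @ y) / (x @ x))             # closed-form 1-D least squares

# Simulated track: vision reports the source distance at each time step,
# and the detector reports count rates following the 1/r^2 law exactly.
d = np.array([1.0, 2.0, 4.0])
c = expected_count_rate(d, rate_at_1m=400.0)
print(fit_source_rate(d, c))  # recovers 400.0
```

A poor fit to this model for a tracked object would flag exactly the scatter-driven deviation the two calibration algorithms are designed to absorb.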

  2. Machine vision system for online inspection of freshly slaughtered chickens

    USDA-ARS?s Scientific Manuscript database

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  3. [Are Visual Field Defects Reversible? - Visual Rehabilitation with Brains].

    PubMed

    Sabel, B A

    2017-02-01

Visual field defects are considered irreversible because the retina and optic nerve do not regenerate. Nevertheless, there is some potential for recovery of the visual fields. This can be accomplished by the brain, which analyses and interprets visual information and is able to amplify residual signals through neuroplasticity. Neuroplasticity refers to the ability of the brain to change its own functional architecture by modulating synaptic efficacy. This is actually the neurobiological basis of normal learning. Plasticity is maintained throughout life and can be induced by repetitively stimulating (training) brain circuits. The question now arises as to how plasticity can be utilised to activate residual vision for the treatment of visual field loss. Just as in neurorehabilitation, visual field defects can be modulated by post-lesion plasticity to improve vision in glaucoma, diabetic retinopathy or optic neuropathy. Because almost all patients have some residual vision, the goal is to strengthen residual capacities by enhancing synaptic efficacy. New treatment paradigms have been tested in clinical studies, including vision restoration training and non-invasive alternating current stimulation. While vision training is a behavioural task to selectively stimulate "relative defects" with daily vision exercises for the duration of 6 months, treatment with alternating current stimulation (30 min. daily for 10 days) activates and synchronises the entire retina and brain. Though full restoration of vision is not possible, such treatments improve vision, both subjectively and objectively. This includes visual field enlargements, improved acuity and reaction time, improved orientation and vision-related quality of life. About 70 % of the patients respond to the therapies and there are no serious adverse events. Physiological studies of the effect of alternating current stimulation using EEG and fMRI reveal massive local and global changes in the brain.
These include local activation of the visual cortex and global reorganisation of neuronal brain networks. Because modulation of neuroplasticity can strengthen residual vision, the brain deserves a better reputation in ophthalmology for its role in visual rehabilitation. For patients, there is now more light at the end of the tunnel, because vision loss in some areas of the visual field defect is indeed reversible. Georg Thieme Verlag KG Stuttgart · New York.

  4. Trauma-Informed Part C Early Intervention: A Vision, A Challenge, A New Reality

    ERIC Educational Resources Information Center

    Gilkerson, Linda; Graham, Mimi; Harris, Deborah; Oser, Cindy; Clarke, Jane; Hairston-Fuller, Tody C.; Lertora, Jessica

    2013-01-01

    Federal directives require that any child less than 3 years old with a substantiated case of abuse be referred to the early intervention (EI) system. This article details the need and presents a vision for a trauma-informed EI system. The authors describe two exemplary program models which implement this vision and recommend steps which the field…

  5. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  6. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  7. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. 
Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.

  8. Neuromorphic vision sensors and preprocessors in system applications

    NASA Astrophysics Data System (ADS)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
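    The off-chip interfaces mentioned above typically follow the address-event representation (AER) idea: only pixels with activity transmit their address, so a sparse event stream replaces a dense frame readout. The abstract gives no interface specification, so the sketch below assumes a simple frame-differencing sensor model; `encode_events`, the `AddressEvent` type and the 1 ms frame interval are illustrative assumptions, not from the paper.

    ```python
    from dataclasses import dataclass

    @dataclass
    class AddressEvent:
        timestamp_us: int  # event time in microseconds
        x: int             # pixel column address
        y: int             # pixel row address

    def encode_events(frames, threshold):
        """Emit one address-event per pixel whose intensity changed by more
        than `threshold` between consecutive frames (hypothetical sensor
        model; real AER chips generate events asynchronously on-chip)."""
        events = []
        t = 0
        for prev, curr in zip(frames, frames[1:]):
            t += 1000  # assumed 1 ms frame interval
            for y, (row_p, row_c) in enumerate(zip(prev, curr)):
                for x, (p, c) in enumerate(zip(row_p, row_c)):
                    if abs(c - p) > threshold:
                        events.append(AddressEvent(t, x, y))
        return events
    ```

    For a static scene the event list stays empty, which is exactly the bandwidth saving the multiplexing interfaces exploit.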

  9. Enhanced Vision for All-Weather Operations Under NextGen

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.

    2010-01-01

    Recent research in Synthetic/Enhanced Vision technology is analyzed with respect to existing Category II/III performance and certification guidance. The goal is to start the development of performance-based vision systems technology requirements to support future all-weather operations and the NextGen goal of Equivalent Visual Operations. This work shows that existing criteria to operate in Category III weather and visibility are not directly applicable since, unlike today, the primary reference for maneuvering the airplane is based on what the pilot sees visually through the "vision system." New criteria are consequently needed. Several possible criteria are discussed, but more importantly, the factors associated with landing system performance using automatic and manual landings are delineated.

  10. On-road vehicle detection: a review.

    PubMed

    Sun, Zehang; Bebis, George; Miller, Ronald

    2006-05-01

    Developing on-board automotive driver assistance systems that aim to alert drivers about the driving environment and possible collisions with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed, such as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors, followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the location of vehicles in an image, as well as to verify the hypothesized locations, are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, assess their potential for future deployment, and present directions for future research.

  11. Universal design of a microcontroller and IoT system to detect the heart rate

    NASA Astrophysics Data System (ADS)

    Uwamahoro, Raphael; Mushikiwabeza, Alexie; Minani, Gerard; Mohan Murari, Bhaskar

    2017-11-01

    Heart rate analysis provides vital information about the present condition of the human body and helps medical professionals diagnose various malfunctions. The limited ability of vision-impaired and blind people to access medical devices causes a considerable loss of life. In this paper, we develop a heart rate detection system that is usable by people with normal and impaired vision. The system is based on a non-invasive method of measuring the variation of tissue blood flow through the fingertip by means of a photo transmitter and detector, known as photoplethysmography (PPG). The detected signal is first passed through an active low-pass filter and then amplified by a two-stage high-gain amplifier. The amplified signal is fed into the microcontroller, which calculates the heart rate and announces the heart beat via a sound system and a Liquid Crystal Display (LCD). To distinguish arrhythmia, normal heart rate and abnormal working conditions of the system, recognition is provided through different sounds, LCD readings and Light Emitting Diodes (LEDs).
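    The microcontroller step described in this record, computing a heart rate from the filtered and amplified PPG waveform, amounts to peak detection followed by interval averaging. A minimal sketch of that idea (the function name, threshold and refractory period are assumptions for illustration, not details from the paper):

    ```python
    def heart_rate_bpm(samples, fs, threshold, refractory_s=0.3):
        """Detect PPG peaks as local maxima above `threshold`, enforce a
        refractory period so one pulse is not counted twice, then convert
        the mean beat-to-beat interval into beats per minute."""
        peaks = []
        last = -refractory_s * fs
        for i in range(1, len(samples) - 1):
            if (samples[i] > threshold
                    and samples[i] >= samples[i - 1]
                    and samples[i] >= samples[i + 1]
                    and i - last >= refractory_s * fs):
                peaks.append(i)
                last = i
        if len(peaks) < 2:
            return None  # not enough beats to estimate a rate
        mean_interval_s = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs
        return 60.0 / mean_interval_s
    ```

    A real firmware implementation would run this incrementally on an interrupt-driven sample stream, but the arithmetic is the same.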

  12. Quality of life and near vision impairment due to functional presbyopia among rural Chinese adults.

    PubMed

    Lu, Qing; Congdon, Nathan; He, Xiangdong; Murthy, Gudlavalleti V S; Yang, Amy; He, Wei

    2011-06-13

    To evaluate the impact of near-vision impairment on visual functioning and quality of life in a rural adult population in Shenyang, northern China. A population-based, cross-sectional study was conducted among persons aged 40+ years, during which functional presbyopia (correctable presenting near vision < 20/50 [N8] at 40 cm) was assessed. Near-vision-related quality of life and spectacle usage questionnaires were administered by trained interviewers to determine the degree of self-rated difficulty with near tasks. A total of 1008 respondents (91.5% of 1102 eligible persons) were examined, and 776 (78%) of these completed the questionnaires (mean age, 57.0 ± 10.2 years; 63.3% women). Near-vision spectacle wearers obtained their spectacles primarily from markets (74.5%) and optical shops (21.7%), and only 1.14% from eye clinics. Among 538 (69.3%) persons with functional presbyopia, self-rated overall (distance and near) vision was worse (P < 0.001) and difficulty with activities of daily living greater (P < 0.001) than among nonpresbyopes. Odds of reporting any difficulty with daily tasks remained higher (OR = 2.32; P < 0.001) for presbyopes after adjustment for age, sex, education and distance vision. Compared to persons without presbyopia, presbyopic persons were more likely to report diminished accomplishment due to vision (P = 0.01, adjusted for age, sex, education, and distance vision). Difficulties with activities of daily living and resulting social impediments are common due to presbyopia in this setting. Most spectacle wearers with presbyopia in rural China obtain near correction from sources that do not provide comprehensive vision care.

  13. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  14. Biofeedback for Better Vision

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Biofeedtrac, Inc.'s Accommotrac Vision Trainer, invented by Dr. Joseph Trachtman, is based on vision research performed by Ames Research Center and a special optometer developed for the Ames program by Stanford Research Institute. In the United States, about 150 million people are myopes (nearsighted), who tend to overfocus when they look at distant objects causing blurry distant vision, or hyperopes (farsighted), whose vision blurs when they look at close objects because they tend to underfocus. The Accommotrac system is an optical/electronic system used by a doctor as an aid in teaching a patient how to contract and relax the ciliary body, the focusing muscle. The key is biofeedback, wherein the patient learns to control a bodily process or function he is not normally aware of. Trachtman claims a 90 percent success rate for correcting, improving or stopping focusing problems. The Vision Trainer has also proved effective in treating other eye problems such as eye oscillation, cross eyes, and lazy eye and in professional sports to improve athletes' peripheral vision and reaction time.

  15. Office of Biological and Physical Research: Overview Transitioning to the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Crouch, Roger

    2004-01-01

    Viewgraphs on NASA's transition to its vision for space exploration are presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.

  16. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    NASA Astrophysics Data System (ADS)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining computer vision algorithms with a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of its side-mounted cameras to perform monocular 3D reconstruction, updating a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to familiarize itself with regular faces and actions in order to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.

  17. Homework system development with the intention of supporting Saudi Arabia's vision 2030

    NASA Astrophysics Data System (ADS)

    Elgimari, Atifa; Alshahrani, Shafya; Al-shehri, Amal

    2017-10-01

    This paper suggests a web-based homework system for students aged 7-11 years. With the suggested system, hard copies of homework were replaced by soft copies, and parents were involved in the education process electronically. The system is expected to contribute to Saudi Arabia's Vision 2030, especially in the education sector, which regards primary education as its foundation stone, since the success of the Vision depends to a large extent on reforms in the education system that generate a better basis for employment of young Saudis.

  18. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the precision of positioning for the manipulators, the gripper and the bolts used to fix the drop switch. To solve it, we study the binocular vision system of the robot and the characteristics of dismounting and assembling the drop switch, and propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps. First, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line and generates a sequence of candidate regions containing matching points from the neighbourhood of the epipolar line; the optimal matching image is confirmed by calculating the similarity between the template image in the left view and each region in the sequence using correlation matching. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy is within 2 pixels in image coordinates and within 3 mm in the world coordinate system, so the positioning accuracy of the binocular vision satisfies the requirements for dismounting and assembling the drop switch.
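    The second step of the algorithm, correlation matching of a left-view template against candidate regions near the epipolar line, can be illustrated with normalized cross-correlation (NCC) searched along a rectified scanline. This is a sketch of the general technique only, not the authors' implementation; the 1-D simplification and the function names are assumptions.

    ```python
    import math

    def ncc(a, b):
        """Normalized cross-correlation of two equal-length patches
        (flat lists); returns a score in [-1, 1], 0 for flat patches."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = math.sqrt(sum((x - ma) ** 2 for x in a))
        db = math.sqrt(sum((y - mb) ** 2 for y in b))
        return num / (da * db) if da and db else 0.0

    def match_along_scanline(template, row, width):
        """Slide `template` (length `width`) along a rectified epipolar
        row and return (best offset, best correlation score)."""
        best, best_x = -2.0, 0
        for x in range(len(row) - width + 1):
            s = ncc(template, row[x:x + width])
            if s > best:
                best, best_x = s, x
        return best_x, best
    ```

    In the full 2-D case the same score is computed over rectangular windows in the region sequence around the epipolar line, but the matching logic is unchanged.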

  19. How to assess vision.

    PubMed

    Marsden, Janet

    2016-09-21

    Rationale and key points An objective assessment of the patient's vision is important to assess variation from 'normal' vision in acute and community settings, to establish a baseline before examination and treatment in the emergency department, and to assess any changes during ophthalmic outpatient appointments. » Vision is one of the essential senses that permits people to make sense of the world. » Visual assessment does not only involve measuring central visual acuity, it also involves assessing the consequences of reduced vision. » Assessment of vision in children is crucial to identify issues that might affect vision and visual development, and to optimise lifelong vision. » Untreatable loss of vision is not an inevitable consequence of ageing. » Timely and repeated assessment of vision over life can reduce the incidence of falls, prevent injury and optimise independence. Reflective activity 'How to' articles can help update your practice and ensure it remains evidence based. Apply this article to your practice. Reflect on and write a short account of: 1. How this article might change your practice when assessing people holistically. 2. How you could use this article to educate your colleagues in the assessment of vision.

  20. Design of an NF-kB Activation-Coupled Apoptotic Molecule for Prostate Cancer Therapy

    DTIC Science & Technology

    2008-07-31

    p65-LS) hetero-dimer. We used this immunocomplex for caspase activity assay using a colorimetric caspase activity assay kit (BioVision). The...by a Caspase-3 colorimetric assay kit (BioVision). The purified Caspase-3 (10 ng) was used as a positive control in the assay. As shown in Figure...caspase-3 activity assay with a caspase-3 activity assay kit (BioVision). The activity of caspase-3 is in an arbitrary unit. 16 c), co-expressed

  1. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  2. Vision and Oral Health Needs of Individuals with Intellectual Disability

    ERIC Educational Resources Information Center

    Owens, Pamela L.; Kerker, Bonnie D.; Zigler, Edward; Horwitz, Sarah M.

    2006-01-01

    Over the past 20 years, there has been an increased emphasis on health promotion, including prevention activities related to vision and oral health, for the general population, but not for individuals with intellectual disability (ID). This review explores what is known about the prevalence of vision problems and oral health conditions among…

  3. Teacher Activism: Enacting a Vision for Social Justice

    ERIC Educational Resources Information Center

    Picower, Bree

    2012-01-01

    This qualitative study focused on educators who participated in grassroots social justice groups to explore the role teacher activism can play in the struggle for educational justice. Findings show teacher activists made three overarching commitments: to reconcile their vision for justice with the realities of injustice around them; to work within…

  4. The Role of Stefin A in Breast Metastasis

    DTIC Science & Technology

    2006-07-01

    buffer (BioVision) and protein concentrations determined by Bradford assay. Lysates containing 50 mg protein were added to cathepsin B, L, and S...activity assays utilizing fluorogenic substrates for detection of activity (BioVision) (B-D). ** Indicates P values of <0.01 between 67NR and 4T1.2 primary

  5. The crowding factor method applied to parafoveal vision

    PubMed Central

    Ghahghaei, Saeideh; Walker, Laura

    2016-01-01

    Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision (<3.5 visual degrees) with spatial uncertainty. We find that eccentricity and hemifield have less impact on crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170

  6. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  7. Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering

    NASA Astrophysics Data System (ADS)

    Barnes, Nick; Scott, Adele F.; Lieby, Paulette; Petoe, Matthew A.; McCarthy, Chris; Stacey, Ashley; Ayton, Lauren N.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Lovell, Nigel H.; McDermott, Hugh J.; Walker, Janine G.; BVA Consortium,the

    2016-06-01

    Objective. One strategy to improve the effectiveness of prosthetic vision devices is to process incoming images to ensure that key information can be perceived by the user. This paper presents the first comprehensive results of vision function testing for a suprachoroidal retinal prosthetic device utilizing 20 stimulating electrodes. Further, we investigate whether using image filtering can improve results on a light localization task for implanted participants compared to minimal vision processing. No controlled implanted participant studies have yet investigated whether vision processing methods that are not task-specific can lead to improved results. Approach. Three participants with profound vision loss from retinitis pigmentosa were implanted with a suprachoroidal retinal prosthesis. All three completed multiple trials of a light localization test, and one participant completed multiple trials of acuity tests. The visual representations used were: Lanczos2 (a high-quality Nyquist band-limited downsampling filter); minimal vision processing (MVP); wide-view regional averaging filtering (WV); scrambled; and system off. Main results. Using Lanczos2, all three participants successfully completed a light localization task and obtained a significantly higher percentage of correct responses than using MVP (p ≤ 0.025) or with system off (p < 0.0001). Further, in a preliminary result using Lanczos2, one participant successfully completed grating acuity and Landolt C tasks, and showed significantly better performance (p = 0.004) compared to WV, scrambled and system off on the grating acuity task. Significance. Participants successfully completed vision tasks using a 20-electrode suprachoroidal retinal prosthesis. Vision processing with a Nyquist band-limited image filter has shown an advantage for a light localization task. 
This result suggests that this and targeted, more advanced vision processing schemes may become important components of retinal prostheses to enhance performance. ClinicalTrials.gov Identifier: NCT01603576.
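    The Lanczos2 filter named in this study as the high-quality downsampling condition has a standard closed form: L(x) = sinc(x)·sinc(x/2) for |x| < 2 and 0 elsewhere. The following sketch shows the kernel and a 1-D band-limiting downsampler; the tap normalization and sample positions are illustrative assumptions, not the trial's actual software.

    ```python
    import math

    def lanczos2(x):
        """Lanczos2 windowed-sinc kernel: sinc(x) * sinc(x/2) for |x| < 2,
        where sinc(x) = sin(pi*x) / (pi*x); zero outside the support."""
        if x == 0.0:
            return 1.0
        if abs(x) >= 2.0:
            return 0.0
        px = math.pi * x
        return 2.0 * math.sin(px) * math.sin(px / 2.0) / (px * px)

    def downsample(signal, factor):
        """Downsample a 1-D signal by `factor`, applying Lanczos2 taps
        scaled to the output rate (a low-pass filter before decimation,
        which is what makes the result Nyquist band-limited)."""
        out = []
        for j in range(len(signal) // factor):
            centre = j * factor + (factor - 1) / 2.0
            acc = wsum = 0.0
            for i in range(len(signal)):
                w = lanczos2((i - centre) / factor)
                if w:
                    acc += w * signal[i]
                    wsum += w
            out.append(acc / wsum if wsum else 0.0)
        return out
    ```

    For 2-D phosphene images the same kernel is applied separably in rows and columns before mapping intensities to the electrode grid.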

  8. Research into the Architecture of CAD Based Robot Vision Systems

    DTIC Science & Technology

    1988-02-09

    Vision ... and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  9. Vision, Leadership, and Change: The Case of Ramah Summer Camps

    ERIC Educational Resources Information Center

    Reimer, Joseph

    2010-01-01

    In his retrospective essay, Seymour Fox (1997) identified "vision" as the essential element that shaped the Ramah camp system. I will take a critical look at Fox's main claims: (1) A particular model of vision was essential to the development of Camp Ramah; and (2) That model of vision should guide contemporary Jewish educators in creating Jewish…

  10. NASA's First Year Progress with Fuel Cell Advanced Development in Support of the Exploration Vision

    NASA Technical Reports Server (NTRS)

    Hoberecht, Mark

    2007-01-01

    NASA Glenn Research Center (GRC), in collaboration with Johnson Space Center (JSC), the Jet Propulsion Laboratory (JPL), Kennedy Space Center (KSC), and industry partners, is leading a proton-exchange-membrane fuel cell (PEMFC) advanced development effort to support the vision for Exploration. This effort encompasses the fuel cell portion of the Energy Storage Project under the Exploration Technology Development Program, and is directed at multiple power levels for both primary and regenerative fuel cell systems. The major emphasis is the replacement of active mechanical ancillary components with passive components in order to reduce mass and parasitic power requirements, and to improve system reliability. A dual approach directed at both flow-through and non flow-through PEMFC system technologies is underway. A brief overview of the overall PEMFC project and its constituent tasks will be presented, along with in-depth technical accomplishments for the past year. Future potential technology development paths will also be discussed.

  11. Image segmentation for enhancing symbol recognition in prosthetic vision.

    PubMed

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.
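    The region-selection step described here, segment the image and then keep only the region under the user's fixation point, can be sketched with a crude 4-connected threshold labelling standing in for the paper's (unspecified) segmentation method. All function names, the binarisation threshold and the connectivity choice are assumptions for illustration.

    ```python
    from collections import deque

    def segment_regions(img, thresh):
        """Label 4-connected components of a thresholded image (a very
        crude stand-in for the segmentation stage in the paper)."""
        h, w = len(img), len(img[0])
        labels = [[-1] * w for _ in range(h)]
        next_label = 0
        for sy in range(h):
            for sx in range(w):
                if labels[sy][sx] != -1:
                    continue
                bit = img[sy][sx] > thresh  # region's binary class
                labels[sy][sx] = next_label
                q = deque([(sy, sx)])
                while q:  # flood fill the component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < h and 0 <= xx < w
                                and labels[yy][xx] == -1
                                and (img[yy][xx] > thresh) == bit):
                            labels[yy][xx] = next_label
                            q.append((yy, xx))
                next_label += 1
        return labels

    def fixated_mask(labels, fixation):
        """Binary mask keeping only the region containing the user's
        (row, col) fixation point; this mask would then be phosphenized."""
        fy, fx = fixation
        target = labels[fy][fx]
        return [[1 if v == target else 0 for v in row] for row in labels]
    ```

    Rendering only the fixated region at full phosphene contrast is what raises the apparent clarity of a sign relative to phosphenizing the whole scene.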

  12. The Naturoptic Method for Safe Recovery of Vision: Mentored Tutoring, Earnings, Academic Entity Financial Resources Tool

    NASA Astrophysics Data System (ADS)

    Sambursky, Nicole D.; McLeod, Roger David; Silva, Sandra Helena

    2009-05-01

    This is a novel method for safely and naturally improving vision, with financial advantages for minorities, women, and academic entities. The patented Naturoptic Method is a simple system designed to work quickly, requiring only a minimal number of sessions for improvement. Our mentored and unique activities investigated these claims by implementing the Naturoptic Method on ourselves over a period of time. Research was conducted at off-campus locations with the inventor of the Naturoptic Method. Initial visual acuity and subsequent progress are self-assessed using standard Snellen eye charts. The research is designed to document improvements in vision with successive uses of the Naturoptic Method, as mentored teachers or Awardees of ``The Kaan Balam Matagamon Memorial Award,'' with net earnings shared by the designees, academic entities, the American Indians in Science and Engineering Society (AISES), or charity. The Board requires Awardees, its students, or affiliates to sign non-disclosure agreements.

  13. Progress in high-level exploratory vision

    NASA Astrophysics Data System (ADS)

    Brand, Matthew

    1993-08-01

    We have been exploring the hypothesis that vision is an explanatory process, in which causal and functional reasoning about potential motion plays an intimate role in mediating the activity of low-level visual processes. In particular, we have explored two of the consequences of this view for the construction of purposeful vision systems: Causal and design knowledge can be used to (1) drive focus of attention, and (2) choose between ambiguous image interpretations. An important result of visual understanding is an explanation of the scene's causal structure: How action is originated, constrained, and prevented, and what will happen in the immediate future. In everyday visual experience, most action takes the form of motion, and most causal analysis takes the form of dynamical analysis. This is even true of static scenes, where much of a scene's interest lies in how possible motions are arrested. This paper describes our progress in developing domain theories and visual processes for the understanding of various kinds of structured scenes, including structures built out of children's constructive toys and simple mechanical devices.

  14. Disparity channels in early vision

    PubMed Central

    Roe, AW; Parker, AJ; Born, RT; DeAngelis, GC

    2008-01-01

    The last decade has seen a dramatic increase in our knowledge of the neural basis of stereopsis. New cortical areas have been found to represent binocular disparities, new representations of disparity information (e.g., relative disparity signals) have been uncovered, the first topographic maps of disparity have been measured, and the first causal links between neural activity and depth perception have been established. Equally exciting is the finding that training and experience affect how signals are channeled through different brain areas, a flexibility that may be crucial for learning, plasticity, and recovery of function. The collective efforts of several laboratories have established stereo vision as one of the most productive model systems for elucidating the neural basis of perception. Much remains to be learned about how the disparity signals that are initially encoded in primary visual cortex are routed to and processed by extrastriate areas to mediate the diverse capacities of 3D vision that enhance our daily experience of the world. PMID:17978018

  15. 77 FR 21861 - Special Conditions: Boeing, Model 777F; Enhanced Flight Vision System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... System AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special conditions; request for... with an advanced, enhanced flight vision system (EFVS). The EFVS consists of a head-up display (HUD) system modified to display forward-looking infrared (FLIR) imagery. The applicable airworthiness...

  16. Evaluating the Effects of Dimensionality in Advanced Avionic Display Concepts for Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2007-01-01

    Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.

  17. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a specially combined fish-eye lens module that is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.

  18. Exploration of available feature detection and identification systems and their performance on radiographs

    NASA Astrophysics Data System (ADS)

    Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.

    2016-10-01

    Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Even algorithms for feature detection and identification that do not take explicit advantage of the colors available in an image still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features from the X-ray radiographs.
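    Intensity-only detectors of the kind benchmarked here can be illustrated with a minimal Harris corner response, which uses only grayscale gradients and therefore runs unchanged on radiographs. This is a generic sketch, not one of the toolboxes evaluated in the study.

```python
import numpy as np

def box_sum(a, r):
    """Sum over a (2r+1)x(2r+1) window via an integral image (zero padding)."""
    p = np.pad(a, r)
    ii = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    k = 2 * r + 1
    return ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]

def harris_response(img, r=2, k=0.04):
    """Harris corner response from grayscale intensities alone."""
    iy, ix = np.gradient(img.astype(float))
    sxx = box_sum(ix * ix, r)                # structure tensor, windowed
    syy = box_sum(iy * iy, r)
    sxy = box_sum(ix * iy, r)
    det = sxx * syy - sxy ** 2               # determinant
    tr = sxx + syy                           # trace
    return det - k * tr ** 2                 # corners: large positive response
```

    Edges yield negative response (zero determinant, large trace), so only corner-like features survive thresholding; this is the same color-free information the benchmarked detectors rely on.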

  19. Enhanced operator perception through 3D vision and haptic feedback

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  20. A Vision in Jeopardy: Royal Navy Maritime Autonomous Systems (MAS)

    DTIC Science & Technology

    2017-03-31

    Chapter 6 will propose a new MAS vision for the RN. However, before doing so, a fresh look at the problem is required. Consensus of the Problem, Not the... continuous investment and assessment, the RN has failed to deliver any sustainable MAS operational capability. A vision for MAS finally materialized in 2014. Yet, the vision...

  1. Robot path planning using expert systems and machine vision

    NASA Astrophysics Data System (ADS)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret them with a knowledge-based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace, and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  2. A Computer-Based System Integrating Instruction and Information Retrieval: A Description of Some Methodological Considerations.

    ERIC Educational Resources Information Center

    Selig, Judith A.; And Others

    This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programed text on basic opthalmology--onto a computer for subsequent information retrieval and computer-assisted…

  3. Role of the School Psychologist: Orchestrating the Continuum of School-Wide Positive Behavior Support

    ERIC Educational Resources Information Center

    McGraw, Kelly; Koonce, Danel A.

    2011-01-01

    The "Blueprint for Training and Practice III" (Blueprint III; Ysseldyke et al., 2006) attempts to pinpoint the vision for the field of school psychology by highlighting school psychologists' role as consultants engaged in activities ranging from individual to systems-level change. Although the literature is replete with calls to restructure…

  4. A proposed intracortical visual prosthesis image processing system.

    PubMed

    Srivastava, N R; Troyk, P

    2005-01-01

    It has been a goal of neuroprosthesis researchers to develop a system that could provide artificial vision to a large population of individuals with blindness. Earlier researchers demonstrated that electrically stimulating the visual cortex can evoke spatial visual percepts, i.e., phosphenes. The goal of a visual cortex prosthesis is to stimulate the visual cortex and generate a visual perception in real time to restore vision. Even though the normal working of the visual system has not been completely understood, existing knowledge has inspired research groups to develop strategies for visual cortex prostheses that can help blind patients in their daily activities. A major limitation in this work is the development of an image processing system for converting an electronic image, as captured by a camera, into a real-time data stream for stimulation of the implanted electrodes. This paper proposes a system that will capture the image using a camera and use a dedicated hardware real-time image processor to deliver electrical pulses to intracortical electrodes. This system has to be flexible enough to adapt to individual patients and to various strategies of image reconstruction. Here we consider a preliminary architecture for this system.
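    The camera-to-electrode conversion stage can be sketched as a simple pipeline: normalize the frame, pool it to electrode resolution, quantize brightness, and scale to a stimulation amplitude. All parameters below (grid size, level count, the 60 uA ceiling) are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def frame_to_stimulus(frame, grid=(10, 10), levels=8, max_amp_ua=60.0):
    """Map a grayscale camera frame to per-electrode pulse amplitudes.
    Illustrative sketch only: grid, levels, and amplitude ceiling are
    made-up parameters, not the paper's hardware values."""
    f = frame.astype(float)
    f = (f - f.min()) / (f.max() - f.min() + 1e-9)       # normalize to [0, 1]
    gh, gw = grid
    h, w = f.shape
    f = f[: h - h % gh, : w - w % gw]                    # crop to grid multiple
    means = f.reshape(gh, f.shape[0] // gh,
                      gw, f.shape[1] // gw).mean(axis=(1, 3))
    q = np.round(means * (levels - 1)) / (levels - 1)    # quantize brightness
    return q * max_amp_ua                                # scale to safe amplitude
```

    A real-time image processor would run this per frame, with per-patient remapping tables standing in for the flexibility the paper calls for.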

  5. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase

    PubMed Central

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-01-01

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions often occur, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533

  6. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase.

    PubMed

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-09-10

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions often occur, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate.
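    The consistency check underlying RAIM can be sketched as a least-squares residual test, where each calibrated vision landmark simply contributes one extra row to the linearized geometry matrix. This is a generic snapshot-RAIM sketch, not the paper's VA-RAIM algorithm.

```python
import numpy as np

def raim_statistic(H, z):
    """Sum-of-squared-residuals RAIM test statistic.
    H: linearized geometry matrix (one row per satellite or, in a
    vision-aided setup, per landmark measurement); z: measurement-minus-
    prediction vector. The statistic is compared against a chi-square
    threshold set by the required false-alarm rate (threshold not shown)."""
    x, *_ = np.linalg.lstsq(H, z, rcond=None)   # least-squares nav solution
    r = z - H @ x                               # residual (parity) vector
    return float(r @ r)
```

    With only four unknowns, every added landmark row increases the redundancy available for fault detection, which is the mechanism by which vision aiding improves availability.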

  7. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    PubMed Central

    Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul

    2012-01-01

    Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. This question has been hindered by the lack of information on accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders using vision-based and non-vision-based sensor technologies, as well as combinations of these. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non-vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words “and”, “or”, and “not” were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 articles that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using a markerless vision-based sensor technology. We therefore believe that the information contained in this review paper will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548

  8. The genetics of normal and defective color vision.

    PubMed

    Neitz, Jay; Neitz, Maureen

    2011-04-13

    The contributions of genetics research to the science of normal and defective color vision over the previous few decades are reviewed, emphasizing the developments in the 25 years since the last anniversary issue of Vision Research. Understanding of the biology underlying color vision has been vaulted forward through the application of the tools of molecular genetics. For all their complexity, the biological processes responsible for color vision are more accessible than for many other neural systems. This is partly because of the wealth of genetic variations that affect color perception, both within and across species, and because components of the color vision system lend themselves to genetic manipulation. Mutations and rearrangements in the genes encoding the long, middle, and short wavelength sensitive cone pigments are responsible for color vision deficiencies, and mutations have been identified that affect the number of cone types, the absorption spectra of the pigments, the functionality and viability of the cones, and the topography of the cone mosaic. The addition of an opsin gene, as occurred in the evolution of primate color vision and as has been done in experimental animals, can produce expanded color vision capacities, and this has provided insight into the underlying neural circuitry. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Acquired color vision deficiency.

    PubMed

    Simunovic, Matthew P

    2016-01-01

    Acquired color vision deficiency occurs as the result of ocular, neurologic, or systemic disease. A wide array of conditions may affect color vision, ranging from diseases of the ocular media through to pathology of the visual cortex. Traditionally, acquired color vision deficiency is considered a separate entity from congenital color vision deficiency, although emerging clinical and molecular genetic data would suggest a degree of overlap. We review the pathophysiology of acquired color vision deficiency, the data on its prevalence, theories for the preponderance of acquired S-mechanism (or tritan) deficiency, and discuss tests of color vision. We also briefly review the types of color vision deficiencies encountered in ocular disease, with an emphasis placed on larger or more detailed clinical investigations. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Obergfell, Klaus

    1991-01-01

    The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson, et al 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
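    The static positioning loop described above can be sketched for a planar two-link arm. The link lengths, gain, and damping below are illustrative assumptions, and the inner joint controller is modelled as perfect tracking rather than the thesis's actual hardware loop.

```python
import numpy as np

L1, L2 = 1.0, 0.8          # illustrative link lengths (not from the thesis)

def fk(q):
    """Forward kinematics: end-point position of a planar two-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Analytic end-point Jacobian of the two-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def look_and_move(q, target, iters=200, gain=0.5, damping=1e-3):
    """Outer vision loop: map the measured tip error through the Jacobian
    (damped least squares) to a joint correction, then hand the new joint
    reference to the inner joint loop, modelled here as perfect tracking."""
    for _ in range(iters):
        err = target - fk(q)                   # tip error from vision
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), gain * err)
        q = q + dq                             # new joint reference
    return q
```

    The slow, variable vision sampling rate is what motivates this two-loop structure: the outer correction can run at camera rate while the joint loop runs fast.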

  11. The EnVision++ system: a new immunohistochemical method for diagnostics and research. Critical comparison with the APAAP, ChemMate, CSA, LABC, and SABC techniques.

    PubMed

    Sabattini, E; Bisgaard, K; Ascani, S; Poggi, S; Piccioli, M; Ceccarelli, C; Pieri, F; Fraternali-Orcioni, G; Pileri, S A

    1998-07-01

    To assess a newly developed immunohistochemical detection system, the EnVision++. A large series of differently processed normal and pathological samples and 53 relevant monoclonal antibodies were chosen. A chessboard titration assay was used to compare the results provided by the EnVision++ system with those of the APAAP, CSA, LSAB, SABC, and ChemMate methods, when applied either manually or in a TechMate 500 immunostainer. With the vast majority of the antibodies, EnVision++ allowed two- to fivefold higher dilutions than the APAAP, LSAB, SABC, and ChemMate techniques, the staining intensity and percentage of expected positive cells being the same. With some critical antibodies (such as the anti-CD5), it turned out to be superior in that it achieved consistently reproducible results with differently fixed or overfixed samples. Only the CSA method, which includes tyramide based enhancement, allowed the same dilutions as the EnVision++ system, and in one instance (with the anti-cyclin D1 antibody) represented the gold standard. The EnVision++ is an easy to use system, which avoids the possibility of disturbing endogenous biotin and lowers the cost per test by increasing the dilutions of the primary antibodies. Being a two step procedure, it reduces both the assay time and the workload.

  12. The EnVision++ system: a new immunohistochemical method for diagnostics and research. Critical comparison with the APAAP, ChemMate, CSA, LABC, and SABC techniques.

    PubMed Central

    Sabattini, E; Bisgaard, K; Ascani, S; Poggi, S; Piccioli, M; Ceccarelli, C; Pieri, F; Fraternali-Orcioni, G; Pileri, S A

    1998-01-01

    AIM: To assess a newly developed immunohistochemical detection system, the EnVision++. METHODS: A large series of differently processed normal and pathological samples and 53 relevant monoclonal antibodies were chosen. A chessboard titration assay was used to compare the results provided by the EnVision++ system with those of the APAAP, CSA, LSAB, SABC, and ChemMate methods, when applied either manually or in a TechMate 500 immunostainer. RESULTS: With the vast majority of the antibodies, EnVision++ allowed two- to fivefold higher dilutions than the APAAP, LSAB, SABC, and ChemMate techniques, the staining intensity and percentage of expected positive cells being the same. With some critical antibodies (such as the anti-CD5), it turned out to be superior in that it achieved consistently reproducible results with differently fixed or overfixed samples. Only the CSA method, which includes tyramide based enhancement, allowed the same dilutions as the EnVision++ system, and in one instance (with the anti-cyclin D1 antibody) represented the gold standard. CONCLUSIONS: The EnVision++ is an easy to use system, which avoids the possibility of disturbing endogenous biotin and lowers the cost per test by increasing the dilutions of the primary antibodies. Being a two step procedure, it reduces both the assay time and the workload. Images PMID:9797726

  13. A lightweight, inexpensive robotic system for insect vision.

    PubMed

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress on understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
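    The optic-flow computation at the heart of such systems can be illustrated with a single global Lucas-Kanade estimate over two frames. This is a toy stand-in for insect-inspired flow processing, not the paper's model.

```python
import numpy as np

def global_flow(f0, f1):
    """Single translational optic-flow estimate (u, v) in pixels/frame,
    via least squares on the brightness-constancy equation
    It + u*Ix + v*Iy = 0, summed over the whole frame."""
    iy, ix = np.gradient(f0.astype(float))     # spatial gradients
    it = f1.astype(float) - f0.astype(float)   # temporal gradient
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(A, b)
```

    Insect-like processing would compute this locally per image patch to get a flow field; the global version above shows the core least-squares step.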

  14. Implementation of a robotic flexible assembly system

    NASA Technical Reports Server (NTRS)

    Benton, Ronald C.

    1987-01-01

    As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.

  15. Prototyping machine vision software on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Karantalis, George; Batchelor, Bruce G.

    1998-10-01

    Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several of the previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised and these run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software, operating over the Internet, or a company-wide Intranet. Thus, there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision systems designers.

  16. Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    NASA Astrophysics Data System (ADS)

    Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas

    2018-01-01

    The University of Florida is taking a multidisciplinary approach to fuse the data between 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
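    The inverse-square relationship the fusion relies on can be sketched directly: given detector positions from the vision system and measured count rates, a brute-force search recovers the source position that best explains the rates. This is an illustrative stand-in, not the paper's calibration algorithm; the detector layout and source strength below are made up.

```python
import itertools
import numpy as np

def predicted_rate(intensity, src, detectors):
    """Inverse-square law: count rate ~ intensity / distance^2
    (epsilon guards against a zero source-detector distance)."""
    d2 = np.sum((np.asarray(detectors, float) - np.asarray(src, float)) ** 2,
                axis=-1)
    return intensity / np.maximum(d2, 1e-9)

def locate_source(detectors, rates, intensity, candidates):
    """Pick the candidate source position whose inverse-square rate
    predictions best match the measured rates (brute-force search)."""
    errs = [np.sum((predicted_rate(intensity, p, detectors) - rates) ** 2)
            for p in candidates]
    return np.asarray(candidates[int(np.argmin(errs))])
```

    In a tracking setting the same mismatch score would be minimized over time as the source moves, with the 3D vision data supplying the detector-to-candidate distances.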

  17. Multiscale Enaction Model (MEM): the case of complexity and “context-sensitivity” in vision

    PubMed Central

    Laurent, Éric

    2014-01-01

    I review the data on human visual perception that reveal the critical role played by non-visual contextual factors influencing visual activity. The global perspective that progressively emerges reveals that vision is sensitive to multiple couplings with other systems whose nature and levels of abstraction in science are highly variable. Contrary to some views where vision is immersed in modular hard-wired modules, rather independent from higher-level or other non-cognitive processes, converging data gathered in this article suggest that visual perception can be theorized in the larger context of biological, physical, and social systems with which it is coupled, and through which it is enacted. Therefore, any attempt to model complexity and multiscale couplings, or to develop a complex synthesis in the fields of mind, brain, and behavior, shall involve a systematic empirical study of both connectedness between systems or subsystems, and the embodied, multiscale and flexible teleology of subsystems. The conceptual model (Multiscale Enaction Model [MEM]) that is introduced in this paper finally relates empirical evidence gathered from psychology to biocomputational data concerning the human brain. Both psychological and biocomputational descriptions of MEM are proposed in order to help fill in the gap between scales of scientific analysis and to provide an account for both the autopoiesis-driven search for information, and emerging perception. PMID:25566115

  18. Test of Lander Vision System for Mars 2020

    NASA Image and Video Library

    2016-10-04

    A prototype of the Lander Vision System for NASA Mars 2020 mission was tested in this Dec. 9, 2014, flight of a Masten Space Systems Xombie vehicle at Mojave Air and Space Port in California. http://photojournal.jpl.nasa.gov/catalog/PIA20848

  19. Traumatic brain injury and vestibulo-ocular function: current challenges and future prospects

    PubMed Central

    Wallace, Bridgett; Lifshitz, Jonathan

    2016-01-01

    Normal function of the vestibulo-ocular reflex (VOR) coordinates eye movement with head movement, in order to provide clear vision during motion and maintain balance. VOR is generated within the semicircular canals of the inner ear to elicit compensatory eye movements, which maintain stability of images on the fovea during brief, rapid head motion, otherwise known as gaze stability. Normal VOR function is necessary in carrying out activities of daily living (eg, walking and riding in a car) and is of particular importance in higher demand activities (eg, sports-related activities). Disruption or damage in the VOR can result in symptoms such as movement-related dizziness, blurry vision, difficulty maintaining balance with head movements, and even nausea. Dizziness is one of the most common symptoms following traumatic brain injury (TBI) and is considered a risk factor for a prolonged recovery. Assessment of the vestibular system is of particular importance following TBI, in conjunction with oculomotor control, due to the intrinsic neural circuitry that exists between the ocular and vestibular systems. The purpose of this article is to review the physiology of the VOR and the visual-vestibular symptoms associated with TBI and to discuss assessment and treatment guidelines for TBI. Current challenges and future prospects will also be addressed. PMID:28539811

  20. Driver's Enhanced Vision System (DEVS)

    DOT National Transportation Integrated Search

    1996-12-23

    This advisory circular (AC) contains performance standards, specifications, and recommendations for the Driver's Enhanced Vision System (DEVS). The FAA recommends the use of the guidance in this publication for the design and installation of DEVS e...

  1. Vertically integrated photonic multichip module architecture for vision applications

    NASA Astrophysics Data System (ADS)

    Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong

    2000-05-01

    The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia-based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.

  2. Design of direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging.

    PubMed

    Wang, Lei; Shao, Zhengzheng; Tang, Wusheng; Liu, Jiying; Nie, Qianwen; Jia, Hui; Dai, Suian; Zhu, Jubo; Li, Xiujian

    2017-10-20

    A direct-vision Amici prism is a desirable dispersion element for spectrometers and spectral imaging systems. In this paper, we focus on designing a direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging systems. We illustrate a designed structure, E48R/N-SF4/E48R, from which we obtain 13 deg of dispersion across the visible spectrum, equivalent to a 700 line-pairs/mm grating. We construct a simulated spectral imaging system with the designed direct-vision cyclo-olefin-polymer double Amici prism in optical design software and compare its imaging performance to that of a glass double Amici prism in the same system. The RMS spot-size results demonstrate that the plastic prism can serve as well as its glass competitor and offers better spectral resolution.

  3. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
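A minimal sketch of the least-squares correlation matching step described above, independent of the patented system (the window size, search range, and synthetic scanline are illustrative assumptions, not details of the invention):

```python
import numpy as np

def scanline_disparity(left: np.ndarray, right: np.ndarray,
                       win: int = 3, max_d: int = 4) -> np.ndarray:
    """For each pixel x on a scanline, pick the disparity d that minimizes
    the sum of squared differences between a window around left[x] and a
    window around right[x - d] (toy least-squares correlation matching)."""
    half = win // 2
    disp = np.zeros(len(left), dtype=int)
    for x in range(half + max_d, len(left) - half):
        lw = left[x - half:x + half + 1]
        costs = [np.sum((lw - right[x - d - half:x - d + half + 1]) ** 2)
                 for d in range(max_d + 1)]
        disp[x] = int(np.argmin(costs))
    return disp

# Synthetic scanline pair: the right view equals the left shifted by 2
# pixels, so the recovered disparity in the interior should be 2 everywhere.
rng = np.random.default_rng(0)
left = rng.random(64)
right = np.roll(left, -2)   # right[x] == left[x + 2]
print(scanline_disparity(left, right)[10:20])  # [2 2 2 2 2 2 2 2 2 2]
```

The real system applies this kind of matching to bandpass-filtered (Laplacian) pyramid levels rather than raw intensities, which makes the correlation robust to brightness differences between the two cameras.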

  4. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.

  5. Visual impairment, visual functioning, and quality of life assessments in patients with glaucoma.

    PubMed Central

    Parrish, R K

    1996-01-01

    BACKGROUND/PURPOSE: To determine the relation between visual impairment, visual functioning, and the global quality of life in patients with glaucoma. METHODS: Visual impairment, defined with the American Medical Association Guides to the Evaluation of Permanent Impairment; visual functioning, measured with the VF-14 and the Field Test Version of the National Eye Institute-Visual Functioning Questionnaire (NEI-VFQ); and the global quality of life, assessed with the Medical Outcomes Study 36-Item Short Form Health Survey (SF-36), were determined in 147 consecutive patients with glaucoma. RESULTS: None of the SF-36 domains demonstrated more than a weak correlation with visual impairment. The VF-14 scores were moderately correlated with visual impairment. Of the twelve NEI-VFQ scales, distance activities and vision-specific dependency were moderately correlated with visual field impairment; vision-specific social functioning, near activities, vision-specific role difficulties, general vision, vision-specific mental health, color vision, and driving were modestly correlated; ocular pain was weakly correlated; and two were not significantly correlated. Correcting for visual acuity weakened the strength of the correlation coefficients. CONCLUSIONS: The SF-36 is unlikely to be useful in determining visual impairment in patients with glaucoma. Based on the moderate correlation between visual field impairment and the VF-14 score, this questionnaire may be generalizable to patients with glaucoma. Several of the NEI-VFQ scales correlate with visual field impairment scores in patients with a wide range of glaucomatous damage. PMID:8981717

  6. Machine vision for digital microfluidics

    NASA Astrophysics Data System (ADS)

    Shin, Yong-Jun; Lee, Jeong-Bong

    2010-01-01

    Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.

  7. m-Health: Lessons Learned by m-Experiences

    PubMed Central

    Bravo, José; Hervás, Ramón; González, Iván

    2018-01-01

    m-Health is an emerging area that is transforming how people take part in the control of their wellness condition. This vision is changing traditional health processes by discharging hospitals from the care of people. Important advantages of continuous monitoring can be reached but, in order to transform this vision into a reality, some factors need to be addressed. m-Health applications should be shared by patients and hospital staff to perform proper supervised health monitoring. Furthermore, the uses of smartphones for health purposes should be transformed to achieve the objectives of this vision. In this work, we analyze the m-Health features and lessons learned by the experiences of systems developed by MAmI Research Lab. We have focused on three main aspects: m-interaction, use of frameworks, and physical activity recognition. For the analysis of the previous aspects, we have developed some approaches to: (1) efficiently manage patient medical records for nursing and healthcare environments by introducing the NFC technology; (2) a framework to monitor vital signs, obesity and overweight levels, rehabilitation and frailty aspects by means of accelerometer-enabled smartphones and, finally; (3) a solution to analyze daily gait activity in the elderly, carrying a single inertial wearable close to the first thoracic vertebra. PMID:29762507

  8. A computer vision system for the recognition of trees in aerial photographs

    NASA Technical Reports Server (NTRS)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe create the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to multiple interpretation results for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  9. Eye vision system using programmable micro-optics and micro-electronics

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.; Amin, M. Junaid; Riza, Mehdi N.

    2014-02-01

    Proposed is a novel eye vision system that combines advanced micro-optic and micro-electronic technologies, including programmable micro-optic devices, pico-projectors, Radio Frequency (RF) and optical wireless communication and control links, energy harvesting and storage devices, and remote wireless energy transfer capabilities. This portable, lightweight system can measure eye refractive powers, optimize light conditions for the eye under test, conduct color-blindness tests, and implement eye strain relief and eye muscle exercises via time-sequenced imaging. Described are the basic design of the proposed system and its first-stage experimental results for spherical refractive error correction.

  10. Low Vision Enhancement System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  11. Two-Year Community: Implementing Vision and Change in a Community College Classroom

    ERIC Educational Resources Information Center

    Lysne, Steven; Miller, Brant

    2015-01-01

    The purpose of this article is to describe a model for teaching introductory biology coursework within the Vision and Change framework (American Association for the Advancement of Science, 2011). The intent of the new model is to transform instruction by adopting an active, student-centered, and inquiry-based pedagogy consistent with Vision and…

  12. The Lifestyles of Blind, Low Vision, and Sighted Youths: A Quantitative Comparison.

    ERIC Educational Resources Information Center

    Wolffe, K.; Sacks, S. Z.

    1997-01-01

    Analysis of interviews and time-diary protocols with 48 students (16 blind, 16 low-vision, and 16 sighted), ages 15-21, and their parents focused on four lifestyle areas: academic involvement and performance, daily living and personal care activities, recreation and leisure activities, and work and vocational experiences. Similarities and…

  13. Three spectrally distinct photoreceptors in diurnal and nocturnal Australian ants.

    PubMed

    Ogawa, Yuri; Falkowski, Marcin; Narendra, Ajay; Zeil, Jochen; Hemmi, Jan M

    2015-06-07

    Ants are thought to be special among Hymenopterans in having only dichromatic colour vision based on two spectrally distinct photoreceptors. Many ants are highly visual animals, however, and use vision extensively for navigation. We show here that two congeneric day- and night-active Australian ants have three spectrally distinct photoreceptor types, potentially supporting trichromatic colour vision. Electroretinogram recordings show the presence of three spectral sensitivities with peaks (λmax) at 370, 450 and 550 nm in the night-active Myrmecia vindex and peaks at 370, 470 and 510 nm in the day-active Myrmecia croslandi. Intracellular electrophysiology on individual photoreceptors confirmed that the night-active M. vindex has three spectral sensitivities with peaks (λmax) at 370, 430 and 550 nm. A large number of the intracellular recordings in the night-active M. vindex show unusually broad-band spectral sensitivities, suggesting that photoreceptors may be coupled. Spectral measurements at different temporal frequencies revealed that the ultraviolet receptors are comparatively slow. We discuss the adaptive significance and the probability of trichromacy in Myrmecia ants in the context of dim light vision and visual navigation. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  14. Ranibizumab Injection

    MedlinePlus

    ... activities). It is also used to treat macular edema after retinal vein occlusion (an eye disease caused ... to blurry vision and vision loss), diabetic macular edema (an eye disease caused by diabetes that can ...

  15. Aflibercept Injection

    MedlinePlus

    ... activities). It is also used to treat macular edema after retinal vein occlusion (an eye disease caused ... to blurry vision and vision loss), diabetic macular edema (an eye disease caused by diabetes that can ...

  16. Correlation based system to assess the completeness and correctness of cognitive stimulation activities of elders

    NASA Astrophysics Data System (ADS)

    González-Fraga, J. A.; Morán, A. L.; Meza-Kubo, V.; Tentori, M.; Santiago, E.

    2009-08-01

    During a cognitive stimulation session in which elders with cognitive decline perform stimulation activities, such as solving puzzles, we observed that they require constant supervision and support from their caregivers, and that caregivers must be able to monitor the stimulation activity of more than one patient at a time. In this paper, aiming to support the caregiver, we developed a vision-based system using a Phase-SDF filter that generates a composite reference image, which is correlated with a captured wooden-puzzle image. The output correlation value makes it possible to automatically verify progress on the puzzle-solving task and to assess its completeness and correctness.
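The paper's Phase-SDF composite filter is more sophisticated, but the core idea of thresholding a correlation score to judge task completeness can be sketched with plain normalized cross-correlation (the threshold value and the synthetic images below are hypothetical):

```python
import numpy as np

def ncc(image: np.ndarray, reference: np.ndarray) -> float:
    """Normalized cross-correlation between two same-sized images:
    1.0 means a perfect match; values near 0 mean no similarity."""
    a = image - image.mean()
    b = reference - reference.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def puzzle_complete(image, reference, threshold=0.9):
    """Declare the task complete when the correlation score clears a
    (hypothetical) threshold, analogous to checking the filter's peak."""
    return ncc(image, reference) >= threshold

rng = np.random.default_rng(1)
solved = rng.random((32, 32))                   # reference: finished puzzle
partial = solved + rng.random((32, 32))         # heavily perturbed board
print(puzzle_complete(solved, solved))   # True
print(puzzle_complete(partial, solved))  # False
```

A composite filter such as Phase-SDF extends this by correlating against several reference views at once, so a single correlation output can cover multiple valid puzzle states.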

  17. Automated intelligent video surveillance system for ships

    NASA Astrophysics Data System (ADS)

    Wei, Hai; Nguyen, Hieu; Ramu, Prakash; Raju, Chaitanya; Liu, Xiaoqing; Yadegar, Jacob

    2009-05-01

    To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems able to detect, identify, track, and alert the crew to small watercraft that might pursue malicious intentions, while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has shown accurate and effective threat detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save manpower while increasing the accuracy of detection and identification of asymmetric attacks for ship protection.

  18. ODIS the under-vehicle inspection robot: development status update

    NASA Astrophysics Data System (ADS)

    Freiburger, Lonnie A.; Smuda, William; Karlsen, Robert E.; Lakshmanan, Sridhar; Ma, Bing

    2003-09-01

    Unmanned ground vehicle (UGV) technology can be used in a number of ways to assist in counter-terrorism activities. Robots can be employed for a host of terrorism deterrence and detection applications. As reported in last year's Aerosense conference, the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) and Utah State University (USU) have developed a tele-operated robot called ODIS (Omnidirectional Inspection System) that is particularly effective in performing under-vehicle inspections at security checkpoints. ODIS' continuing development for this task is heavily influenced by feedback received from soldiers and civilian law enforcement personnel using ODIS prototypes in an operational environment. Our goal is to convince civilian law enforcement and military police to replace the traditional "mirror on a stick" system of looking under cars for bombs and contraband with ODIS. This paper reports our efforts over the past year in optimizing ODIS for the visual inspection task. Of particular concern is the design of the vision system. This paper documents details on the various issues relating to ODIS' vision system: sensor, lighting, image processing, and display.

  19. Art, Illusion and the Visual System.

    ERIC Educational Resources Information Center

    Livingstone, Margaret S.

    1988-01-01

    Describes the three part system of human vision. Explores the anatomical arrangement of the vision system from the eyes to the brain. Traces the path of various visual signals to their interpretations by the brain. Discusses human visual perception and its implications in art and design. (CW)

  20. Industry's tireless eyes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-08-01

    This article reports that there are literally hundreds of machine vision systems from which to choose. They range in cost from $10,000 to $1,000,000. Most have been designed for specific applications; the same systems, if used for a different application, may fail dismally. How can you avoid wasting money on inferior, useless, or nonexpandable systems? A good reference is the Automated Vision Association in Ann Arbor, Mich., a trade group comprised of North American machine vision manufacturers. Reputable suppliers caution users to do their homework before making an investment. Important considerations include comprehensive details on the objects to be viewed - that is, quantity, shape, dimension, size, and configuration details; lighting characteristics and variations; and component orientation details. Then, what do you expect the system to do - inspect, locate components, aid in robotic vision? Other criteria include system speed and related accuracy and reliability. What are the projected benefits and system paybacks? Examine primarily paybacks associated with scrap and rework reduction as well as reduced warranty costs.

  1. Effects of V4c-ICL Implantation on Myopic Patients' Vision-Related Daily Activities

    PubMed Central

    Linghu, Shaorong; Pan, Le; Shi, Rong

    2016-01-01

    The new-type implantable Collamer lens with a central hole (V4c-ICL) is widely used to treat myopia. However, halos occur in some patients after surgery. The aim is to evaluate the effect of V4c-ICL implantation on vision-related daily activities. This retrospective study included 42 patients. Uncorrected visual acuity (UCVA), best corrected visual acuity (BCVA), intraocular pressure (IOP), endothelial cell density (ECD), and vault were recorded, and vision-related daily activities were evaluated at 3 months after operation. The average spherical equivalent was −0.12 ± 0.33 D at 3 months after operation. UCVA equal to or better than preoperative BCVA occurred in 98% of eyes. The average BCVA at 3 months after operation was −0.03 ± 0.07 LogMAR, which was significantly better than preoperative BCVA (0.08 ± 0.10 LogMAR) (P = 0.029). Apart from one patient (2.4%) who had difficulty reading computer screens, all patients had satisfactory or very satisfactory results. During the early postoperative period, halos occurred in 23 patients (54.8%). However, there were no significant differences in the scores of visual functions between patients with and without halos (P > 0.05). Patients were very satisfied with their vision-related daily activities at 3 months after operation. The central hole of V4c-ICL does not affect patients' vision-related daily activities. PMID:27965890

  2. Processing time of addition or withdrawal of single or combined balance-stabilizing haptic and visual information

    PubMed Central

    Honeine, Jean-Louis; Crisafulli, Oscar; Sozzi, Stefania

    2015-01-01

    We investigated the integration time of haptic and visual input and their interaction during stance stabilization. Eleven subjects performed four tandem-stance conditions (60 trials each). Vision, touch, and both vision and touch were added and withdrawn. Furthermore, vision was replaced with touch and vice versa. Body sway, tibialis anterior, and peroneus longus activity were measured. Following addition or withdrawal of vision or touch, an integration time period elapsed before the earliest changes in sway were observed. Thereafter, sway varied exponentially to a new steady-state while reweighting occurred. Latencies of sway changes on sensory addition ranged from 0.6 to 1.5 s across subjects, consistently longer for touch than vision, and were regularly preceded by changes in muscle activity. Addition of vision and touch simultaneously shortened the latencies with respect to vision or touch separately, suggesting cooperation between sensory modalities. Latencies following withdrawal of vision or touch or both simultaneously were shorter than following addition. When vision was replaced with touch or vice versa, adding one modality did not interfere with the effect of withdrawal of the other, suggesting that integration of withdrawal and addition were performed in parallel. The time course of the reweighting process to reach the new steady-state was also shorter on withdrawal than addition. The effects of different sensory inputs on posture stabilization illustrate the operation of a time-consuming, possibly supraspinal process that integrates and fuses modalities for accurate balance control. This study also shows the facilitatory interaction of visual and haptic inputs in integration and reweighting of stance-stabilizing inputs. PMID:26334013

  3. OLED study for military applications

    NASA Astrophysics Data System (ADS)

    Barre, F.; Chiquard, A.; Faure, S.; Landais, L.; Patry, P.

    2005-07-01

    The presentation deals with some applications of OLED displays in military optronic systems planned by SAGEM DS (Defence and Security). SAGEM DS, one of the largest groups in the defence and security market, is currently investigating OLED technologies for military programs. This technology is close to being chosen for optronic equipment such as future infantry night vision goggles, rifle sights, or, more generally, vision enhancement systems. Most of those applications require a micro-display with an active matrix size below 1". Some others, such as ruggedized flat displays, need a larger active matrix size (1.5" to 15"). SAGEM DS takes advantage of this flat, high-luminance, emissive technology in highly integrated systems. In any case, many requirements have to be fulfilled: ultra-low power consumption, wide viewing angle, good pixel-to-pixel uniformity, and satisfactory behaviour in extreme environmental conditions. Accurate measurements have been performed at SAGEM DS on several OLED micro-displays and will be detailed: luminance (over 2000 cd/m2 achieved), area uniformity and pixel-to-pixel uniformity, robustness at low and high temperature (-40°C to +60°C), and lifetime. These results, which refer to military requirements, provide valuable feedback representative of state-of-the-art OLED performance.

  4. Functional vision in children with perinatal brain damage.

    PubMed

    Alimović, Sonja; Jurić, Nikolina; Bošnjak, Vlatka Mejaški

    2014-09-01

    Many authors have discussed the effects of visual stimulation on visual functions, but there is no research about its effects on using vision in everyday activities (i.e. functional vision). Children with perinatal brain damage can develop cerebral visual impairment with preserved visual functions (e.g. visual acuity, contrast sensitivity) but poor functional vision. Our aim was to discuss the importance of assessing and stimulating functional vision in children with perinatal brain damage. We assessed visual functions (grating visual acuity, contrast sensitivity) and functional vision (the ability to maintain visual attention and to use vision in communication) in 99 children with perinatal brain damage and visual impairment. All children were assessed before and after the visual stimulation program. Our first assessment results showed that children with perinatal brain damage had significantly more problems in functional vision than in basic visual functions. During the visual stimulation program both variables of functional vision and contrast sensitivity improved significantly, while grating acuity improved only in 2.7% of children. We also found that improvement of visual attention significantly correlated with improvement on all other functions describing vision. Therefore, functional vision assessment, especially assessment of visual attention, is indispensable in early monitoring of children with perinatal brain damage.

  5. Identifying the Computational Requirements of an Integrated Top-Down-Bottom-Up Model for Overt Visual Attention within an Active Vision System

    PubMed Central

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as ‘active vision’, to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of ‘where’ and ‘what’ information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate ‘active’ visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a ‘priority map’. PMID:23437044
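Requirements 5 and 6 lend themselves to a compact sketch. The toy code below (array sizes, values, and the threshold are invented for illustration, not taken from the study) modulates a bottom-up salience map by a task-relevance ratio of excitation to inhibition, then elicits a saccade only when the resulting priority clears a threshold:

```python
import numpy as np

def priority_map(bottom_up: np.ndarray, excitation: np.ndarray,
                 inhibition: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Toy 'priority map': bottom-up salience modulated by task relevance
    expressed as a ratio of excitation to inhibition (requirement 6)."""
    return bottom_up * (excitation / (inhibition + eps))

def saccade_target(pmap: np.ndarray, threshold: float):
    """Threshold function eliciting a saccade (requirement 5): return the
    peak location if it clears the threshold, otherwise no saccade."""
    if pmap.max() < threshold:
        return None
    return tuple(int(i) for i in np.unravel_index(np.argmax(pmap), pmap.shape))

salience = np.array([[0.1, 0.2, 0.1],
                     [0.1, 0.1, 0.8],
                     [0.1, 0.1, 0.1]])
excite = np.ones((3, 3))
inhibit = np.ones((3, 3))  # uniform task relevance for this example
print(saccade_target(priority_map(salience, excite, inhibit), 0.5))  # (1, 2)
```

A fuller implementation would also cover the other requirements the study identifies, e.g. converting the map from retinotopic to egocentric coordinates and applying spatial-memory-based inhibition of return before selecting the peak.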

  6. Data storage for managing the health enterprise and achieving business continuity.

    PubMed

    Hinegardner, Sam

    2003-01-01

    As organizations move away from a silo mentality to a vision of enterprise-level information, more healthcare IT departments are rejecting the idea of information storage as an isolated, system-by-system solution. IT executives want storage solutions that act as a strategic element of an IT infrastructure, centralizing storage management activities to effectively reduce operational overhead and costs. This article focuses on three areas of enterprise storage: tape, disk, and disaster avoidance.

  7. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  8. A robotic vision system to measure tree traits

    USDA-ARS?s Scientific Manuscript database

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  9. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation to the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  10. Extravehicular Activity (EVA) 101: Constellation EVA Systems

    NASA Technical Reports Server (NTRS)

    Jordan, Nicole C.

    2007-01-01

    A viewgraph presentation on Extravehicular Activity (EVA) Systems is shown. The topics include: 1) Why do we need space suits? 2) Protection From the Environment; 3) Primary Life Support System (PLSS); 4) Thermal Control; 5) Communications; 6) Helmet and Extravehicular Visor Assy; 7) Hard Upper Torso (HUT) and Arm Assy; 8) Display and Controls Module (DCM); 9) Gloves; 10) Lower Torso Assembly (LTA); 11) What Size Do You Need?; 12) Boot and Sizing Insert; 13) Boot Heel Clip and Foot Restraint; 14) Advanced and Crew Escape Suit; 15) Nominal & Off-Nominal Landing; 16) Gemini Program (mid-1960s); 17) Apollo EVA on Service Module; 18) A Bold Vision for Space Exploration, Authorized by Congress; 19) EVA System Missions; 20) Configurations; 21) Reduced Gravity Program; and 22) Other Opportunities.

  11. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  12. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
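    The paper's exact refinement algorithms are not reproduced in the abstract; the sketch below shows the generic gradient-based (first-order Taylor) subpixel shift estimate that such methods build on, applied to an illustrative synthetic blob.

```python
import numpy as np

def taylor_shift(ref, cur):
    """Estimate a small translation d between two patches from a
    first-order Taylor expansion: cur(x) ~= ref(x - d) ~= ref(x) - d.grad(ref),
    solved in the least-squares sense over all pixels."""
    gy, gx = np.gradient(ref.astype(float))   # gradients along y then x
    A = np.column_stack([gx.ravel(), gy.ravel()])
    diff = (cur - ref).astype(float).ravel()
    s, *_ = np.linalg.lstsq(A, diff, rcond=None)
    return -s[0], -s[1]                       # negate: diff ~= -d . grad(ref)

# synthetic Gaussian blob moved by a known subpixel amount
y, x = np.mgrid[0:32, 0:32]

def blob(cx, cy):
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 20.0)

ref, cur = blob(16.0, 16.0), blob(16.3, 15.8)
dx, dy = taylor_shift(ref, cur)
print(round(dx, 1), round(dy, 1))             # ~0.3 -0.2
```

    A single linearized solve like this is fast because it avoids the image upsampling that conventional cross-correlation refinement requires, which is consistent with the speedups the abstract reports.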

  13. The Recovery of Optical Quality after Laser Vision Correction

    PubMed Central

    Jung, Hyeong-Gi

    2013-01-01

    Purpose To evaluate the optical quality after laser in situ keratomileusis (LASIK) or serial photorefractive keratectomy (PRK) using a double-pass system and to follow the recovery of optical quality after laser vision correction. Methods This study measured the visual acuity, manifest refraction and optical quality before and one day, one week, one month, and three months after laser vision correction. Optical quality parameters including the modulation transfer function, Strehl ratio and intraocular scattering were evaluated with a double-pass system. Results This study included 51 eyes that underwent LASIK and 57 that underwent PRK. The optical quality three months post-surgery did not differ significantly between these laser vision correction techniques. Furthermore, the preoperative and postoperative optical quality did not differ significantly in either group. Optical quality recovered within one week after LASIK but took between one and three months to recover after PRK. The optical quality of patients in the PRK group seemed to recover slightly more slowly than their uncorrected distance visual acuity. Conclusions Optical quality recovers to the preoperative level after laser vision correction, so laser vision correction is efficacious for correcting myopia. The double-pass system is a useful tool for clinical assessment of optical quality. PMID:23908570

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Y; Rahimi, A; Sawant, A

    Purpose: Active breathing control (ABC) has been used to reduce treatment margin due to respiratory organ motion by enforcing temporary breath-holds. However, in practice, even if the ABC device indicates constant lung volume during breath-hold, the patient may still exhibit minor chest motion. Consequently, therapists are given a false sense of security that the patient is immobilized. This study aims at quantifying such motion during ABC breath-holds by monitoring the patient chest motion using a surface photogrammetry system, VisionRT. Methods: A female patient with breast cancer was selected to evaluate chest motion during ABC breath-holds. During the entire course of treatment, the patient’s chest surface was monitored by a surface photogrammetry system, VisionRT. Specifically, a user-defined region-of-interest (ROI) on the chest surface was selected for the system to track at a rate of ∼3Hz. The surface motion was estimated by rigid image registration between the current ROI image captured and a reference image. The translational and rotational displacements computed were saved in a log file. Results: A total of 20 fractions of radiation treatment were monitored by VisionRT. After removing noisy data, we obtained chest motion of 79 breath-hold sessions. Mean chest motion in AP direction during breath-holds is 1.31mm with 0.62mm standard deviation. Of the 79 sessions, the patient exhibited motion ranging from 0–1 mm (30 sessions), 1–2 mm (37 sessions), 2–3 mm (11 sessions) and >3 mm (1 session). Conclusion: Contrary to popular assumptions, the patient is not completely still during ABC breath-hold sessions. In this particular case studied, the patient exhibited chest motion over 2mm in 14 out of 79 breath-holds. Underestimating treatment margin for radiation therapy with ABC could reduce treatment effectiveness due to geometric miss or overdose of critical organs.
The senior author receives research funding from NIH, VisionRT, Varian Medical Systems and Elekta.

  15. Understanding of and applications for robot vision guidance at KSC

    NASA Technical Reports Server (NTRS)

    Shawaga, Lawrence M.

    1988-01-01

    The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.

  16. A laser-based vision system for weld quality inspection.

    PubMed

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence, positions, and sizes of weld defects can be accurately identified, and non-destructive weld quality inspection can thus be achieved.
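    The triangulation principle mentioned above can be illustrated with a minimal geometry: if the laser beam travels parallel to the camera's optical axis at a known baseline, a spot at depth z images at pixel offset u = f·b/z. The focal length, baseline, and pixel offset below are hypothetical numbers, not the paper's calibration.

```python
def depth_from_triangulation(u_px, focal_px, baseline_m):
    """Laser beam parallel to the optical axis at baseline b:
    the spot at depth z images at offset u = f*b/z, so z = f*b/u."""
    return focal_px * baseline_m / u_px

# f = 800 px, b = 0.1 m, spot imaged 40 px off-axis -> z = 2.0 m
print(depth_from_triangulation(40.0, 800.0, 0.1))   # 2.0
```

    Sweeping a laser stripe and applying this per-pixel relation is what turns the acquired images into the 3D weld profiles the abstract describes.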

  17. A Laser-Based Vision System for Weld Quality Inspection

    PubMed Central

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence, positions, and sizes of weld defects can be accurately identified, and non-destructive weld quality inspection can thus be achieved. PMID:22344308

  18. Development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification

    NASA Astrophysics Data System (ADS)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregating computer vision and radio-frequency identification to determine the current storage area. It describes the design of hardware for a system that positions industrial products on the plant territory using a radio-frequency grid, the design of hardware for a positioning system based on computer vision methods, and the aggregation method itself, which combines the two sources to determine the current storage area. Experimental studies in laboratory and production conditions were conducted and are described in the article.
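    The abstract does not give the exact aggregation rule, so the sketch below shows one plausible scheme: a weighted blend of per-zone confidences from the two subsystems. The zone names, scores, and weighting are entirely hypothetical.

```python
def aggregate_zone(vision_scores, rfid_scores, w_vision=0.6):
    """Blend per-zone confidences from the vision and RFID subsystems
    (hypothetical linear weighting) and return the most likely zone."""
    zones = vision_scores.keys() & rfid_scores.keys()
    fused = {z: w_vision * vision_scores[z] + (1 - w_vision) * rfid_scores[z]
             for z in zones}
    return max(fused, key=fused.get)

vision = {"A": 0.2, "B": 0.7, "C": 0.1}   # from computer vision
rfid = {"A": 0.1, "B": 0.6, "C": 0.3}     # from the radio-frequency grid
print(aggregate_zone(vision, rfid))        # B
```

    Combining the two modalities this way lets one sensor compensate when the other is ambiguous, which is the motivation for aggregation in the first place.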

  19. The Effect of an Educational Program for Persons with Macular Degeneration: A Pilot Study

    ERIC Educational Resources Information Center

    Smith, Theresa Marie; Thomas, Kimberly; Dow, Katherine

    2009-01-01

    Macular degeneration is the leading cause of vision loss in the United States for persons aged 60 and older. Compared to individuals without disabilities, individuals with low vision demonstrate a 15% to 30% higher dependence on others to perform activities of daily living. In addition, low vision can adversely affect a person's quality of life.…

  20. Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter

    NASA Technical Reports Server (NTRS)

    Rock, Stephen M.

    1999-01-01

    This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.

  1. The World Water Vision: From Developing a Vision to Action

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, S.; Cosgrove, W.; Rijsberman, F.; Strzepek, K.; Strzepek, K.

    2001-05-01

    The World Water Vision exercise was initiated by the World Water Commission under the auspices of the World Water Council. The goal of the World Water Vision project was to develop a widely shared vision on the actions required to achieve a common set of water-related goals and the necessary commitment to carry out these actions. The Vision should be participatory in nature, including input from both developed and developing regions, with a special focus on the needs of the poor, women, youth, children and the environment. Three overall objectives were to: (i) raise awareness of water issues among both the general population and decision-makers so as to foster the necessary political will and leadership to tackle the problems seriously and systematically; (ii) develop a vision of water management for 2025 that is shared by water sector specialists as well as international, national and regional decision-makers in government, the private sector and civil society; and (iii) provide input to a Framework for Action to be elaborated by the Global Water Partnership, with steps to go from vision to action, including recommendations to funding agencies for investment priorities. This exercise was characterized by the principles of: (i) a participatory approach with extensive consultation; (ii) innovative thinking; (iii) central analysis to assure integration and co-ordination; and (iv) emphasis on communication with groups outside the water sector. The primary activities included developing global water scenarios that fed into regional consultations and sectoral consultations on water for food, water for people (water supply and sanitation), and water and environment. These consultations formulated the regional and sectoral visions that were synthesized to form the World Water Vision. The findings from this exercise were reported and debated at the Second World Water Forum and the Ministerial Conference held in The Hague, The Netherlands during April 2000.
This paper reports on the process of producing a "global water vision" and the primary findings, recommendations, and follow-on activities.

  2. Active imaging system performance model for target acquisition

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.

    2007-04-01

    The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.
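    The abstract does not spell out NVLRG's internal formulation, but the standard atmospheric attenuation term that active-imaging performance models include is two-way Beer-Lambert transmission (the illumination traverses the path to the target and back). The extinction coefficient and range below are illustrative numbers only.

```python
import math

def two_way_transmission(extinction_per_km, range_km):
    """Beer-Lambert transmission for an active imager: the laser
    illumination is attenuated over the outbound and return paths,
    i.e. T = exp(-2 * sigma * R)."""
    return math.exp(-2.0 * extinction_per_km * range_km)

# sigma = 0.2 /km, target at 5 km -> T = exp(-2) ~ 0.135
print(round(two_way_transmission(0.2, 5.0), 3))
```

    Terms like this, together with turbulence effects such as speckle and scintillation, are what distinguish active (LRG) models from passive sensor models.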

  3. Quantitative systems toxicology

    PubMed Central

    Bloomingdale, Peter; Housand, Conrad; Apgar, Joshua F.; Millard, Bjorn L.; Mager, Donald E.; Burke, John M.; Shah, Dhaval K.

    2017-01-01

    The overarching goal of modern drug development is to optimize therapeutic benefits while minimizing adverse effects. However, inadequate efficacy and safety concerns remain the major causes of drug attrition in clinical development. For the past 80 years, toxicity testing has consisted of evaluating the adverse effects of drugs in animals to predict human health risks. The U.S. Environmental Protection Agency recognized the need to develop innovative toxicity testing strategies and asked the National Research Council to develop a long-range vision and strategy for toxicity testing in the 21st century. The vision aims to reduce the use of animals and drug development costs through the integration of computational modeling and in vitro experimental methods that evaluate the perturbation of toxicity-related pathways. Towards this vision, collaborative quantitative systems pharmacology and toxicology modeling endeavors (QSP/QST) have been initiated amongst numerous organizations worldwide. In this article, we discuss how quantitative structure-activity relationship (QSAR), network-based, and pharmacokinetic/pharmacodynamic modeling approaches can be integrated into the framework of QST models. Additionally, we review the application of QST models to predict cardiotoxicity and hepatotoxicity of drugs throughout their development. Cell and organ specific QST models are likely to become an essential component of modern toxicity testing, and provide a solid foundation toward determining individualized therapeutic windows to improve patient safety. PMID:29308440

  4. Advanced Development Projects for Constellation From The Next Generation Launch Technology Program Elements

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Saiyed, Naseem H.; Swith, Marion Shayne

    2005-01-01

    When United States President George W. Bush announced the Vision for Space Exploration in January 2004, twelve propulsion and launch system projects were being pursued in the Next Generation Launch Technology (NGLT) Program. These projects underwent a review for near-term relevance to the Vision. Subsequently, five projects were chosen as advanced development projects by NASA's Exploration Systems Mission Directorate (ESMD). These five projects were Auxiliary Propulsion, Integrated Powerhead Demonstrator, Propulsion Technology and Integration, Vehicle Subsystems, and Constellation University Institutes. Recently, an NGLT effort in Vehicle Structures was identified as a gap technology that was executed via the Advanced Development Projects Office within ESMD. For all of these advanced development projects, there is an emphasis on producing specific, near-term technical deliverables related to space transportation that constitute a subset of the promised NGLT capabilities. The purpose of this paper is to provide a brief description of the relevancy review process and provide a status of the aforementioned projects. For each project, the background, objectives, significant technical accomplishments, and future plans will be discussed. In contrast to many of the current ESMD activities, these areas are providing hardware and testing to further develop relevant technologies in support of the Vision for Space Exploration.

  5. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (key components of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
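    Local frequency coding of the kind this project invokes is commonly modeled with Gabor filters: a sinusoidal carrier under a Gaussian envelope, tuned to one orientation and spatial frequency. The sketch below (kernel size, wavelength, and sigma are arbitrary choices, not the project's parameters) shows the resulting orientation selectivity.

```python
import numpy as np

def gabor(size, wavelength, theta, sigma):
    """2-D Gabor kernel: cosine carrier at the given wavelength and
    orientation theta, under an isotropic Gaussian envelope."""
    half = size // 2
    gy, gx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = gx * np.cos(theta) + gy * np.sin(theta)   # carrier axis
    env = np.exp(-(gx**2 + gy**2) / (2.0 * sigma**2))
    return env * np.cos(2.0 * np.pi * xr / wavelength)

# vertical grating (intensity varies along x, period 8 px)
yy, xx = np.mgrid[0:64, 0:64]
grating = np.cos(2.0 * np.pi * xx / 8.0)
patch = grating[24:39, 24:39]                      # 15x15 patch
k_match = gabor(15, 8.0, 0.0, 4.0)                 # tuned to variation along x
k_ortho = gabor(15, 8.0, np.pi / 2, 4.0)           # orthogonal orientation
r_match = abs(np.sum(patch * k_match))
r_ortho = abs(np.sum(patch * k_ortho))
print(r_match > 10 * r_ortho)                      # True
```

    A bank of such kernels across orientations and scales gives the local frequency representation that the image models and task algorithms above build on.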

  6. Group Active Engagement Exercises: Pursuing the Recommendations of "Vision and Change" in an Introductory Undergraduate Science Course

    ERIC Educational Resources Information Center

    Jardine, Hannah E.; Levin, Daniel M.; Quimby, B. Booth; Cooke, Todd J.

    2017-01-01

    "Vision and Change in Undergraduate Education: A Call to Action," published by the American Association for the Advancement of Science in 2011, suggested cultivating biological literacy and practicing more student-centered learning in undergraduate life sciences education. We report here on the use of Group Active Engagement (GAE)…

  7. Poor Vision, Functioning, and Depressive Symptoms: A Test of the Activity Restriction Model

    ERIC Educational Resources Information Center

    Bookwala, Jamila; Lawson, Brendan

    2011-01-01

    Purpose: This study tested the applicability of the activity restriction model of depressed affect to the context of poor vision in late life. This model hypothesizes that late-life stressors contribute to poorer mental health not only directly but also indirectly by restricting routine everyday functioning. Method: We used data from a national…

  8. A "Vision and Change" Reform of Introductory Biology Shifts Faculty Perceptions and Use of Active Learning

    ERIC Educational Resources Information Center

    Auerbach, Anna Jo; Schussler, Elisabeth

    2017-01-01

    Increasing faculty use of active-learning (AL) pedagogies in college classrooms is a persistent challenge in biology education. A large research-intensive university implemented changes to its biology majors' two-course introductory sequence as outlined by the "Vision and Change in Undergraduate Biology Education" final report. One goal…

  9. A neural correlate of working memory in the monkey primary visual cortex.

    PubMed

    Supèr, H; Spekreijse, H; Lamme, V A

    2001-07-06

    The brain frequently needs to store information for short periods. In vision, this means that the perceptual correlate of a stimulus has to be maintained temporally once the stimulus has been removed from the visual scene. However, it is not known how the visual system transfers sensory information into a memory component. Here, we identify a neural correlate of working memory in the monkey primary visual cortex (V1). We propose that this component may link sensory activity with memory activity.

  10. Vision - Vision 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Brian E.; Oppel III, Fred J.

    2017-01-25

    This package contains modules that model a visual sensor in Umbra. It is typically used to represent eyesight of characters in Umbra. This library also includes the sensor property, seeable, and an Active Denial sensor.

  11. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-based piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions were neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, unto itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  12. Appendix B: Rapid development approaches for system engineering and design

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Conventional processes often produce systems which are obsolete before they are fielded. This paper explores some of the reasons for this, and provides a vision of how we can do better. This vision is based on our explorations in improved processes and system/software engineering tools.

  13. Vision Voice: A Multimedia Exploration of Diabetes and Vision Loss in East Harlem.

    PubMed

    Ives, Brett; Nedelman, Michael; Redwood, Charysse; Ramos, Michelle A; Hughson-Andrade, Jessica; Hernandez, Evelyn; Jordan, Dioris; Horowitz, Carol R

    2015-01-01

    East Harlem, New York, is a community actively struggling with diabetes and its complications, including vision-related conditions that can affect many aspects of daily life. Vision Voice was a qualitative community-based participatory research (CBPR) study that intended to better understand the needs and experiences of people living with diabetes, other comorbid chronic illnesses, and vision loss in East Harlem. Using photovoice methodology, four participants took photographs, convened to review their photographs, and determined overarching themes for the group's collective body of work. Identified themes included effect of decreased vision function on personal independence/mobility and self-management of chronic conditions and the importance of informing community members and health care providers about these issues. The team next created a documentary film that further develops the narratives of the photovoice participants. The Vision Voice photovoice project was an effective tool to assess community needs, educate and raise awareness.

  14. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  15. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    PubMed Central

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
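    The fusion step described above can be illustrated with a toy Extended Kalman Filter measurement update. This is a hedged sketch, not the authors' implementation: the state (chief position only), the baseline measurement model, and all noise values are hypothetical assumptions chosen for illustration.

```python
import numpy as np

def ekf_update(x, P, z, deputy_pos, R):
    """EKF measurement update: refine the chief's position x = [px, py, pz]
    with a DGPS/vision measurement z of the chief-to-deputy baseline
    (deputy_pos - chief_pos). Linear model, so the EKF reduces to a KF here."""
    H = -np.eye(3)                      # Jacobian d(baseline)/d(chief position)
    z_pred = deputy_pos - x             # predicted baseline
    y = z - z_pred                      # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Example: a GPS-only prior with 5 m standard uncertainty, refined by a
# 0.5 m accurate DGPS/vision baseline measurement (all values assumed).
x0 = np.array([10.0, 20.0, 30.0])
P0 = np.eye(3) * 25.0
deputy = np.array([15.0, 20.0, 30.0])
z = np.array([4.2, 0.1, -0.2])          # measured baseline to the deputy
x1, P1 = ekf_update(x0, P0, z, deputy, np.eye(3) * 0.25)
```

    After the update, the position covariance shrinks well below the GPS-only prior, which is the mechanism the cooperative DGPS/Vision "virtual sensor" exploits.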

  16. Survey of computer vision-based natural disaster warning systems

    NASA Astrophysics Data System (ADS)

    Ko, ByoungChul; Kwak, Sooyeong

    2012-07-01

    With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, such as wildfire smoke and flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.

  17. Vision-guided micromanipulation system for biomedical application

    NASA Astrophysics Data System (ADS)

    Shim, Jae-Hong; Cho, Sung-Yong; Cha, Dong-Hyuk

    2004-10-01

    In recent years, various research efforts on biomedical applications of robots have been carried out. In particular, robotic manipulation of biological cells has been studied by many researchers. Most biological cells are spherical in shape. Commercial biological manipulation systems have relied on two-dimensional images from optical microscopes only. Moreover, manipulation of biological cells depends mainly on the subjective judgment of an operator. For these reasons, problems such as slippage, rupture of the cell membrane, and damage to the pipette tip frequently occur. To overcome these problems, we have proposed a vision-guided biological cell manipulation system. The newly proposed manipulation system makes use of vision and graphic techniques. Through the proposed procedures, an operator can inject a biological cell systematically and objectively. The proposed manipulation system can also measure the contact force that occurs during injection of a biological cell, and the measured force can be transmitted to the operator through the proposed haptic device. Consequently, the proposed manipulation system can safely handle biological cells without damage. This paper presents our vision-guided manipulation techniques and the concept of contact force sensing. Through a series of experiments, the proposed vision-guided manipulation system shows its potential for precision manipulation of biological samples such as DNA.

  18. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    PubMed

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy was accomplished by merging fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Evaluation of the control system accuracy was performed using robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system resulted in high fracture reduction reliability, with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors on the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, potentially improving their quality.
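    The two-phase strategy above (a fast open-loop move, then closed-loop corrections driven by optical-tracking feedback) can be sketched as follows. The function names, plant model, gain, and tolerance are illustrative assumptions, not the authors' controller.

```python
def position_fragment(target, measure, command, tol=0.01, max_iters=50):
    """Phase 1: command the nominal target open-loop.
    Phase 2: iterate proportional corrections using visual feedback
    until the residual error is within tolerance."""
    command(target)                 # open-loop move (fast, but biased)
    for _ in range(max_iters):
        pos = measure()             # feedback from the optical tracker
        error = target - pos
        if abs(error) < tol:        # within the reduction tolerance
            return pos
        command(pos + 0.8 * error)  # proportional correction step
    return measure()

class Plant:
    """Toy 1-D robot whose open-loop moves have a 5% scale error,
    standing in for calibration-limited positioning."""
    def __init__(self):
        self.true_pos = 0.0
    def command(self, setpoint):
        self.true_pos += 0.95 * (setpoint - self.true_pos)
    def measure(self):
        return self.true_pos        # ideal tracker (no noise)

robot = Plant()
final = position_fragment(10.0, robot.measure, robot.command)
```

    The open-loop move alone stops 0.5 units short of the 10.0 target; the vision-based correction loop closes the gap to within the tolerance, mirroring how the paper removes open-loop positioning errors.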

  19. Obstacles encountered in the development of the low vision enhancement system.

    PubMed

    Massof, R W; Rickman, D L

    1992-01-01

    The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.

  20. A PIC microcontroller-based system for real-life interfacing of external peripherals with a mobile robot

    NASA Astrophysics Data System (ADS)

    Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan

    2010-02-01

    The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real-life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system, where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by operating it under several user-specified commands issued from the PC end.

  1. Night vision: requirements and possible roadmap for FIR and NIR systems

    NASA Astrophysics Data System (ADS)

    Källhammer, Jan-Erik

    2006-04-01

    A night vision system must increase visibility in situations where only low beam headlights can be used today. As pedestrians and animals have the highest risk increase in night time traffic due to darkness, the ability to detect those objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared (FIR) systems have been shown to be superior to near infrared (NIR) systems in terms of pedestrian detection distance. Near infrared images were rated to have significantly higher visual clutter compared with far infrared images. Visual clutter has been shown to correlate with reduction in detection distance of pedestrians. Far infrared images are perceived as more unusual and therefore more difficult to interpret, although this impression is likely related to their lower visual clutter. However, the main issue in comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low beam conditions, especially regarding pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main issue will be whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection functionality. The first night vision introductions did not generate the sales volumes initially expected. A renewed interest in night vision systems is, however, to be expected after the release of night vision systems by BMW, Mercedes, and Honda, the latter with automatic pedestrian detection.

  2. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it

    2016-06-28

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior if the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.
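    One way a camera can supply a reference acceleration at the installation point is by double differentiation of the tracked position samples. The sketch below is an assumption for illustration, not the paper's algorithm; the function name and test signal are hypothetical.

```python
import numpy as np

def acceleration_from_positions(x, dt):
    """Central second difference: a[i] ≈ (x[i+1] - 2 x[i] + x[i-1]) / dt²,
    returning accelerations at the interior sample points."""
    x = np.asarray(x, dtype=float)
    return (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt**2

# Example: 2 Hz sinusoidal motion (inside the bench's 0-5 Hz range),
# 5 mm amplitude, sampled at 100 frames/s. For x(t) = A sin(wt), the
# recovered acceleration should approximate -w² x(t).
dt = 0.01
t = np.arange(0, 1, dt)
x = 0.005 * np.sin(2 * np.pi * 2 * t)
a = acceleration_from_positions(x, dt)
```

    In practice the camera's position noise is amplified by differentiation, which is why the paper stresses evaluating the vision system's measurement uncertainty before using it as a reference.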

  3. Two-dimensional (2D) displacement measurement of moving objects using a new MEMS binocular vision system

    NASA Astrophysics Data System (ADS)

    Di, Si; Lin, Hui; Du, Ruxu

    2011-05-01

    Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
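    The core operation of such a displacement measurement, finding the pixel shift that best aligns two consecutive frames, can be sketched with exhaustive cross-correlation. This is a generic illustration under assumed names and parameters, not the authors' MEMS-specific algorithm.

```python
import numpy as np

def pixel_shift(prev, curr, max_shift=5):
    """Return the (dy, dx) integer shift that best aligns curr with prev,
    found by exhaustive search over a small correlation window."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # undo a candidate shift and score the overlap with prev
            shifted = np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
            score = np.sum(prev * shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Example: a bright blob moves by (2, 3) pixels between frames.
frame0 = np.zeros((32, 32))
frame0[10:14, 10:14] = 1.0
frame1 = np.roll(np.roll(frame0, 2, axis=0), 3, axis=1)
shift = pixel_shift(frame0, frame1)   # → (2, 3)
```

    Converting such a pixel shift to physical displacement normally requires camera calibration; the appeal of the system described above is that its fixed microlens geometry lets it skip that step for in-plane motion.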

  4. Functional Outcomes of the Low Vision Depression Prevention Trial in Age-Related Macular Degeneration.

    PubMed

    Deemer, Ashley D; Massof, Robert W; Rovner, Barry W; Casten, Robin J; Piersol, Catherine V

    2017-03-01

    To compare the efficacy of behavioral activation (BA) plus low vision rehabilitation with an occupational therapist (OT-LVR) with supportive therapy (ST) on visual function in patients with age-related macular degeneration (AMD). Single-masked, attention-controlled, randomized clinical trial with AMD patients with subsyndromal depressive symptoms (n = 188). All subjects had two outpatient low vision rehabilitation optometry visits, then were randomized to in-home BA + OT-LVR or ST. Behavioral activation is a structured behavioral treatment aiming to increase adaptive behaviors and achieve valued goals. Supportive therapy is a nondirective, psychological treatment that provides emotional support and controls for attention. Functional vision was assessed with the activity inventory (AI) in which participants rate the difficulty level of goals and corresponding tasks. Participants were assessed at baseline and 4 months. Improvements in functional vision measures were seen in both the BA + OT-LVR and ST groups at the goal level (d = 0.71; d = 0.56 respectively). At the task level, BA + OT-LVR patients showed more improvement in reading, inside-the-home tasks and outside-the-home tasks, when compared to ST patients. The greatest effects were seen in the BA + OT-LVR group in subjects with a visual acuity ≥20/70 (d = 0.360 reading; d = 0.500 inside the home; d = 0.468 outside the home). Based on the trends of the AI data, we suggest that BA + OT-LVR services, provided by an OT in the patient's home following conventional low vision optometry services, are more effective than conventional optometric low vision services alone for those with mild visual impairment. (ClinicalTrials.gov number, NCT00769015.).

  5. The Ontology of Vision. The Invisible, Consciousness of Living Matter

    PubMed Central

    Fiorio, Giorgia

    2016-01-01

    If I close my eyes, the absence of light activates the peripheral cells devoted to the perception of darkness. The awareness of “seeing oneself seeing” is in its essence a thought, one that is internal to the vision and previous to any object of sight. To this amphibious faculty, the “diaphanous color of darkness,” Aristotle assigns the principle of knowledge. “Vision is a whole perceptual system, not a channel of sense.” Functions of vision are interwoven with the texture of human interaction within a terrestrial environment that is in turn contained into the cosmic order. A transitive host within the resonance of an inner-outer environment, the human being is the contact-term between two orders of scale, both bigger and smaller than the individual unity. In the perceptual integrative system of human vision, the convergence-divergence of the corporeal presence and the diffraction of its own appearance is the margin. The sensation of being no longer coincides with the breath of life; it does not seem “real” without the trace of some visible evidence and its simultaneous “sharing”. Without a shadow, without an imprint, the numeric copia of the physical presence inhabits the transient memory of our electronic prostheses. A rudimentary “visuality” replaces tangible experience, dissipating its meaning and the awareness of being alive. Transversal to the civilizations of the ancient world, through different orders of function and status, the anthropomorphic “figuration” of archaic sculpture addresses the margin between Being and Non-Being. Statuary human archetypes are not meant to be visible, but to exist as vehicles of transcendence to outlive the definition of human space-time. The awareness of individual finiteness seals the compulsion to “give body” to an invisible apparition shaping the figuration of an ontogenetic expression of human consciousness. 
Subject and object, the term “humanum” fathoms the relationship between matter and its living dimension, “this de facto vision and the ‘there is’ which it contains.” The project reconsiders the dialectic between the terms vision–presence in the contemporary perception of archaic human statuary according to the transcendent meaning of its immaterial legacy. PMID:27014106

  6. Automated Grading of Rough Hardwood Lumber

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Any automatic hardwood grading system must have two components. The first of these is a computer vision system for locating and identifying defects on rough lumber. The second is a system for automatically grading boards based on the output of the computer vision system. This paper presents research results aimed at developing the first of these components. The...

  7. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Treesearch

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  8. Implementing the President's Vision: JPL and NASA's Exploration Systems Mission Directorate

    NASA Technical Reports Server (NTRS)

    Sander, Michael J.

    2006-01-01

    As part of the NASA team, the Jet Propulsion Laboratory is involved in the Exploration Systems Mission Directorate (ESMD) work to implement the President's Vision for Space Exploration. In this slide presentation, the roles assigned to the various NASA centers to implement the vision are reviewed. The plan for JPL is to use the Constellation program to advance the combination of science and Constellation program objectives. JPL's current participation is to contribute systems engineering support; Command, Control, Computing and Information (C3I) architecture; Crew Exploration Vehicle (CEV) Thermal Protection System (TPS) project support and CEV landing assist support; ground support systems support at JSC and KSC; the Exploration Communication and Navigation System (ECANS); and flight prototypes for cabin atmosphere instruments.

  9. Digital tripwire: a small automated human detection system

    NASA Astrophysics Data System (ADS)

    Fischer, Amber D.; Redd, Emmett; Younger, A. Steven

    2009-05-01

    A low cost, lightweight, easily deployable imaging sensor that can dependably discriminate threats from other activities within its field of view and, only then, alert the distant duty officer by transmitting a visual confirmation of the threat would provide a valuable asset to modern defense. At present, current solutions suffer from a multitude of deficiencies - size, cost, power endurance, but most notably, an inability to assess an image and conclude that it contains a threat. The human attention span cannot maintain critical surveillance over banks of displays constantly conveying such images from the field. DigitalTripwire is a small, self-contained, automated human-detection system capable of running for 1-5 days on two AA batteries. To achieve such long endurance, the DigitalTripwire system utilizes an FPGA designed with sleep functionality. The system uses robust vision algorithms, such as an innovative, partially unsupervised background-modeling algorithm, which employ several data-reduction strategies to operate in real time and achieve high detection rates. When it detects human activity, either mounted or dismounted, it sends an alert including images to notify the command center. In this paper, we describe the hardware and software design of the DigitalTripwire system. In addition, we provide detection and false alarm rates across several challenging data sets, demonstrating the performance of the vision algorithms in autonomously analyzing the video stream and classifying moving objects into four primary categories: dismounted human, vehicle, non-human, or unknown.
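    The family of technique the abstract names, background modeling with foreground thresholding, can be sketched with a running-average model. The class, learning rate, and threshold below are illustrative assumptions, not the DigitalTripwire implementation.

```python
import numpy as np

class BackgroundModel:
    """Running-average background model: pixels that deviate from the
    learned background by more than a threshold are flagged foreground."""
    def __init__(self, first_frame, alpha=0.05, thresh=30.0):
        self.bg = first_frame.astype(float)
        self.alpha = alpha      # learning rate of the running average
        self.thresh = thresh    # per-pixel foreground threshold

    def update(self, frame):
        """Return a boolean foreground mask and adapt the background."""
        frame = frame.astype(float)
        mask = np.abs(frame - self.bg) > self.thresh
        # adapt only where the scene still looks like background
        self.bg[~mask] += self.alpha * (frame - self.bg)[~mask]
        return mask

# Example: a static textured scene, then a bright 10x10 "intruder" appears.
rng = np.random.default_rng(0)
scene = rng.uniform(90, 110, (40, 40))
model = BackgroundModel(scene)
intruder = scene.copy()
intruder[5:15, 5:15] += 100
mask = model.update(intruder)
```

    A deployed system would then pass connected foreground blobs to a classifier (human / vehicle / non-human / unknown); the background model is only the data-reduction front end.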

  10. Integrating Child Health Information Systems

    PubMed Central

    Hinman, Alan R.; Eichwald, John; Linzer, Deborah; Saarlas, Kristin N.

    2005-01-01

    The Health Resources and Services Administration and All Kids Count (a national technical assistance center fostering development of integrated child health information systems) have been working together to foster development of integrated child health information systems. Activities have included: identification of key elements for successful integration of systems; development of principles and core functions for the systems; a survey of state and local integration efforts; and a conference to develop a common vision for child health information systems to meet medical care and public health needs. We provide 1 state (Utah) as an example that is well on the way to development of integrated child health information systems. PMID:16195524

  11. Advances in real-time millimeter-wave imaging radiometers for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.; Galliano, Joseph A., Jr.

    1995-06-01

    Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg-cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation as part of an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an `ultra-compact' 220 GHz imaging system will be discussed.

  12. Electronic bracelet and vision-enabled waist-belt for mobility of visually impaired people.

    PubMed

    Bhatlawande, Shripad; Sunkari, Amar; Mahadevappa, Manjunatha; Mukhopadhyay, Jayanta; Biswas, Mukul; Das, Debabrata; Gupta, Somedeb

    2014-01-01

    A wearable assistive system is proposed to improve mobility of visually impaired people (subjects). This system has been implemented in the shape of a bracelet and waist-belt in order to increase its wearable convenience and cosmetic acceptability. A camera and an ultrasonic sensor are attached to a customized waist-belt and bracelet, respectively. The proposed modular system acts as a complementary aid along with a white cane. Its vision-enabled waist-belt module detects the path and the distribution of obstacles on the path. This module conveys the required information to a subject via a mono earphone by activating relevant spoken messages. The electronic bracelet module assists the subject in verifying this information and in perceiving the distance of obstacles along with their locations. The proposed complementary system provides an improved understanding of the surrounding environment with less cognitive and perceptual effort compared to a white cane alone. This system was subjected to clinical evaluations with 15 totally blind subjects. Results of usability experiments demonstrated the effectiveness of the system as a mobility aid. Among the participating subjects, 93.33% expressed satisfaction with the information content of the system, 86.66% appreciated its operational convenience, and 80% appreciated its comfort.

  13. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. This system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. A new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.
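    The evaluation metric quoted above, trajectory-following RMS error, can be sketched as the RMS of the pointwise distance between the planned cutting path and the path actually observed by the camera. The function and the example values are assumptions for illustration.

```python
import numpy as np

def trajectory_rms_error(planned, observed):
    """RMS Euclidean distance between matched trajectory samples
    (planned and observed must be Nx2 arrays in the same units)."""
    d = np.linalg.norm(np.asarray(planned) - np.asarray(observed), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Example in micrometres: a straight 1 mm cut executed with a constant
# 30 µm lateral offset yields a 30 µm RMS error.
t = np.linspace(0.0, 1000.0, 101)
planned = np.column_stack([t, np.zeros_like(t)])
observed = np.column_stack([t, np.full_like(t, 30.0)])
err = trajectory_rms_error(planned, observed)   # → 30.0
```

    An 80% reduction claim then amounts to the closed-loop RMS being about one fifth of the open-loop RMS computed the same way.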

  14. Visual Acuity’s Association with Levels of Leisure-Time Physical Activity Among Community-Dwelling Older Adults

    PubMed Central

    Swanson, Mark W; Bodner, Eric; Sawyer, Patricia; Allman, Richard

    2013-01-01

    Little is known about the effect of reduced vision on physical activity in older adults. This study evaluates the association of visual acuity level, self-reported vision, and ocular disease conditions with leisure-time physical activity and calculated caloric expenditure. A cross-sectional study of 911 subjects 65 years and older from the University of Alabama at Birmingham Study of Aging (SOA) cohort was conducted evaluating the association of vision-related variables with weekly kilocalorie expenditure calculated from the 17-item Leisure Time Physical Activity Questionnaire. Ordinal logistic regression was used to evaluate possible associations controlling for potential confounders. In multivariate analyses, each lower step in visual acuity category below 20/50 was significantly associated with reduced odds of having a higher level of physical activity (OR 0.81, 95% CI 0.67, 0.97). Reduced visual acuity appears to be independently associated with lower levels of physical activity among community-dwelling adults. PMID:21945888

  15. Neural correlates of virtual route recognition in congenital blindness.

    PubMed

    Kupers, Ron; Chebat, Daniel R; Madsen, Kristoffer H; Paulson, Olaf B; Ptito, Maurice

    2010-07-13

    Despite the importance of vision for spatial navigation, blind subjects retain the ability to represent spatial information and to move independently in space to localize and reach targets. However, the neural correlates of navigation in subjects lacking vision remain elusive. We therefore used functional MRI (fMRI) to explore the cortical network underlying successful navigation in blind subjects. We first trained congenitally blind and blindfolded sighted control subjects to perform a virtual navigation task with the tongue display unit (TDU), a tactile-to-vision sensory substitution device that translates a visual image into electrotactile stimulation applied to the tongue. After training, participants repeated the navigation task during fMRI. Although both groups successfully learned to use the TDU in the virtual navigation task, the brain activation patterns showed substantial differences. Blind but not blindfolded sighted control subjects activated the parahippocampus and visual cortex during navigation, areas that are recruited during topographical learning and spatial representation in sighted subjects. When the navigation task was performed under full vision in a second group of sighted participants, the activation pattern strongly resembled the one obtained in the blind when using the TDU. This suggests that in the absence of vision, cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation tasks in sighted subjects.

  16. On the functional relevance of frontal cortex for passive and voluntarily controlled bistable vision.

    PubMed

    de Graaf, Tom A; de Jong, Maartje C; Goebel, Rainer; van Ee, Raymond; Sack, Alexander T

    2011-10-01

    In bistable vision, one constant ambiguous stimulus leads to 2 alternating conscious percepts. This perceptual switching occurs spontaneously but can also be influenced through voluntary control. Neuroimaging studies have reported that frontal regions are activated during spontaneous perceptual switches, leading some researchers to suggest that frontal regions causally induce perceptual switches. But the opposite also seems possible: frontal activations may themselves be caused by spontaneous switches. Classically implicated in attentional processes, these same regions are also candidates for the origins of voluntary control over bistable vision. Here too, it remains unknown whether frontal cortex is actually functionally relevant. It is even possible that spontaneous perceptual switches and voluntarily induced switches are mediated by the same top-down mechanisms. To directly address these issues, we here induced "virtual lesions," with transcranial magnetic stimulation, in frontal, parietal, and 2 lower level visual cortices using an established ambiguous structure-from-motion stimulus. We found that dorsolateral prefrontal cortex was causally relevant for voluntary control over perceptual switches. In contrast, we failed to find any evidence for an active role of frontal cortex in passive bistable vision. Thus, it seems the same pathway used for willed top-down modulation of bistable vision is not used during passive bistable viewing.

  17. Night vision imaging systems design, integration, and verification in military fighter aircraft

    NASA Astrophysics Data System (ADS)

    Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David

    2012-04-01

    This paper describes the developmental and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products and Cranfield University, in order to confer the Night Vision Imaging Systems (NVIS) capability on the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The effort comprised various Design, Development, Test and Evaluation (DDT&E) activities, including Night Vision Goggles (NVG) integration, cockpit instruments and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation activities of the internal and external lights. In particular, an iterative process was established, allowing on-site rapid correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the Test Crews involved in the activities, allowing for a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications implemented. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks in NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. 
The initial compatibility problems encountered were progressively mitigated by incorporating modifications both in the front and rear cockpits at the various stages of the test campaign. This process allowed a considerable enhancement of the TORNADO NVIS configuration, giving a good medium-high level NVG operational capability to the aircraft. Further developments also include the design, integration and test of internal/external lighting for the Italian TORNADO "Mid Life Update" (MLU) and other programs, such as the AM-X aircraft internal/external lights modification/testing and the activities addressing low-altitude NVG operations with fast jets (e.g., TORNADO, AM-X, MB-339CD), a major issue being the safe ejection of aircrew with NVG and NVG modified helmets. Two options have been identified for solving this problem: namely the modification of the current Gentex HGU-55 helmets and the design of a new helmet incorporating a reliable NVG connection/disconnection device (i.e., a mechanical system fully integrated in the helmet frame), with embedded automatic disconnection capability in case of ejection.

  18. Disturbed temporal dynamics of brain synchronization in vision loss.

    PubMed

    Bola, Michał; Gall, Carolin; Sabel, Bernhard A

    2015-06-01

    Damage along the visual pathway prevents bottom-up visual input from reaching further processing stages and consequently leads to loss of vision. But perception is not a simple bottom-up process - rather it emerges from activity of widespread cortical networks which coordinate visual processing in space and time. Here we set out to study how vision loss affects activity of brain visual networks and how networks' activity is related to perception. Specifically, we focused on studying temporal patterns of brain activity. To this end, resting-state eyes-closed EEG was recorded from partially blind patients suffering from chronic retina and/or optic-nerve damage (n = 19) and healthy controls (n = 13). Amplitude (power) of oscillatory activity and phase locking value (PLV) were used as measures of local and distant synchronization, respectively. Synchronization time series were created for the low- (7-9 Hz) and high-alpha band (11-13 Hz) and analyzed with three measures of temporal patterns: (i) length of synchronized-/desynchronized-periods, (ii) Higuchi Fractal Dimension (HFD), and (iii) Detrended Fluctuation Analysis (DFA). We revealed that patients exhibit less complex, more random and noise-like temporal dynamics of high-alpha band activity. More random temporal patterns were associated with worse performance in static (r = -.54, p = .017) and kinetic perimetry (r = .47, p = .041). We conclude that disturbed temporal patterns of neural synchronization in vision loss patients indicate disrupted communication within brain visual networks caused by prolonged deafferentation. We propose that because the state of brain networks is essential for normal perception, impaired brain synchronization in patients with vision loss might aggravate the functional consequences of reduced visual input. Copyright © 2015 Elsevier Ltd. All rights reserved.
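
    The temporal-pattern measures named above are standard and straightforward to sketch. Below is a minimal NumPy implementation of Higuchi's fractal dimension, one of the three measures the study applies to synchronization time series; the function name, the `k_max` choice, and the log-log fitting details are illustrative assumptions, not the study's actual code.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate Higuchi's fractal dimension of a 1-D time series.

    For each scale k, average normalized curve lengths L(k) over the k
    shifted sub-series; the FD is the slope of log L(k) vs log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # sub-series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / (len(idx) - 1) / k   # length normalization
            lengths.append(dist * norm / k)
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_k, log_l, 1)    # slope of the log-log fit
    return float(slope)
```

A straight line yields FD near 1, while white noise yields FD near 2, which matches the paper's reading of higher values as "more random, noise-like" dynamics.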

  19. Psychological distress and visual functioning in relation to vision-related disability in older individuals with cataracts.

    PubMed

    Walker, J G; Anstey, K J; Lord, S R

    2006-05-01

To determine whether demographic, health status and psychological functioning measures, in addition to impaired visual acuity, are related to vision-related disability. Participants were 105 individuals (mean age=73.7 years) with cataracts requiring surgery and corrected visual acuity in the better eye of 6/24 to 6/36, recruited from waiting lists at three public out-patient ophthalmology clinics. Visual disability was measured with the Visual Functioning-14 survey. Visual acuity was assessed using better and worse eye logMAR scores and the Melbourne Edge Test (MET) for edge contrast sensitivity. Data relating to demographic information, depression, anxiety and stress, health care and medication use and numbers of co-morbid conditions were obtained. Principal component analysis revealed four meaningful factors that accounted for 75% of the variance in visual disability: recreational activities, reading and fine work, activities of daily living and driving behaviour. Multiple regression analyses determined that visual acuity variables were the only significant predictors of overall vision-related functioning and difficulties with reading and fine work. For the remaining visual disability domains, non-visual factors were also significant predictors. Difficulties with recreational activities were predicted by stress, as well as worse eye visual acuity, and difficulties with activities of daily living were associated with self-reported health status, age and depression as well as MET contrast scores. Driving behaviour was associated with sex (with fewer women driving), depression, anxiety and stress scores, and MET contrast scores. Vision-related disability is common in older individuals with cataracts. In addition to visual acuity, demographic, psychological and health status factors influence the severity of vision-related disability, affecting recreational activities, activities of daily living and driving.
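
    The analysis pipeline described (principal component extraction followed by multiple regression) can be sketched generically. The code below is a toy illustration of that analysis style on made-up arrays, not the study's data, loadings, or models.

```python
import numpy as np

def pca_scores(X, n_components):
    """Project standardized item responses onto their first principal
    components (a generic sketch, not the study's actual factors)."""
    Z = (X - X.mean(0)) / X.std(0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_components].T        # component scores per participant

def ols_fit(X, y):
    """Ordinary least squares with an intercept column, as used in
    multiple regression of disability scores on predictors."""
    A = np.c_[np.ones(len(X)), X]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta                           # [intercept, slopes...]
```

In the study's terms, `X` would hold predictors such as acuity, MET contrast, and stress scores, and `y` a disability-domain score.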

  20. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    NASA Astrophysics Data System (ADS)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
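
    The model's core mechanism (position-based attraction restricted to a vision cone, with no velocity alignment) can be sketched in a few lines. The update rule below is a simplified illustration with assumed parameter values, not the paper's exact equations of motion.

```python
import numpy as np

def step(pos, theta, v=0.05, r=0.5, half_angle=np.pi / 4, turn=0.1):
    """One update of a vision-cone flocking sketch in 2D.

    Each particle turns toward the mean position of neighbors that lie
    within distance r AND inside its vision cone; there is no velocity
    alignment, and the interaction is non-reciprocal (particle j may
    see i without i seeing j), which breaks Newton's third law.
    """
    n = len(pos)
    new_theta = theta.copy()
    for i in range(n):
        rel = pos - pos[i]                      # vectors to all others
        dist = np.linalg.norm(rel, axis=1)
        bearing = np.arctan2(rel[:, 1], rel[:, 0])
        # angular offset from particle i's heading, wrapped to [-pi, pi]
        off = (bearing - theta[i] + np.pi) % (2 * np.pi) - np.pi
        mask = (dist > 0) & (dist < r) & (np.abs(off) < half_angle)
        if mask.any():
            target = np.arctan2(rel[mask][:, 1].mean(), rel[mask][:, 0].mean())
            err = (target - theta[i] + np.pi) % (2 * np.pi) - np.pi
            new_theta[i] += turn * err          # turn toward visible neighbors
    new_pos = pos + v * np.c_[np.cos(new_theta), np.sin(new_theta)]
    return new_pos, new_theta
```

Two particles heading toward each other, each inside the other's cone, attract and close the gap; a particle behind another (outside its cone) follows without being followed, the mechanism behind the "incidental leader" files.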

  1. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates.

    PubMed

    Barberis, Lucas; Peruani, Fernando

    2016-12-09

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit-due to the VC that breaks Newton's third law-various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving-locally polar-files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.

  2. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)
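
    The conversion from camera image to a 100-point vibrator display amounts to downsampling and thresholding. The sketch below shows that reduction for a grayscale frame; the 10x10 grid matches the 100-point display, but the block-averaging and threshold rule are illustrative assumptions, not the TVSS's actual electronics.

```python
import numpy as np

def frame_to_tactile(frame, grid=10, threshold=0.5):
    """Reduce a grayscale camera frame to a grid x grid on/off
    vibrator pattern, in the spirit of the 100-point TVSS display."""
    frame = np.asarray(frame, dtype=float)
    h, w = frame.shape
    # crop so the frame divides evenly into grid x grid blocks
    h, w = (h // grid) * grid, (w // grid) * grid
    blocks = frame[:h, :w].reshape(grid, h // grid, grid, w // grid)
    cell_means = blocks.mean(axis=(1, 3))   # average brightness per cell
    return (cell_means > threshold).astype(int)   # 1 = vibrator on
```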

  3. Synthetic Vision Workshop 2

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J. (Compiler)

    1999-01-01

    The second NASA sponsored Workshop on Synthetic/Enhanced Vision (S/EV) Display Systems was conducted January 27-29, 1998 at the NASA Langley Research Center. The purpose of this workshop was to provide a forum for interested parties to discuss topics in the Synthetic Vision (SV) element of the NASA Aviation Safety Program and to encourage those interested parties to participate in the development, prototyping, and implementation of S/EV systems that enhance aviation safety. The SV element addresses the potential safety benefits of synthetic/enhanced vision display systems for low-end general aviation aircraft, high-end general aviation aircraft (business jets), and commercial transports. Attendance at this workshop consisted of about 112 persons including representatives from industry, the FAA, and other government organizations (NOAA, NIMA, etc.). The workshop provided opportunities for interested individuals to give presentations on the state of the art in potentially applicable systems, as well as to discuss areas of research that might be considered for inclusion within the Synthetic Vision Element program to contribute to the reduction of the fatal aircraft accident rate. Panel discussions on topical areas such as databases, displays, certification issues, and sensors were conducted, with time allowed for audience participation.

  4. An assembly system based on industrial robot with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly locate the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it achieves high efficiency and good applicability.
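
    Using a GA for inverse kinematics, as the paper does, means searching joint space for angles whose forward kinematics land on a vision-derived target. The sketch below does this for a planar 2-link arm; the link lengths, population size, and selection/crossover/mutation scheme are all assumptions for illustration, not the paper's actual GA.

```python
import numpy as np

def ga_ik(target, l1=1.0, l2=1.0, pop=60, gens=120, seed=0):
    """Genetic-algorithm sketch of inverse kinematics for a planar
    2-link arm: evolve (theta1, theta2) to minimize end-effector error."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(-np.pi, np.pi, (pop, 2))   # candidate joint angles

    def end_effector(a):
        # forward kinematics of the 2-link arm
        x = l1 * np.cos(a[:, 0]) + l2 * np.cos(a[:, 0] + a[:, 1])
        y = l1 * np.sin(a[:, 0]) + l2 * np.sin(a[:, 0] + a[:, 1])
        return np.c_[x, y]

    for _ in range(gens):
        err = np.linalg.norm(end_effector(angles) - target, axis=1)
        order = np.argsort(err)
        elite = angles[order[: pop // 4]]           # selection: best quarter
        # crossover: average random elite pairs; mutation: Gaussian noise
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        angles = parents.mean(axis=1) + rng.normal(0, 0.05, (pop, 2))
        angles[0] = elite[0]                        # elitism
    err = np.linalg.norm(end_effector(angles) - target, axis=1)
    return angles[np.argmin(err)], float(err.min())
```

In the assembly setting, `target` would come from triangulating the part's position with the stereo cameras and transforming it into the robot base frame.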

  5. Multi-Purpose Avionic Architecture for Vision Based Navigation Systems for EDL and Surface Mobility Scenarios

    NASA Astrophysics Data System (ADS)

    Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.

    2015-09-01

    Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve autonomy and safety of space missions. Several mission scenarios can benefit from the VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary cruise, Entry Descent and Landing (EDL) and Planetary Surface exploration. For some of them VBNAV can improve the accuracy in state estimation as additional relative navigation sensor or as absolute navigation sensor. For some others, like surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)” with special focus on the surface mobility application.

  6. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
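
    The shadow-domain idea behind SHADE can be illustrated on a 1-D elevation profile: a cell is shadowed when terrain between it and a low illumination source rises above the line of sight. The toy sweep below is a simplified stand-in for that geometry; the flat-earth assumption and all parameter values are illustrative, not the SHADE algorithm itself.

```python
import numpy as np

def shadow_mask(elev, spacing, source_elev_deg):
    """Mark which cells of a 1-D elevation profile are shadowed by
    terrain between them and a low source on the left, sweeping a
    descending 'shadow line' across the profile."""
    tan_s = np.tan(np.radians(source_elev_deg))
    elev = np.asarray(elev, dtype=float)
    shadowed = np.zeros(len(elev), dtype=bool)
    horizon = -np.inf                     # running height of the shadow line
    for i, h in enumerate(elev):
        horizon -= spacing * tan_s        # shadow line falls with distance
        if h < horizon:
            shadowed[i] = True            # terrain behind a higher obstacle
        else:
            horizon = h                   # this cell becomes the new obstacle
    return shadowed
```

Comparing such predicted shadow regions against radar returns is the kind of cross-check the integrity monitor performs, here reduced to its simplest form.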

  7. Vision and night driving abilities of elderly drivers.

    PubMed

    Gruber, Nicole; Mosimann, Urs P; Müri, René M; Nef, Tobias

    2013-01-01

    In this article, we review the impact of vision on older people's night driving abilities. Driving is the preferred and primary mode of transport for older people. It is a complex activity where intact vision is seminal for road safety. Night driving requires mesopic rather than scotopic vision, because there is always some light available when driving at night. Scotopic refers to night vision, photopic refers to vision under well-lit conditions, and mesopic vision is a combination of photopic and scotopic vision in low but not quite dark lighting situations. With increasing age, mesopic vision decreases and glare sensitivity increases, even in the absence of ocular diseases. Because of the increasing number of elderly drivers, more drivers are affected by night vision difficulties. Vision tests, which accurately predict night driving ability, are therefore of great interest. We reviewed existing literature on age-related influences on vision and vision tests that correlate or predict night driving ability. We identified several studies that investigated the relationship between vision tests and night driving. These studies found correlations between impaired mesopic vision or increased glare sensitivity and impaired night driving, but no correlation was found among other tests; for example, useful field of view or visual field. The correlation between photopic visual acuity, the most commonly used test when assessing elderly drivers, and night driving ability has not yet been fully clarified. Photopic visual acuity alone is not a good predictor of night driving ability. Mesopic visual acuity and glare sensitivity seem relevant for night driving. Due to the small number of studies evaluating predictors for night driving ability, further research is needed.

  8. Neural network expert system for X-ray analysis of welded joints

    NASA Astrophysics Data System (ADS)

    Kozlov, V. V.; Lapik, N. V.; Popova, N. V.

    2018-03-01

The use of intelligent technologies for the automated analysis of product quality is one of the main trends in modern machine building. At the same time, methods based on artificial neural networks, which form the basis of automated intelligent diagnostic systems, are developing rapidly across many spheres of human activity. Machine vision technologies make it possible to effectively detect characteristic regularities in the analyzed image, including defects of welded joints in radiography data.

  9. Institutional Vision at Proprietary Schools: Advising for Profit

    ERIC Educational Resources Information Center

    Abelman, Robert; Dalessandro, Amy; Janstova, Patricie; Snyder-Suhy, Sharon

    2007-01-01

    A college or university's general approach to students and student support services, as reflected in its institutional vision, can serve to advocate the adoption of one type of advising structure, approach, and delivery system over another. A content analysis of a nationwide sample of institutional vision statements from NACADA-membership colleges…

  10. Vision Integrating Strategies in Ophthalmology and Neurochemistry (VISION)

    DTIC Science & Technology

    2015-05-01

    ischemia. Neuroprotection with non-feminizing estrogens: Estrogens are well known as female sex hormones, but recent evidence also supports their...neuroprotective activities, which often can be separated from their feminizing activities. Non-feminizing estrogens can protect cultured retinal...feminizing estrogen analogues in retinal neurons. 2011 Society for Neuroscience Abstract 895.01. Mueller B, Ma H-Y, Yorio T. Inhibition of NMDA

  11. Mental stress as consequence and cause of vision loss: the dawn of psychosomatic ophthalmology for preventive and personalized medicine.

    PubMed

    Sabel, Bernhard A; Wang, Jiaqi; Cárdenas-Morales, Lizbeth; Faiq, Muneeb; Heim, Christine

    2018-06-01

The loss of vision after damage to the retina, optic nerve, or brain often has grave consequences in everyday life, such as problems with recognizing faces, reading, or mobility. Because vision loss is considered to be irreversible and often progressive, patients experience continuous mental stress due to worries, anxiety, or fear with secondary consequences such as depression and social isolation. While prolonged mental stress is clearly a consequence of vision loss, it may also aggravate the situation. In fact, continuous stress and elevated cortisol levels negatively impact the eye and brain due to autonomous nervous system (sympathetic) imbalance and vascular dysregulation; hence stress may also be one of the major causes of visual system diseases such as glaucoma and optic neuropathy. Although stress is a known risk factor, its causal role in the development or progression of certain visual system disorders is not widely appreciated. This review of the literature discusses the relationship of stress and ophthalmological diseases. We conclude that stress is both consequence and cause of vision loss. This creates a vicious downward spiral, in which initial vision loss creates stress, which further accelerates vision loss, creating even more stress, and so forth. This new psychosomatic perspective has several implications for clinical practice. Firstly, stress reduction and relaxation techniques (e.g., meditation, autogenic training, stress management training, and psychotherapy to learn to cope) should be recommended not only as complementary to traditional treatments of vision loss but possibly as preventive means to reduce progression of vision loss. Secondly, doctors should try their best to inculcate positivity and optimism in their patients while giving them the information the patients are entitled to, especially regarding the important value of stress reduction. In this way, the vicious cycle could be interrupted.
More clinical studies are now needed to confirm the causal role of stress in different low vision diseases and to evaluate, in randomized trials, the efficacy of different anti-stress therapies for preventing progression and improving vision recovery and restoration, as a foundation of psychosomatic ophthalmology.

  12. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    DTIC Science & Technology

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  13. A vision-based approach for the direct measurement of displacements in vibrating systems

    NASA Astrophysics Data System (ADS)

    Mazen Wahbeh, A.; Caffrey, John P.; Masri, Sami F.

    2003-10-01

This paper reports the results of an analytical and experimental study to develop, calibrate, implement and evaluate the feasibility of a novel vision-based approach for obtaining direct measurements of the absolute displacement time history at selectable locations of dispersed civil infrastructure systems such as long-span bridges. The measurements were obtained using a highly accurate camera in conjunction with a laser tracking reference. Calibration of the vision system was conducted in the lab to establish performance envelopes and data processing algorithms to extract the needed information from the captured vision scene. Subsequently, the monitoring apparatus was installed in the vicinity of the Vincent Thomas Bridge in the metropolitan Los Angeles region. This allowed the deployment of the instrumentation system under realistic conditions so as to determine field implementation issues that need to be addressed. It is shown that the proposed approach has the potential of leading to an economical and robust system for obtaining direct, simultaneous measurements at several locations of the displacement time histories of realistic infrastructure systems undergoing complex three-dimensional deformations.
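
    At its simplest, extracting a displacement time history from tracked image coordinates requires only a pixel-to-length calibration. The sketch below uses the tracked target's known physical size to set the scale; this is a simplified illustration of the general idea, and the constants and function name are assumptions, not the paper's processing algorithms.

```python
import numpy as np

def displacement_history(centroids_px, target_size_mm, target_size_px):
    """Turn per-frame pixel centroids of a tracked target into a
    displacement time history, using the target's known physical size
    for scale calibration (displacement is relative to the first frame)."""
    scale = target_size_mm / target_size_px       # mm per pixel
    c = np.asarray(centroids_px, dtype=float)
    return (c - c[0]) * scale
```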

  14. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

A system was developed for displaying computer graphics images of space objects and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense is involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing imaging processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.


  15. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    NASA Astrophysics Data System (ADS)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

In order to overcome the low efficiency and restricted measuring range of traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, which is designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six track markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe can be located by the use of the stereo vision system and track markers, and the 3D coordinates of a space point on the workpiece can be measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range, and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
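
    Computing the tip position from tracked markers is a rigid-body pose problem: fit the transform taking the probe's reference marker layout to the observed marker positions, then apply it to the known tip offset. The sketch below uses the standard Kabsch (SVD) algorithm; the marker layout and tip offset in the test are illustrative assumptions, not this system's calibration data.

```python
import numpy as np

def probe_tip(ref_markers, obs_markers, tip_in_probe):
    """Locate the touch-probe tip from tracked marker positions.

    Fits the rigid transform (Kabsch algorithm) taking the probe's
    reference marker layout to the observed markers, then applies it
    to the known tip offset in the probe's own frame.
    """
    A = np.asarray(ref_markers, dtype=float)
    B = np.asarray(obs_markers, dtype=float)
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of markers
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R @ np.asarray(tip_in_probe, dtype=float) + t
```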

  16. U.S. Geological Survey core science systems strategy: characterizing, synthesizing, and understanding the critical zone through a modular science framework

    USGS Publications Warehouse

    Bristol, R. Sky; Euliss, Ned H.; Booth, Nathaniel L.; Burkardt, Nina; Diffendorfer, Jay E.; Gesch, Dean B.; McCallum, Brian E.; Miller, David M.; Morman, Suzette A.; Poore, Barbara S.; Signell, Richard P.; Viger, Roland J.

    2013-01-01

Core Science Systems is a new mission of the U.S. Geological Survey (USGS) that resulted from the 2007 Science Strategy, "Facing Tomorrow's Challenges: U.S. Geological Survey Science in the Decade 2007-2017." This report describes the Core Science Systems vision and outlines a strategy to facilitate integrated characterization and understanding of the complex Earth system. The vision and suggested actions are bold and far-reaching, describing a conceptual model and framework to enhance the ability of the USGS to bring its core strengths to bear on pressing societal problems through data integration and scientific synthesis across the breadth of science. The context of this report is inspired by a direction set forth in the 2007 Science Strategy. Specifically, ecosystem-based approaches provide the underpinnings for essentially all science themes that define the USGS. Every point on Earth falls within a specific ecosystem where data, other information assets, and the expertise of USGS and its many partners can be employed to quantitatively understand how that ecosystem functions and how it responds to natural and anthropogenic disturbances. Every benefit society obtains from the planet (food, water, raw materials to build infrastructure, homes and automobiles, fuel to heat homes and cities, and many others) is derived from or affects ecosystems. The vision for Core Science Systems builds on core strengths of the USGS in characterizing and understanding complex Earth and biological systems through research, modeling, mapping, and the production of high quality data on the Nation's natural resource infrastructure. Together, these research activities provide a foundation for ecosystem-based approaches through geologic mapping, topographic mapping, and biodiversity mapping.
The vision describes a framework founded on these core mapping strengths that makes it easier for USGS scientists to discover critical information, share and publish results, and identify potential collaborations that transcend all USGS missions. The framework is designed to improve the efficiency of scientific work within USGS by establishing a means to preserve and recall data for future applications, organizing existing scientific knowledge and data to facilitate new use of older information, and establishing a future workflow that naturally integrates new data, applications, and other science products to make interdisciplinary research easier and more efficient. Given the increasing need for integrated data and interdisciplinary approaches to solve modern problems, leadership by the Core Science Systems mission will facilitate problem solving by all USGS missions in ways not formerly possible. The report lays out a strategy to achieve this vision through three goals with accompanying objectives and actions. The first goal builds on and enhances the strengths of the Core Science Systems mission in characterizing and understanding the Earth system from the geologic framework to the topographic characteristics of the land surface and biodiversity across the Nation. The second goal enhances and develops new strengths in computer and information science to make it easier for USGS scientists to discover data and models, share and publish results, and discover connections between scientific information and knowledge. The third goal brings additional focus to research and development methods to address complex issues affecting society that require integration of knowledge and new methods for synthesizing scientific information. 
Collectively, the report lays out a strategy to create a seamless connection between all USGS activities to accelerate and make USGS science more efficient by fully integrating disciplinary expertise within a new and evolving science paradigm for a changing world in the 21st century.

  17. Science strategy for Core Science Systems in the U.S. Geological Survey, 2013-2023

    USGS Publications Warehouse

    Bristol, R. Sky; Euliss, Ned H.; Booth, Nathaniel L.; Burkardt, Nina; Diffendorfer, Jay E.; Gesch, Dean B.; McCallum, Brian E.; Miller, David M.; Morman, Suzette A.; Poore, Barbara S.; Signell, Richard P.; Viger, Roland J.

    2012-01-01

Core Science Systems is a new mission of the U.S. Geological Survey (USGS) that grew out of the 2007 Science Strategy, “Facing Tomorrow’s Challenges: U.S. Geological Survey Science in the Decade 2007–2017.” This report describes the vision for this USGS mission and outlines a strategy for Core Science Systems to facilitate integrated characterization and understanding of the complex earth system. The vision and suggested actions are bold and far-reaching, describing a conceptual model and framework to enhance the ability of USGS to bring its core strengths to bear on pressing societal problems through data integration and scientific synthesis across the breadth of science. The context of this report is inspired by a direction set forth in the 2007 Science Strategy. Specifically, ecosystem-based approaches provide the underpinnings for essentially all science themes that define the USGS. Every point on earth falls within a specific ecosystem where data, other information assets, and the expertise of USGS and its many partners can be employed to quantitatively understand how that ecosystem functions and how it responds to natural and anthropogenic disturbances. Every benefit society obtains from the planet (food, water, raw materials to build infrastructure, homes and automobiles, fuel to heat homes and cities, and many others) is derived from or affects ecosystems. The vision for Core Science Systems builds on core strengths of the USGS in characterizing and understanding complex earth and biological systems through research, modeling, mapping, and the production of high quality data on the nation’s natural resource infrastructure. Together, these research activities provide a foundation for ecosystem-based approaches through geologic mapping, topographic mapping, and biodiversity mapping.
The vision describes a framework founded on these core mapping strengths that makes it easier for USGS scientists to discover critical information, share and publish results, and identify potential collaborations that transcend all USGS missions. The framework is designed to improve the efficiency of scientific work within USGS by establishing a means to preserve and recall data for future applications, organizing existing scientific knowledge and data to facilitate new use of older information, and establishing a future workflow that naturally integrates new data, applications, and other science products to make it easier and more efficient to conduct interdisciplinary research over time. Given the increasing need for integrated data and interdisciplinary approaches to solve modern problems, leadership by the Core Science Systems mission will facilitate problem solving by all USGS missions in ways not formerly possible. The report lays out a strategy to achieve this vision through three goals with accompanying objectives and actions. The first goal builds on and enhances the strengths of the Core Science Systems mission in characterizing and understanding the earth system from the geologic framework to the topographic characteristics of the land surface and biodiversity across the nation. The second goal enhances and develops new strengths in computer and information science to make it easier for USGS scientists to discover data and models, share and publish results, and discover connections between scientific information and knowledge. The third goal brings additional focus to research and development methods to address complex issues affecting society that require integration of knowledge and new methods for synthesizing scientific information.
Collectively, the report lays out a strategy to create a seamless connection between all USGS activities to accelerate and make USGS science more efficient by fully integrating disciplinary expertise within a new and evolving science paradigm for a changing world in the 21st century.

  18. Line width determination using a biomimetic fly eye vision system.

    PubMed

    Benson, John B; Wright, Cameron H G; Barrett, Steven F

    2007-01-01

    Developing a new vision system based on the vision of the common house fly, Musca domestica, has created many interesting design challenges. One of those problems is line width determination, which is the topic of this paper. It has been discovered that line width can be determined with a single sensor as long as either the sensor or the object in question has a constant, known velocity. This is an important first step toward determining the width of an arbitrary object with unknown velocity.
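The single-sensor principle described above can be sketched in a few lines: with a constant, known relative velocity, line width is just velocity multiplied by the time the line's image occupies the sensor. The function and sample values below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): with a constant,
# known relative velocity v, a single photosensor can infer line width
# from the dwell time of the line on the sensor: width = v * dwell_time.

def line_width(samples, sample_rate_hz, velocity_mm_s, threshold=0.5):
    """Estimate line width from a 1-D stream of sensor readings.

    samples: sensor intensities over time
    sample_rate_hz: sampling frequency
    velocity_mm_s: known constant velocity of the sensor or the object
    """
    # Indices where the sensor sees the line (reading above threshold)
    on = [i for i, s in enumerate(samples) if s > threshold]
    if not on:
        return 0.0
    duration_s = (on[-1] - on[0] + 1) / sample_rate_hz
    return velocity_mm_s * duration_s

# A line passing at 10 mm/s, sampled at 100 Hz, seen for 20 samples -> 2 mm
readings = [0.0] * 40 + [1.0] * 20 + [0.0] * 40
print(line_width(readings, 100.0, 10.0))  # -> 2.0
```

The same arithmetic breaks down when the velocity is unknown, which is why the abstract calls the known-velocity case only a first step.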

  19. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The view of the flight approach area can be displayed dynamically according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the approach area of the flight destination. Used in pilots' preflight preparation, this system gives the aircrew more vivid information about the destination approach area. It improves the aviator's confidence before carrying out the flight mission and, accordingly, improves flight safety. The system is also useful for validating visual flight procedure designs and thus aids flight procedure design.

  20. Mobile camera-space manipulation

    NASA Technical Reports Server (NTRS)

    Seelinger, Michael J. (Inventor); Yoder, John-David S. (Inventor); Skaar, Steven B. (Inventor)

    2001-01-01

    The invention is a method of using computer vision to control systems consisting of a combination of holonomic and nonholonomic degrees of freedom such as a wheeled rover equipped with a robotic arm, a forklift, and earth-moving equipment such as a backhoe or a front-loader. Using vision sensors mounted on the mobile system and the manipulator, the system establishes a relationship between the internal joint configuration of the holonomic degrees of freedom of the manipulator and the appearance of features on the manipulator in the reference frames of the vision sensors. Then, the system, perhaps with the assistance of an operator, identifies the locations of the target object in the reference frames of the vision sensors. Using this target information, along with the relationship described above, the system determines a suitable trajectory for the nonholonomic degrees of freedom of the base to follow towards the target object. The system also determines a suitable pose or series of poses for the holonomic degrees of freedom of the manipulator. With additional visual samples, the system automatically updates the trajectory and final pose of the manipulator so as to allow for greater precision in the overall final position of the system.

  1. Light Vision Color

    NASA Astrophysics Data System (ADS)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines the basics of vision science with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low vision rehabilitation, and the basic molecular biology and genetics of colour vision. It takes a broad interdisciplinary approach, combining the basics of vision science with the most recent developments in the area; includes an extensive list of technical terms and explanations to encourage student understanding; and successfully brings together the most important areas of the subject into one volume.

  2. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-07-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. Even when evaluating system calibration algorithms, there is an apparent need to correct for a scene's deviation from the basic inverse distance-squared law governing detection rates. In particular, the computer vision system provides a map of the distance dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
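The inverse distance-squared correction mentioned above can be sketched as follows. The function name, reference distance, and sample values are illustrative assumptions, not the authors' calibration algorithm: the vision tracker supplies per-timestep source distances, and each raw count rate is rescaled to a common reference distance so a source of constant activity looks constant along its whole trajectory.

```python
# Hedged sketch: correct radiological count rates for 1/d^2 falloff using
# distances supplied by the computer vision tracker. Names and the
# reference distance are illustrative, not from the paper.

def distance_corrected_rate(counts, distances, ref_distance=1.0):
    """Scale each count rate to what it would be at ref_distance."""
    return [c * (d / ref_distance) ** 2 for c, d in zip(counts, distances)]

# A source of constant intrinsic activity observed at growing distance:
raw = [400.0, 100.0, 44.4]   # counts/s, falling off roughly as 1/d^2
dist = [1.0, 2.0, 3.0]       # metres, from the vision tracker
print(distance_corrected_rate(raw, dist))  # roughly constant, near 400
```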

  3. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
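The fall-back hierarchy described above can be sketched as a simple dispatcher: fast, assumption-heavy routines run first, and on failure control escalates to slower but more reliable levels. All function names below are invented for illustration; they are not the paper's routines.

```python
# Schematic sketch of hierarchical error recovery: try each vision level
# in order from fastest (most assumptions) to slowest (most reliable).

def run_hierarchy(levels, image):
    """levels: callables ordered fastest-first; each returns a result,
    or None when its assumptions about the image fail."""
    for level in levels:
        result = level(image)
        if result is not None:
            return result
    raise RuntimeError("all vision levels failed")

def fast_window_tracker(img):
    # Assumes the feature stayed inside a small region-of-interest window
    return img.get("roi_feature")   # None when that assumption fails

def full_frame_search(img):
    # Slower, fewer assumptions: scan the whole image
    return img.get("feature")

# ROI assumption fails here, so control falls through to the full search
print(run_hierarchy([fast_window_tracker, full_frame_search],
                    {"feature": (42, 17)}))  # -> (42, 17)
```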

  4. Driver hand activity analysis in naturalistic driving studies: challenges, algorithms, and experimental studies

    NASA Astrophysics Data System (ADS)

    Ohn-Bar, Eshed; Martin, Sujitha; Trivedi, Mohan Manubhai

    2013-10-01

    We focus on vision-based hand activity analysis in the vehicular domain. The study is motivated by the overarching goal of understanding driver behavior, in particular as it relates to attentiveness and risk. First, the unique advantages and challenges for a nonintrusive, vision-based solution are reviewed. Next, two approaches for hand activity analysis, one relying on static (appearance only) cues and another on dynamic (motion) cues, are compared. The motion-cue-based hand detection uses temporally accumulated edges in order to maintain the most reliable and relevant motion information. The accumulated image is fitted with ellipses in order to produce the location of the hands. The method is used to identify three hand activity classes: (1) two hands on the wheel, (2) hand on the instrument panel, (3) hand on the gear shift. The static-cue-based method extracts features in each frame in order to learn a hand presence model for each of the three regions. A second-stage classifier (linear support vector machine) produces the final activity classification. Experimental evaluation with different users and environmental variations under real-world driving shows the promise of applying the proposed systems for both postanalysis of captured driving data as well as for real-time driver assistance.
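A minimal sketch of the motion-cue idea above, under stated assumptions (synthetic edge maps, an arbitrary decay factor, and a moments-based ellipse in place of whatever fitting method the authors used): edge maps are accumulated over time with exponential decay so recent motion dominates, and the mean and covariance of the accumulated edge mass give the centre and axes of an ellipse locating the moving hand.

```python
import numpy as np

def accumulate_edges(edge_frames, decay=0.8):
    """Temporally accumulate binary edge maps; recent frames dominate."""
    acc = np.zeros_like(edge_frames[0], dtype=float)
    for e in edge_frames:
        acc = decay * acc + e
    return acc

def moment_ellipse(acc):
    """Centroid and covariance (ellipse centre/axes) of the edge mass."""
    ys, xs = np.nonzero(acc > 0)
    w = acc[ys, xs]
    cx, cy = np.average(xs, weights=w), np.average(ys, weights=w)
    cov = np.cov(np.vstack([xs, ys]), aweights=w)
    return (cx, cy), cov

# Synthetic "hand" edge blob drifting right across three frames
frames = []
for x0 in (10, 12, 14):
    f = np.zeros((40, 40))
    f[18:22, x0:x0 + 4] = 1.0
    frames.append(f)

(cx, cy), cov = moment_ellipse(accumulate_edges(frames))
print(round(cx, 1), round(cy, 1))  # -> 13.8 19.5  (biased toward the latest position)
```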

  5. Eye movement-invariant representations in the human visual system.

    PubMed

    Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L

    2017-01-01

    During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
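The ratio logic of the experiment can be illustrated with a schematic numpy sketch (synthetic data; the paper's actual similarity metric, fMRI preprocessing, and voxel selection are not reproduced here): correlate a voxel's time course across repeats within the fixation condition, correlate it between fixation and free viewing, and take the ratio. An index near 1 suggests an eye-movement-invariant response.

```python
import numpy as np

def invariance_index(fix_rep1, fix_rep2, free):
    """Ratio of fixation-vs-free-viewing similarity to within-fixation
    similarity; values near 1 indicate eye-movement invariance."""
    r_within = np.corrcoef(fix_rep1, fix_rep2)[0, 1]
    r_between = np.corrcoef(fix_rep1, free)[0, 1]
    return r_between / r_within

rng = np.random.default_rng(7)
stimulus = rng.normal(size=200)   # movie-driven response component

# An "invariant" voxel responds the same way regardless of eye movements,
# differing across runs only by measurement noise
fix1 = stimulus + rng.normal(0, 0.3, 200)
fix2 = stimulus + rng.normal(0, 0.3, 200)
free = stimulus + rng.normal(0, 0.3, 200)
print(invariance_index(fix1, fix2, free) > 0.8)  # -> True
```

An eye-movement-sensitive voxel would carry an extra component in the free-viewing run, pushing the index well below 1.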

  6. Sensor Needs for Control and Health Management of Intelligent Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay; Hunter, Gary W.; Guo, Ten-Huei; Semega, Kenneth J.

    2004-01-01

    NASA and the U.S. Department of Defense are conducting programs which support the future vision of "intelligent" aircraft engines for enhancing the affordability, performance, operability, safety, and reliability of aircraft propulsion systems. Intelligent engines will have advanced control and health management capabilities enabling these engines to be self-diagnostic, self-prognostic, and adaptive to optimize performance based upon the current condition of the engine or the current mission of the vehicle. Sensors are a critical technology necessary to enable the intelligent engine vision as they are relied upon to accurately collect the data required for engine control and health management. This paper reviews the anticipated sensor requirements to support the future vision of intelligent engines from a control and health management perspective. Propulsion control and health management technologies are discussed in the broad areas of active component controls, propulsion health management and distributed controls. In each of these three areas individual technologies will be described, input parameters necessary for control feedback or health management will be discussed, and sensor performance specifications for measuring these parameters will be summarized.

  7. A method for work modeling at complex systems: towards applying information systems in family health care units.

    PubMed

    Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques

    2012-01-01

    Work in organizations requires a minimum level of consensus on the understanding of the practices performed. Adopting technological devices to support activities in environments where work is complex, characterized by interdependence among a large number of variables, makes understanding how work is done not only more important but also more difficult. This study therefore presents a method for modeling work in complex systems, which improves knowledge of the way activities are performed where those activities do not simply consist of executing procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to provide a method capable of giving a detailed and accurate vision of how people perform their tasks, in order to apply information systems to support work in organizations.

  8. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histogram for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using OpenCV library on Intel compatible notebook computers, running Ubuntu Linux operating system, with less than 8GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
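An illustrative sketch of the histogram approach above, with assumptions made explicit: a pure-numpy stand-in for the paper's OpenCV pipeline, an arbitrary detection threshold, and synthetic frames. Each frame's RGB colour histogram is compared against a background reference, and frames whose histogram distance exceeds the threshold are flagged.

```python
import numpy as np

def rgb_histogram(frame, bins=8):
    """Concatenated per-channel histogram, normalised to sum to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def is_intruder(frame, background_hist, threshold=0.1):
    # L1 distance between histograms; a large distance means the colour
    # distribution of the scene has changed
    dist = np.abs(rgb_histogram(frame) - background_hist).sum()
    return dist > threshold

rng = np.random.default_rng(0)
dark = rng.integers(0, 30, size=(48, 64, 3))   # low-light background frame
bg_hist = rgb_histogram(dark)
intruder = dark.copy()
intruder[10:30, 20:40] = 200                   # brighter object enters
print(is_intruder(dark, bg_hist), is_intruder(intruder, bg_hist))
# -> False True
```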

  9. The Dorsal Visual System Predicts Future and Remembers Past Eye Position

    PubMed Central

    Morris, Adam P.; Bremmer, Frank; Krekelberg, Bart

    2016-01-01

    Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually-guided behavior. PMID:26941617
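The weighted-sum pooling idea can be illustrated with a simplified simulation (all tuning parameters and the least-squares readout below are assumptions for illustration, not the paper's fitted model): each simulated neuron's rate follows a time-lagged copy of eye position, and a downstream weighted sum of the population is fit to reproduce the true trajectory. Shifting the target trace would instead fit a predictive or postdictive variant.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)
eye = np.where((t // 60) % 2 == 0, -5.0, 5.0)   # square-wave "saccadic" eye trace (deg)

# Simulated population: each neuron follows eye position at its own lag,
# plus firing-rate noise (one lag-0 neuron included explicitly)
lags = np.concatenate(([0], rng.integers(0, 10, size=19)))
rates = np.stack([np.roll(eye, lag) for lag in lags])
rates = rates + rng.normal(0, 0.5, size=rates.shape)

# Fit pooling weights so a weighted sum of rates reproduces eye position
# (fitting a shifted copy of `eye` here would yield a time-lagged readout)
w, *_ = np.linalg.lstsq(rates.T, eye, rcond=None)
decoded = w @ rates
err = float(np.sqrt(np.mean((decoded - eye) ** 2)))
print(err < 1.0)  # the pooled signal tracks the true eye position closely
```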

  10. Emergence of a rehabilitation medicine model for low vision service delivery, policy, and funding.

    PubMed

    Stelmack, Joan

    2005-05-01

    A rehabilitation medicine model for low vision rehabilitation is emerging. There have been many challenges to reaching consensus on the roles of each discipline (optometry, ophthalmology, occupational therapy, and vision rehabilitation professionals) in the service delivery model and finding a place in the reimbursement system for all the providers. The history of low vision, legislation associated with Centers for Medicare and Medicaid Services coverage for vision rehabilitation, and research on the effectiveness of low vision service delivery are reviewed. Vision rehabilitation is now covered by Medicare under Physical Medicine and Rehabilitation codes by some Medicare carriers, yet reimbursement is not available for low vision devices or refraction. Also, the role of vision rehabilitation professionals (rehabilitation teachers, orientation and mobility specialists, and low vision therapists) in the model needs to be determined. In a recent systematic review of the scientific literature on the effectiveness of low vision services contracted by the Agency for Health Care Quality Research, no clinical trials were found. The literature consists primarily of longitudinal case studies, which provide weak support for third-party funding for vision rehabilitative services. Providers need to reach consensus on medical necessity, treatment plans, and protocols. Research on low vision outcomes is needed to develop an evidence base to guide clinical practice, policy, and funding decisions.

  11. The Glenn A. Fry Award Lecture 2012: Plasticity of the visual system following central vision loss.

    PubMed

    Chung, Susana T L

    2013-06-01

    Following the onset of central vision loss, most patients develop an eccentric retinal location outside the affected macular region, the preferred retinal locus (PRL), as their new reference for visual tasks. The first goal of this article is to present behavioral evidence showing the presence of experience-dependent plasticity in people with central vision loss. The evidence includes the presence of oculomotor re-referencing of fixational saccades to the PRL; the characteristics of the shape of the crowding zone (spatial region within which the presence of other objects affects the recognition of a target) at the PRL are more "foveal-like" instead of resembling those of the normal periphery; and the change in the shape of the crowding zone at a para-PRL location that includes a component referenced to the PRL. These findings suggest that there is a shift in the referencing locus of the oculomotor and the sensory visual system from the fovea to the PRL for people with central vision loss, implying that the visual system for these individuals is still plastic and can be modified through experiences. The second goal of the article is to demonstrate the feasibility of applying perceptual learning, which capitalizes on the presence of plasticity, as a tool to improve functional vision for people with central vision loss. Our finding that visual function could improve with perceptual learning presents an exciting possibility for the development of an alternative rehabilitative strategy for people with central vision loss.

  12. Obstacle Detection as a Safety Alert in Augmented Reality Models by the Use of Deep Learning Techniques

    PubMed Central

    Kęsik, Karolina; Książek, Kamil

    2017-01-01

    Augmented reality (AR) is becoming increasingly popular due to its numerous applications. This is especially evident in games, medicine, education, and other areas that support our everyday activities. Moreover, this kind of computer system not only improves our vision and our perception of the world that surrounds us, but also adds additional elements, modifies existing ones, and gives additional guidance. In this article, we focus on interpreting a reality-based real-time environment evaluation for informing the user about impending obstacles. The proposed solution is based on a hybrid architecture that is capable of estimating as much incoming information as possible. The proposed solution has been tested and discussed with respect to the advantages and disadvantages of different possibilities using this type of vision. PMID:29207564

  13. The Robotic Lunar Exploration Program (RLEP): An Introduction to the Goals, Approach, and Architecture

    NASA Technical Reports Server (NTRS)

    Watzin, James G.; Burt, Joseph; Tooley, Craig

    2004-01-01

    The Vision for Space Exploration calls for undertaking lunar exploration activities to enable sustained human and robotic exploration of Mars and beyond, including more distant destinations in the solar system. In support of this vision, the Robotic Lunar Exploration Program (RLEP) is expected to execute a series of robotic missions to the Moon, starting in 2008, in order to pave the way for further human space exploration. This paper will give an introduction to the RLEP program office, its role and its goals, and the approach it is taking to executing the charter of the program. The paper will also discuss candidate architectures that are being studied as a framework for defining the RLEP missions and the context in which they will evolve.

  14. Obstacle Detection as a Safety Alert in Augmented Reality Models by the Use of Deep Learning Techniques.

    PubMed

    Połap, Dawid; Kęsik, Karolina; Książek, Kamil; Woźniak, Marcin

    2017-12-04

    Augmented reality (AR) is becoming increasingly popular due to its numerous applications. This is especially evident in games, medicine, education, and other areas that support our everyday activities. Moreover, this kind of computer system not only improves our vision and our perception of the world that surrounds us, but also adds additional elements, modifies existing ones, and gives additional guidance. In this article, we focus on interpreting a reality-based real-time environment evaluation for informing the user about impending obstacles. The proposed solution is based on a hybrid architecture that is capable of estimating as much incoming information as possible. The proposed solution has been tested and discussed with respect to the advantages and disadvantages of different possibilities using this type of vision.

  15. Monitoring system of multiple fire fighting based on computer vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for fighting multiple fires based on computer vision and color detection. The system can orient itself to the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment and system calibration are described in detail, and the design of the relevant hardware and software is introduced. The principle and process of color detection and image processing are given as well. The system ran well in tests; it has high reliability and low cost, and its nodes are easily expanded, giving it a bright prospect for application and popularization.

  16. America at the threshold. [Contains bibliography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-01-01

    On the 20th anniversary of the first lunar landing mission, Apollo 11, President Bush outlined a program that would put the United States on an aggressive track to return to the Moon to stay, and to land humans on Mars. The president's space policy calls for expanding human presence and activity beyond Earth orbit into the Solar System; obtaining scientific, technological and economic benefits for the American people; encouraging private sector participation in space; improving the quality of life on the Earth; strengthening national security; and promoting international cooperation in space. The Space Exploration Initiative accomplishes these goals. In August 1989, NASA began an extensive review to summarize the technology and strategies for going back to the Moon and on to Mars. To achieve the final objective, major topical activities were defined. These activities were incremental capabilities to be achieved to fulfill the national space vision. They include: (1) Moon waypoints (lunar exploration; preparation for Mars; habitation; lunar-based observation; fuels; energy to Earth); (2) asteroid waypoints; and (3) Mars waypoints. The six national space vision goals are: (1) to increase our knowledge of the Solar System and beyond; (2) to rejuvenate interest in science and engineering; (3) to refocus the US position of world leadership (from military to economic and scientific); (4) to develop technology with terrestrial applications; (5) to facilitate further space exploration and commercialization; and (6) to boost the US economy. 126 refs.

  17. 3D Geometrical Inspection of Complex Geometry Parts Using a Novel Laser Triangulation Sensor and a Robot

    PubMed Central

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor setup, including the motorized linear stage, to be prepared for scanning without external measurement devices. In the measurement model the robot acts only as a positioner of parts with high repeatability. Its position and orientation data are not used for the measurement and therefore it is not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own errors in following a trajectory, except those due to lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed needs only one first piece measured as a “zero” or master piece, known through its accurate measurement using, for example, a Coordinate Measuring Machine. The proposed strategy presents a different approach from traditional laser triangulation systems on board the robot in order to improve measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569

  18. Status Report for Remediation Decision Support Project, Task 1, Activity 1.B – Physical and Hydraulic Properties Database and Interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rockhold, Mark L.

    2008-09-26

    The objective of Activity 1.B of the Remediation Decision Support (RDS) Project is to compile all available physical and hydraulic property data for sediments from the Hanford Site, to port these data into the Hanford Environmental Information System (HEIS), and to make the data web-accessible to anyone on the Hanford Local Area Network via the so-called Virtual Library. In past years, RDS project staff compiled all available physical and hydraulic property data for Hanford sediments and transferred these data into SoilVision®, a commercial geotechnical software package designed for storing, analyzing, and manipulating soils data. Although SoilVision® has proven to be useful, its access and use restrictions have been recognized as a limitation to the effective use of the physical and hydraulic property databases by the broader group of potential users involved in Hanford waste site issues. In order to make these data more widely available and usable, a decision was made to port them to HEIS and to make them web-accessible via a Virtual Library module. In FY08 the objectives of Activity 1.B of the RDS Project were to: (1) ensure traceability and defensibility of all physical and hydraulic property data currently residing in the SoilVision® database maintained by PNNL, (2) transfer the physical and hydraulic property data from the Microsoft Access database files used by SoilVision® into HEIS, which has most recently been maintained by Fluor-Hanford, Inc., (3) develop a Virtual Library module for accessing these data from HEIS, and (4) write a user's manual for the Virtual Library module. The development of the Virtual Library module was to be performed by a third party under subcontract to Fluor. The intent of these activities is to make the available physical and hydraulic property data more readily accessible and usable by technical staff and operable-unit managers involved in waste site assessments and remedial action decisions for Hanford. This status report describes the history of this development effort and progress to date.

  19. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    On the basis of the object-information processing mechanism of frog's eyes, this paper discusses a bionic detection technology suitable for object-information processing based on frog vision. First, a bionic detection theory imitating frog vision is established: a parallel processing mechanism comprising pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a particular color and shape; experiments indicate that such objects can be detected even against a cluttered, interfering background. A moving-object detection electronic model imitating biological vision based on frog's eyes is established. In this system the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing, the video information is captured, processed and displayed at the same time, and information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can watch a bigger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm based on this system indicate that the system can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology imitating biological vision.

  20. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

    A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted at real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected to their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.

  1. Unification of automatic target tracking and automatic target recognition

    NASA Astrophysics Data System (ADS)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including the retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms operating at varying time scales. A framework is provided for unifying ATT and ATR.
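
    The biological "anticipation" described above - forecasting a tracked object's future position rather than relying on received retinal position - can be illustrated by the simplest possible predictor, a constant-velocity extrapolation from the last two observations. This is a toy analogue for illustration, not the paper's framework.

```python
def predict_position(track, dt=1.0):
    """Forecast the next position of a tracked object from its last
    two observations, assuming constant velocity -- a minimal analogue
    of the predictive tracking described in the abstract."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # estimated velocity
    return (x1 + vx * dt, y1 + vy * dt)

# Target moving right at 2 units/frame and down at 1 unit/frame.
observations = [(0.0, 0.0), (2.0, 1.0)]
print(predict_position(observations))   # (4.0, 2.0)
```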

  2. Meditations on the new space vision: The Moon as a stepping stone to Mars

    NASA Astrophysics Data System (ADS)

    Mendell, W. W.

    2005-07-01

    The Vision for Space Exploration invokes activities on the Moon in preparation for exploration of Mars and also directs International Space Station (ISS) research toward the same goal. Lunar missions will emphasize development of capability and concomitant reduction of risk for future exploration of Mars. Earlier papers identified three critical issues related to the so-called NASA Mars Design Reference Mission (MDRM) to be addressed in the lunar context: (a) safety, health, and performance of the human crew; (b) various modalities of mission operations, ranging from surface activities to logistics, planning, and navigation; and (c) reliability and maintainability of systems in the planetary environment. In simple terms, lunar expeditions build a résumé that demonstrates the ability to design, construct, and operate an enterprise such as the MDRM with an expectation of mission success. We can evolve from Apollo-like missions to ones that resemble the complexity and duration of the MDRM. Investment in lunar resource utilization technologies falls naturally into the Vision. NASA must construct an exit strategy from the Moon in the third decade. With a mandate for continuing exploration, it cannot assume responsibility for long-term operation of lunar assets. Therefore, NASA must enter into a partnership with some other entity—governmental, international, or commercial—that can responsibly carry on lunar development past the exploration phase.

  3. Meditations on the new space vision: the Moon as a stepping stone to Mars.

    PubMed

    Mendell, W W

    2005-01-01

    The Vision for Space Exploration invokes activities on the Moon in preparation for exploration of Mars and also directs International Space Station (ISS) research toward the same goal. Lunar missions will emphasize development of capability and concomitant reduction of risk for future exploration of Mars. Earlier papers identified three critical issues related to the so-called NASA Mars Design Reference Mission (MDRM) to be addressed in the lunar context: (a) safety, health, and performance of the human crew; (b) various modalities of mission operations, ranging from surface activities to logistics, planning, and navigation; and (c) reliability and maintainability of systems in the planetary environment. In simple terms, lunar expeditions build a résumé that demonstrates the ability to design, construct, and operate an enterprise such as the MDRM with an expectation of mission success. We can evolve from Apollo-like missions to ones that resemble the complexity and duration of the MDRM. Investment in lunar resource utilization technologies falls naturally into the Vision. NASA must construct an exit strategy from the Moon in the third decade. With a mandate for continuing exploration, it cannot assume responsibility for long-term operation of lunar assets. Therefore, NASA must enter into a partnership with some other entity--governmental, international, or commercial--that can responsibly carry on lunar development past the exploration phase. Published by Elsevier Ltd.

  4. Universe exploration vision

    NASA Technical Reports Server (NTRS)

    O'Handley, D.; Swan, P.; Sadeh, W.

    1992-01-01

    U.S. space policy is discussed in terms of present and planned activities in the solar system and beyond to develop a concept for expanding space travel. The history of space exploration is briefly reviewed with references to the Mariner II, Apollo, and Discoverer programs. Attention is given to the issues related to return trips to the moon, sprint vs repetitive missions to Mars, and the implications of propulsion needs. The concept of terraforming other bodies within the solar system so that they can support human activity is identified as the next major phase of exploration. The following phase is considered to be the use of robotic or manned missions that extend beyond the solar system. Reference is given to a proposed Thousand Astronomical Units mission as a precursor to exploratory expansion into the universe, and current robotic mission activities are mentioned.

  5. An Energy Systems Perspective on Sustainability and the “Prosperous Way Down”

    EPA Science Inventory

    Energy Systems Theory provides a theoretical context for understanding, evaluating and interpreting shared social visions like “Growth”, “Sustainability” and “The Prosperous Way Down”. A social vision becomes dominant within society when a sufficient number of people recognize t...

  6. Highly polymorphic colour vision in a New World monkey with red facial skin, the bald uakari (Cacajao calvus).

    PubMed

    Corso, Josmael; Bowler, Mark; Heymann, Eckhard W; Roos, Christian; Mundy, Nicholas I

    2016-04-13

    Colour vision is highly variable in New World monkeys (NWMs). Evidence for the adaptive basis of colour vision in this group has largely centred on environmental features such as foraging benefits for differently coloured foods or predator detection, whereas selection on colour vision for sociosexual communication is an alternative hypothesis that has received little attention. The colour vision of uakaris (Cacajao) is of particular interest because these monkeys have the most dramatic red facial skin of any primate, as well as a unique fission/fusion social system and a specialist diet of seeds. Here, we investigate colour vision in a wild population of the bald uakari, C. calvus, by genotyping the X-linked opsin locus. We document the presence of a polymorphic colour vision system with an unprecedented number of functional alleles (six), including a novel allele with a predicted maximum spectral sensitivity of 555 nm. This supports the presence of strong balancing selection on different alleles at this locus. We consider different hypotheses to explain this selection. One possibility is that trichromacy functions in sexual selection, enabling females to choose high-quality males on the basis of red facial coloration. In support of this, there is some evidence that health affects facial coloration in uakaris, as well as a high prevalence of blood-borne parasitism in wild uakari populations. Alternatively, the low proportion of heterozygous female trichromats in the population may indicate selection on different dichromatic phenotypes, which might be related to cryptic food coloration. We have uncovered unexpected diversity in the last major lineage of NWMs to be assayed for colour vision, which will provide an interesting system to dissect adaptation of polymorphic trichromacy. © 2016 The Author(s).

  7. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color.

    PubMed

    Trinderup, Camilla H; Dahl, Anders; Jensen, Kirsten; Carstensen, Jens Michael; Conradsen, Knut

    2015-04-01

    The color assessment ability of a multispectral vision system is investigated by a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material, heterogeneous with varying scattering and reflectance properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis was conducted and showed that on a calibration sheet the two instruments are equally capable of measuring color. Moreover, the vision system provides a richer color assessment than the colorimeter for fresh meat samples with a glossier surface. Careful study of the different sources of variation enables an assessment of the magnitude of the between-method variability, accounting for the other sources of variation, and leads to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. Copyright © 2014 Elsevier Ltd. All rights reserved.
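
    Instrument comparisons of this kind are commonly expressed as a colour difference in CIELAB space. The sketch below shows the standard Delta E (CIE76) distance between two L*a*b* readings; the readings themselves are made-up values for illustration, not data from the paper.

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colours (Delta E, CIE76),
    a common metric for comparing colour readings from two instruments.
    The readings used below are hypothetical."""
    return math.dist(lab1, lab2)

colorimeter = (45.0, 20.0, 15.0)   # hypothetical L*a*b* reading
vision      = (46.0, 21.0, 13.0)   # hypothetical vision-system reading
print(round(delta_e_cie76(colorimeter, vision), 3))   # 2.449
```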

  8. Operator-coached machine vision for space telerobotics

    NASA Technical Reports Server (NTRS)

    Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.

    1991-01-01

    A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system which would demonstrate the feasibility of highly interactive operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
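
    The edge data in this system come from a modified Sobel operator. The sketch below shows only the classic, unmodified 3x3 Sobel gradient magnitude for illustration; the paper's modification is not specified here.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude using the standard 3x3 Sobel kernels.
    This is the classic operator, not the paper's modified version."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)   # horizontal gradient
            gy = np.sum(ky * patch)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# A vertical step edge: the response peaks along the edge.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
print(mag.max())   # 4.0 at the edge
```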

  9. Handheld pose tracking using vision-inertial sensors with occlusion handling

    NASA Astrophysics Data System (ADS)

    Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried

    2016-07-01

    Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust against illumination changes. Three data fusion methods have been proposed: a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves high accuracy, within a few centimeters in position estimation and a few degrees in orientation estimation.
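
    The triangulation-based methods above recover depth from the disparity between the two camera views. For a rectified stereo pair, depth follows the textbook relation Z = f·B/d; the focal length, baseline, and pixel coordinates below are illustrative values, not the paper's calibration.

```python
def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Depth of a point seen by a rectified stereo pair:
    Z = f * B / d, with disparity d = x_left - x_right.
    A textbook relation; parameters here are illustrative."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("zero/negative disparity: point at infinity or mismatch")
    return focal_px * baseline_m / disparity

# f = 800 px, baseline = 0.1 m, disparity = 20 px -> point 4 m away.
print(depth_from_disparity(800.0, 0.1, 340.0, 320.0))   # 4.0
```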

  10. Informational Leadership...Leading with the End in Mind

    ERIC Educational Resources Information Center

    Sommers, Denise

    2009-01-01

    The leadership of any organization is responsible for setting and communicating a mission, an inspiring vision and a set of core values. The leadership is also responsible for establishing a management system to achieve the missions and vision while adhering to core values. Many organizations do an excellent job of creating the mission, vision and…

  11. Information Leadership... Leading with the End in Mind

    ERIC Educational Resources Information Center

    Sommers, Denise

    2009-01-01

    The leadership of any organization is responsible for setting and communicating a mission, an inspiring vision and a set of core values. The leadership is also responsible for establishing a management system to achieve the missions and vision while adhering to core values. Many organizations do an excellent job of creating the mission, vision and…

  12. Benefit from NASA

    NASA Image and Video Library

    1985-01-01

    NASA image processing technology, an advanced computer technique for enhancing images sent to Earth in digital form by distant spacecraft, helped develop a new vision screening process. The Ocular Vision Screening system, an important step in preventing vision impairment, is a portable device designed especially to detect eye problems in children through the analysis of retinal reflexes.

  13. Health systems analysis of eye care services in Zambia: evaluating progress towards VISION 2020 goals.

    PubMed

    Bozzani, Fiammetta Maria; Griffiths, Ulla Kou; Blanchet, Karl; Schmidt, Elena

    2014-02-28

    VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators be evaluated simultaneously, as they are not individually useful for monitoring progress.
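
    The workforce figures above are densities per 250,000 population, as used in the VISION 2020 process indicators. The sketch below reproduces the arithmetic; the population figure is an approximation for Zambia in 2011, assumed for illustration.

```python
def per_250k(count, population):
    """Workforce density as used in VISION 2020 indicators:
    providers per 250,000 population."""
    return count * 250_000 / population

zambia_2011 = 13_200_000   # approximate population, assumed here
print(round(per_250k(18, zambia_2011), 2))   # 0.34 ophthalmologists per 250k
```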

  14. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and exercise training. Multiple image processing and computer vision technologies are used in this study. The system can calculate the color characteristics of an object and then perform color segmentation. When an action judgment is wrong, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best action judgment by weighted voting. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
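
    The weight voting mechanism can be sketched as follows: each candidate action judgment carries a condition score and a weight, and the action with the highest weighted total wins. The action names, scores, and weights here are made up for illustration; the paper's actual scoring scheme is not specified.

```python
def weighted_vote(judgments):
    """Choose the action whose summed score*weight is highest --
    a minimal sketch of a weight voting mechanism. Inputs are
    (action, score, weight) triples; values below are hypothetical."""
    totals = {}
    for action, score, weight in judgments:
        totals[action] = totals.get(action, 0.0) + score * weight
    return max(totals, key=totals.get)

candidates = [("jump", 0.6, 1.0), ("wave", 0.9, 0.5), ("jump", 0.3, 1.0)]
print(weighted_vote(candidates))   # "jump": 0.9 total vs "wave": 0.45
```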

  15. [Quality of life of visually impaired adults after low-vision intervention: a pilot study].

    PubMed

    Fintz, A-C; Gottenkiene, S; Speeg-Schatz, C

    2011-10-01

    To demonstrate the benefits of a low-vision intervention on the quality of life of visually disabled adults. The survey was proposed to patients who sought a low-vision intervention at the Colmar and Strasbourg hospital centres over a period of 9 months. Patients in agreement with the survey were asked to complete the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ25) in interview format by telephone, once after they had attended the first meeting and again 2 months after the end of the low-vision intervention. The low-vision intervention led to overall improvement as judged by the 25 items of the questionnaire. Some items involving visual function and psychological issues showed significant benefits: the patients reported a more optimistic score concerning their general vision, reported improved performance of near-vision activities, and felt somewhat more autonomous. More than mainstream psychological counselling, low-vision services help patients cope with visual disabilities in their daily life. The low-vision intervention improves the physical and technical skills necessary to retain autonomy in daily life. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  16. A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery.

    PubMed

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J

    2014-09-26

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.
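
    The first stage of the pipeline above, segmenting retro-reflective regions from the actively illuminated image, can be sketched as a simple intensity threshold. The later stages (feature description and classification of safety garments) are omitted here, and the threshold value is an assumption.

```python
import numpy as np

def segment_reflective(gray, threshold=200):
    """Segment candidate retro-reflective regions by intensity
    thresholding -- only the first stage of a detection pipeline;
    the threshold is an assumed value."""
    return gray >= threshold

frame = np.full((6, 6), 40, dtype=np.uint8)   # dim background
frame[2:4, 2:5] = 250                          # bright reflective patch
print(int(segment_reflective(frame).sum()))    # 6 candidate pixels
```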

  17. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  18. A digital retina-like low-level vision processor.

    PubMed

    Mertoguno, S; Bourbakis, N G

    2003-01-01

    This correspondence presents the basic design and the simulation of a low-level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k×m identical, autonomous hexagonal cells that simultaneously execute certain low-level vision tasks. Thus, the hardware design and transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples, are provided in this paper.

  19. Flexible Wing Base Micro Aerial Vehicles: Vision-Guided Flight Stability and Autonomy for Micro Air Vehicles

    NASA Technical Reports Server (NTRS)

    Ettinger, Scott M.; Nechyba, Michael C.; Ifju, Peter G.; Wazak, Martin

    2002-01-01

    Substantial progress has been made recently towards designing, building and test-flying remotely piloted Micro Air Vehicles (MAVs). We seek to complement this progress in overcoming the aerodynamic obstacles to flight at very small scales with a vision-based stability and autonomy system. The developed system is based on a robust horizon detection algorithm, which we discuss in greater detail in a companion paper. In this paper, we first motivate the use of computer vision for MAV autonomy, arguing that, given current sensor technology, vision may be the only practical approach to the problem. We then briefly review our statistical vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification. Next we develop robust schemes for the detection of extreme MAV attitudes, where no horizon is visible, and for the detection of horizon estimation errors due to external factors such as video transmission noise. Finally, we discuss our feedback controller for self-stabilized flight, and report results on vision-guided autonomous flights exceeding ten minutes in duration.
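
    The idea behind horizon detection can be illustrated with a toy version: choose the image row that best separates a brighter region (sky) above from a darker region (ground) below. The paper's actual algorithm is a statistical colour-based method; the brightness-gap criterion and test values below are simplifying assumptions.

```python
import numpy as np

def horizon_row(gray):
    """Estimate the horizon as the row split maximizing the mean
    brightness gap between the region above (sky) and below (ground).
    A toy analogue of a statistical horizon detector."""
    best_row, best_gap = 1, -np.inf
    for r in range(1, gray.shape[0]):
        gap = gray[:r].mean() - gray[r:].mean()
        if gap > best_gap:
            best_gap, best_row = gap, r
    return best_row

img = np.vstack([np.full((3, 8), 220.0),   # bright "sky" rows
                 np.full((5, 8), 60.0)])   # dark "ground" rows
print(horizon_row(img))   # 3: the true sky/ground boundary
```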

  20. Ethical, environmental and social issues for machine vision in manufacturing industry

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Whelan, Paul F.

    1995-10-01

    Some of the ethical, environmental and social issues relating to the design and use of machine vision systems in manufacturing industry are highlighted. The authors' aim is to emphasize some of the more important issues and raise general awareness of the need to consider the potential advantages and hazards of machine vision technology. However, in a short article like this, it is impossible to cover the subject comprehensively. This paper should therefore be seen as a discussion document, which it is hoped will provoke more detailed consideration of these very important issues. It follows from an article presented at last year's workshop. Five major topics are discussed: (1) The impact of machine vision systems on the environment; (2) The implications of machine vision for product and factory safety, and the health and well-being of employees; (3) The importance of intellectual integrity in a field requiring a careful balance of advanced ideas and technologies; (4) Commercial and managerial integrity; and (5) The impact of machine vision technology on employment prospects, particularly for people with low skill levels.
