Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level task (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers, and this thesis addresses several issues in parallel architectures and parallel algorithms for integrated vision systems.
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates, connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
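The Laplacian-of-Gaussian edge detector described in this abstract is the kind of algorithm typically prototyped in software before being synthesized for the FPGA. The Python sketch below is an illustration under simplified assumptions (no kernel normalization, borders left unprocessed), not the camera's actual VHDL implementation: it samples a LoG kernel at one scale and convolves it with an image, so that edges appear as zero crossings in the response.

```python
import math

def log_kernel(sigma, radius):
    """Sample the Laplacian-of-Gaussian at integer offsets."""
    k = []
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-radius, radius + 1):
            r2 = x * x + y * y
            s2 = sigma * sigma
            row.append((r2 - 2 * s2) / (s2 * s2) * math.exp(-r2 / (2 * s2)))
        k.append(row)
    return k

def convolve(img, k):
    """Correlate a symmetric kernel with a grayscale image (interior only)."""
    rad = len(k) // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(rad, h - rad):
        for x in range(rad, w - rad):
            out[y][x] = sum(k[dy + rad][dx + rad] * img[y + dy][x + dx]
                            for dy in range(-rad, rad + 1)
                            for dx in range(-rad, rad + 1))
    return out
```

A multi-scale version would repeat the convolution for several values of `sigma` and combine the zero-crossing maps.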
FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision
Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe
2011-01-01
Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features found in mammalian visual systems, which would demand huge computational resources and therefore are not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
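As an illustration of the image-moment primitives such a sensor combines, here is a minimal Python sketch of raw image moments and the centroid derived from them. This is a generic moments example, not a reproduction of the paper's orthogonal variant moments or its optical-flow stage.

```python
def raw_moment(img, p, q):
    """Raw moment m_pq = sum over pixels of x^p * y^q * I(y, x)."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def centroid(img):
    """Intensity centroid (x_bar, y_bar) from first-order moments."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
```

Tracking how such moments change between frames gives a cheap motion cue that can complement dense optical flow.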
Implementing An Image Understanding System Architecture Using Pipe
NASA Astrophysics Data System (ADS)
Luck, Randall L.
1988-03-01
This paper describes PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low-level vision and high-level vision. Low-level vision is performed by PIPE, a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High-level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory-mapped into the high-level processor; thus it forms the high-speed link between the low- and high-level vision processors. The mechanisms for bottom-up, data-driven processing and top-down, model-driven processing are discussed.
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain's ability to emulate similar graph/network models has been found, which marks an important paradigm shift in our knowledge about the brain: from neural networks to "cortical software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes that region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of such analysis. This approach gives the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the what and where visual pathways. Such systems can open new horizons for the robotics and computer vision industries.
Dynamic programming and graph algorithms in computer vision.
Felzenszwalb, Pedro F; Zabih, Ramin
2011-04-01
Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
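A minimal sketch of the kind of dynamic program used for low-level stereo: estimate a disparity per pixel along one scanline by minimizing a data term plus a smoothness penalty with a 1-D Viterbi-style recursion. This generic Python formulation is an illustration of the technique class the paper reviews, not any specific algorithm from it.

```python
def scanline_stereo(left, right, max_disp, smooth=1.0):
    """DP over one scanline: minimize |left[x] - right[x-d]|
    plus smooth * |d - d_prev| between neighboring pixels."""
    n = len(left)
    INF = float("inf")

    def data(x, d):
        return abs(left[x] - right[x - d]) if x - d >= 0 else INF

    cost = [[data(0, d) for d in range(max_disp + 1)]]
    back = []
    for x in range(1, n):
        row, brow = [], []
        for d in range(max_disp + 1):
            best, arg = min((cost[-1][pd] + smooth * abs(d - pd), pd)
                            for pd in range(max_disp + 1))
            row.append(best + data(x, d))
            brow.append(arg)
        cost.append(row)
        back.append(brow)
    # Backtrack from the cheapest final state.
    d = min(range(max_disp + 1), key=lambda k: cost[-1][k])
    disp = [d]
    for brow in reversed(back):
        d = brow[d]
        disp.append(d)
    return disp[::-1]
```

Real stereo DP formulations add occlusion states and operate on window-based matching costs, but the recursion has this shape.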
Misimi, E; Mathiassen, J R; Erikson, U
2007-01-01
A computer vision method was used to evaluate the color of Atlantic salmon (Salmo salar) fillets. Computer vision-based sorting of fillets according to their color was studied on 2 separate groups of salmon fillets. The images of fillets were captured using a high-resolution digital camera. Images of salmon fillets were then segmented into regions of interest and analyzed in red, green, and blue (RGB) and CIE lightness, redness, and yellowness (Lab) color spaces, and classified according to the Roche color card industrial standard. Comparisons between visual evaluations made by a panel of human inspectors, according to the Roche SalmoFan lineal standard, and the color scores generated by the computer vision algorithm showed that there were no significant differences between the methods. Overall, computer vision can be used as a powerful tool to sort fillets by color in a fast and nondestructive manner. The low cost of implementing computer vision solutions creates the potential to replace manual labor in fish processing plants with automation.
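The classification step can be sketched as: average the color of the segmented fillet pixels, then assign the nearest reference card color. In the Python sketch below, the score numbers and reference RGB triples are hypothetical placeholders, not actual Roche SalmoFan values, and real systems would match in the Lab space described above rather than raw RGB.

```python
def mean_rgb(img, mask):
    """Average RGB over segmented fillet pixels (mask truthy)."""
    px = [img[y][x]
          for y in range(len(img))
          for x in range(len(img[0])) if mask[y][x]]
    n = len(px)
    return tuple(sum(c[i] for c in px) / n for i in range(3))

def nearest_score(rgb, references):
    """Return the reference score whose color is closest (squared
    Euclidean distance) to the measured mean color."""
    return min(references,
               key=lambda s: sum((a - b) ** 2
                                 for a, b in zip(rgb, references[s])))
```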
Computational approaches to vision
NASA Technical Reports Server (NTRS)
Barrow, H. G.; Tenenbaum, J. M.
1986-01-01
Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.
Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat
Casanova, Joaquin J.; O'Shaughnessy, Susan A.; Evett, Steven R.; Rush, Charles M.
2014-01-01
Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications. PMID:25251410
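The hue values reported above (118.32 for unstressed versus 111.34 for stressed wheat) correspond to the standard HSV hue angle in degrees, where pure green sits at 120. A minimal Python version of the RGB-to-hue conversion is shown below (the EM segmentation step that selects vegetation pixels is not reproduced here):

```python
def hue_degrees(r, g, b):
    """HSV hue in degrees for 8-bit RGB values."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:          # achromatic pixel: hue undefined, use 0
        return 0.0
    d = mx - mn
    if mx == r:
        h = ((g - b) / d) % 6
    elif mx == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    return 60 * h
```

Averaging this hue over vegetation pixels gives the per-image statistic compared across irrigation and infection treatments.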
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, the approach based on modeling biological vision mechanisms is being extensively developed in computer vision. However, real-world image processing still has no effective solution within either the biologically inspired or the conventional approach. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.
Review On Applications Of Neural Network To Computer Vision
NASA Astrophysics Data System (ADS)
Li, Wei; Nasrabadi, Nasser M.
1989-03-01
Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the input data form; other systems feed raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.
Luo, Jiebo; Boutell, Matthew
2005-05-01
Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, impressive given the findings of a psychophysical study conducted recently. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
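Confidence-based Bayesian integration of cues can be sketched as: for each candidate orientation (0, 90, 180, 270 degrees), multiply the prior by each cue's likelihood and normalize. The Python example below is a generic naive-Bayes combination standing in for the paper's specific probabilistic model; the cue values are assumptions for illustration.

```python
def integrate_cues(priors, likelihoods):
    """Posterior over candidate orientations given independent cues:
    posterior(o) proportional to prior(o) * product of cue(o)."""
    post = dict(priors)
    for cue in likelihoods:
        for o in post:
            post[o] *= cue[o]
    z = sum(post.values())
    return {o: p / z for o, p in post.items()}
```

Low-level color/texture cues and semantic cues (e.g., detected faces or sky) would each contribute one likelihood table, weighted by their confidence.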
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating, and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic, and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase, and installation of a MasPar parallel computer.
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in a few instruction cycles and therefore satisfy low- and mid-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.
A fuzzy structural matching scheme for space robotics vision
NASA Technical Reports Server (NTRS)
Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka
1994-01-01
In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the subsequent low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.
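Fuzzy matching of region properties can be illustrated with the standard min-max similarity between fuzzy membership vectors; this generic measure is an assumption for illustration, not the scheme's actual formulation.

```python
def fuzzy_similarity(a, b):
    """Min-max (Jaccard-style) similarity between two fuzzy
    membership vectors; 1.0 for identical, 0.0 for disjoint."""
    num = sum(min(x, y) for x, y in zip(a, b))
    den = sum(max(x, y) for x, y in zip(a, b))
    return num / den if den else 1.0
```

Pruning candidate region pairs whose similarity falls below a threshold is what reduces the burden on the subsequent pixel-level matching.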
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2003-08-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. The human brain's ability to emulate knowledge structures in the form of network-symbolic models has been found, which marks an important paradigm shift in our knowledge about the brain: from neural networks to "cortical software". Symbols, predicates, and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.
High-fidelity, low-cost, automated method to assess laparoscopic skills objectively.
Gray, Richard J; Kahol, Kanav; Islam, Gazi; Smith, Marshall; Chapital, Alyssa; Ferrara, John
2012-01-01
We sought to define the extent to which a motion analysis-based assessment system constructed with simple equipment could measure technical skill objectively and quantitatively. An off-the-shelf digital video system was used to capture the hand and instrument movement of surgical trainees (beginner level = PGY-1, intermediate level = PGY-3, and advanced level = PGY-5/fellows) while they performed a peg transfer exercise. The video data were passed through a custom computer vision algorithm that analyzed incoming pixels to measure movement smoothness objectively. The beginner-level group had the poorest performance, whereas the advanced group generated the highest scores. Intermediate-level trainees scored significantly (p < 0.04) better than beginner trainees. Advanced-level trainees scored significantly better than intermediate-level and beginner-level trainees (p < 0.04 and p < 0.03, respectively). A computer vision-based analysis of surgical movements provides an objective basis for expertise-level analysis with construct validity. The technology to capture the data is simple, low cost, and readily available, and it obviates the need for expert human assessment in this setting.
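The abstract does not specify the smoothness measure, so the Python sketch below uses negative mean squared jerk (the third derivative of position), a common movement-smoothness proxy, over a 1-D trajectory sampled from video; smoother motion scores closer to zero. Treat it as an assumption standing in for the authors' algorithm.

```python
def smoothness(positions, dt):
    """Negative mean squared jerk of a uniformly sampled 1-D
    trajectory; higher (closer to 0) means smoother movement."""
    def diff(xs):
        return [(b - a) / dt for a, b in zip(xs, xs[1:])]
    jerk = diff(diff(diff(positions)))   # third finite difference
    return -sum(j * j for j in jerk) / len(jerk)
```

In practice the hand/instrument positions would come from per-frame tracking, and per-axis scores would be combined.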
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active: capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than that of the biological visual pathway, it does retain some essential features, such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, where each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
Low computation vision-based navigation for a Martian rover
NASA Technical Reports Server (NTRS)
Gavin, Andrew S.; Brooks, Rodney A.
1994-01-01
Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.
Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G
2017-12-01
Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC), and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found HAL unaffected when the DC error was less than 5%; a DC error of less than 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle, and hand activity level from videos of workers performing industrial tasks.
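Given a per-frame binary exertion signal (the kind of labeling the video algorithms produce), exertion time, duty cycle, and exertion frequency follow directly. The Python sketch below is a generic illustration of that bookkeeping, not the DT or FVT algorithm itself.

```python
def duty_cycle_and_freq(exertion, dt):
    """Duty cycle (% of time exerting) and exertion frequency (Hz)
    from a per-frame 0/1 exertion signal sampled every dt seconds."""
    n = len(exertion)
    dc = 100.0 * sum(exertion) / n
    # Count exertion onsets: 0 -> 1 transitions, plus an initial 1.
    onsets = sum(1 for a, b in zip(exertion, exertion[1:]) if not a and b)
    if exertion and exertion[0]:
        onsets += 1
    return dc, onsets / (n * dt)
```

Duty cycle and exertion frequency are the two inputs from which HAL is then rated.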
Hierarchical layered and semantic-based image segmentation using ergodicity map
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing
2010-04-01
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchically layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, and edges) by utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment, where the segmented layered semantic objects include basic-level objects (i.e., sky/land/water) and deeper-level objects in the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
Smartphone, tablet computer and e-reader use by people with vision impairment.
Crossland, Michael D; Silva, Rui S; Macedo, Antonio F
2014-09-01
Consumer electronic devices such as smartphones, tablet computers, and e-book readers have become far more widely used in recent years. Many of these devices contain accessibility features such as large print and speech. Anecdotal experience suggests people with vision impairment frequently make use of these systems. Here we survey people with self-identified vision impairment to determine their use of this equipment. An internet-based survey was advertised to people with vision impairment by word of mouth, social media, and online. Respondents were asked demographic information, what devices they owned, what they used these devices for, and what accessibility features they used. One hundred and thirty-two complete responses were received. Twenty-six percent of the sample reported that they had no vision and the remainder reported they had low vision. One hundred and seven people (81%) reported using a smartphone. Those with no vision were as likely to use a smartphone or tablet as those with low vision. Speech was found useful by 59% of smartphone users. Fifty-one percent of smartphone owners used the camera and screen as a magnifier. Forty-eight percent of the sample used a tablet computer, and 17% used an e-book reader. The most frequently cited reasons for not using these devices included cost and lack of interest. Smartphones, tablet computers, and e-book readers can be used by people with vision impairment. Speech is used by people with low vision as well as those with no vision. Many of our (self-selected) group used their smartphone camera and screen as a magnifier, and others used the camera flash as a spotlight.
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Kriegeskorte, Nikolaus
2015-11-24
Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
ERIC Educational Resources Information Center
Rosner, Yotam; Perlman, Amotz
2018-01-01
Introduction: The Israel Ministry of Social Affairs and Social Services subsidizes computer-based assistive devices for individuals with visual impairments (that is, those who are blind or have low vision) to assist these individuals in their interactions with computers and thus to enhance their independence and quality of life. The aim of this…
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power, high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, including image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second of computing power, a two-orders-of-magnitude increase over state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
... a treatment is discovered, help is available through low-vision aids, including optical, electronic, and computer-based devices. ...
Mid-Level Vision and Recognition of Non-Rigid Objects.
1993-01-01
and the author perhaps asked to account for its lack of rigor. In computer vision, the critic often requires that the author provide particular runs ...shown here were run at 4 x 1.5 deg. Note that it is unclear though if only even-symmetric filters are needed for Contour Texture as proposed there for 2D...the contrast is low. However, coloring runs into problems if the contour is not fully connected or if the inner side of the contour is hard to
... magnifying reading glasses or loupes for seeing the computer screen, sheet music, or for sewing telescopic glasses ... for the Blind services. The Low Vision Pilot Project The American Foundation for the Blind (AFB) has ...
(Computer) Vision without Sight
Manduchi, Roberto; Coughlan, James
2012-01-01
Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology. PMID:22815563
Low Vision Aids and Low Vision Rehabilitation
... SeeingAI), magnify, or illuminate. Another app, EyeNote, is free for Apple products. It scans and identifies the denomination of U.S. paper money. Computers that can read aloud or magnify what ...
Wolff, J Gerard
2014-01-01
The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.
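The abstract's link between uniform image areas and run-length encoding can be sketched in a few lines: runs of identical pixel values compress, and the boundaries between runs mark candidate edges. The function names and toy row below are illustrative, not part of the SP theory itself.

```python
def run_length_encode(row):
    """Compress a sequence of pixel values into (value, count) runs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def run_boundaries(row):
    """Candidate edges fall where one run ends and the next begins."""
    runs = run_length_encode(row)
    edges, pos = [], 0
    for v, n in runs[:-1]:
        pos += n
        edges.append(pos)
    return edges

row = [0, 0, 0, 255, 255, 0, 0]
print(run_length_encode(row))  # [(0, 3), (255, 2), (0, 2)]
print(run_boundaries(row))     # [3, 5]
```

Uniform regions compress to almost nothing, so the residual (non-redundant) positions are exactly the edges, which is the intuition the abstract appeals to.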
Minkara, Mona S; Weaver, Michael N; Gorske, Jim; Bowers, Clifford R; Merz, Kenneth M
2015-08-11
Blind and low-vision students are sparsely represented in science, technology, engineering, and mathematics (STEM) fields. This is due in part to these individuals being discouraged from pursuing STEM degrees, as well as a lack of appropriate adaptive resources in upper-level STEM courses and research. Mona Minkara is a rising fifth-year graduate student in computational chemistry at the University of Florida. She is also blind. This account presents efforts conducted by an expansive team of university and student personnel, in conjunction with Mona, to adapt different portions of the graduate student curriculum to meet Mona's needs. The most important consideration is prior preparation of materials to assist with coursework and cumulative exams. Herein we present an account of the first four years of Mona's graduate experience, in the hope that this will assist in the development of protocols for future blind and low-vision graduate students in computational chemistry.
Image Processing Occupancy Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
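As a rough illustration of image-based occupancy detection of the kind IPOS performs, a background-differencing check can flag occupancy even without motion between consecutive frames. The thresholds, names, and differencing scheme below are my own simplified assumptions, not NREL's implementation.

```python
def change_mask(frame, background, thresh=25):
    """Per-pixel change mask of a grayscale frame (2-D list of intensities)
    against a reference background frame."""
    return [[abs(f - b) > thresh for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def is_occupied(frame, background, min_fraction=0.01, thresh=25):
    """Declare the space occupied when enough pixels deviate from the
    background model; a stand-in for IPOS's richer classification."""
    mask = change_mask(frame, background, thresh)
    changed = sum(v for row in mask for v in row)
    total = len(mask) * len(mask[0])
    return changed / total > min_fraction
```

Because the comparison is against a stored background rather than the previous frame, a stationary occupant keeps registering, which is the key advantage the abstract claims over PIR motion sensing.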
MER-DIMES : a planetary landing application of computer vision
NASA Technical Reports Server (NTRS)
Cheng, Yang; Johnson, Andrew; Matthies, Larry
2005-01-01
During the Mars Exploration Rover (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter, and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify the image data to a level ground plane. Feature selection and tracking are employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust, and computationally efficient velocity estimation solution, and it is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.
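The feature-tracking and inertial-consistency steps described in the abstract can be sketched roughly as follows. This is a minimal sum-of-squared-differences matcher plus an agreement check, not the actual DIMES flight code; all names and tolerances are illustrative.

```python
import math

def track_feature(template, search):
    """Locate a small template patch in a larger search window by exhaustive
    sum-of-squared-differences matching (both are 2-D lists of intensities)."""
    th, tw = len(template), len(template[0])
    sh, sw = len(search), len(search[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            ssd = sum((search[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

def track_is_consistent(measured_shift, inertial_shift, tol=2.0):
    """Accept a track only if it agrees with the inertially predicted motion,
    mirroring the verification step described in the abstract."""
    dr = measured_shift[0] - inertial_shift[0]
    dc = measured_shift[1] - inertial_shift[1]
    return math.hypot(dr, dc) <= tol
```

The consistency check is what makes the scheme robust: a mistracked feature that disagrees with the IMU prediction is simply discarded rather than corrupting the velocity estimate.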
Knowledge-based low-level image analysis for computer vision systems
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.
1988-01-01
Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.
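The second algorithm's idea of extracting regions adaptively from selected windows can be illustrated with a toy version that thresholds each window at its own mean intensity. The mean-based rule is a stand-in assumption; the paper's actual local-histogram rule is more elaborate.

```python
def segment_windows(image, win=4):
    """Label each pixel by thresholding its window at the window's own mean
    intensity, so the segmentation adapts to local image properties.
    `image` is a 2-D list of grayscale values."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r0 in range(0, h, win):
        for c0 in range(0, w, win):
            block = [image[r][c]
                     for r in range(r0, min(r0 + win, h))
                     for c in range(c0, min(c0 + win, w))]
            t = sum(block) / len(block)
            for r in range(r0, min(r0 + win, h)):
                for c in range(c0, min(c0 + win, w)):
                    out[r][c] = 1 if image[r][c] > t else 0
    return out
```

Because each window picks its own threshold, a globally dark or unevenly lit image still segments sensibly, which is the local-property flexibility the abstract emphasizes.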
Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.
Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe
2015-07-01
Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored by current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter the following two problems: 1) extracted low level features cannot reveal the true semantic concepts; and 2) they usually involve high dimensional data which require heavy computational costs and memory. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between the low level visual features and high level concepts; while dimensionality reduction methods can mitigate the high-dimensional problem. In this paper, we propose a building recognition scheme which integrates the RF and subspace learning algorithms. Experimental results undertaken on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.
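Relevance feedback of the general kind invoked above is often illustrated with the classic Rocchio update, which nudges a query's feature vector toward user-approved results and away from rejected ones. The weights below are conventional textbook defaults, not values from this paper.

```python
def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Move a query feature vector toward examples the user marked relevant
    and away from those marked non-relevant (classic Rocchio relevance
    feedback). All vectors are plain lists of floats."""
    def mean(vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]
    pos, neg = mean(relevant), mean(nonrelevant)
    return [alpha * q + beta * p - gamma * m
            for q, p, m in zip(query, pos, neg)]
```

Iterating this update is one concrete way low-level visual features get pulled toward the user's high-level concept, which is the gap-bridging role the abstract assigns to RF.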
Reading Digital with Low Vision
Legge, Gordon E.
2017-01-01
Reading difficulty is a major consequence of vision loss for more than four million Americans with low vision. Difficulty in accessing print imposes obstacles to education, employment, social interaction and recreation. In recent years, research in vision science has made major strides in understanding the impact of low vision on reading, and the dependence of reading performance on text properties. The ongoing transition to the production and distribution of digital documents brings about new opportunities for people with visual impairment. Digital documents on computers and mobile devices permit customization of print size, spacing, font style, contrast polarity and page layout to optimize reading displays for people with low vision. As a result, we now have unprecedented opportunities to adapt text format to meet the needs of visually impaired readers. PMID:29242668
Aslan, Ummuhan Bas; Calik, Bilge Basakcı; Kitiş, Ali
2012-01-01
This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between 8 and 16 years participated in the study. The physical activity level of cases was evaluated with a physical activity diary (PAD) and one-mile run/walk test (OMR-WT). No difference was found between the PAD and the OMR-WT results of low vision and blind children and adolescents. The visually impaired children and adolescents were detected not to participate in vigorous physical activity. A difference was found in favor of low vision boys in terms of mild, moderate activities and OMR-WT durations. However, no difference was found between physical activity levels of blind girls and boys. The results of our study suggested that the physical activity level of visually impaired children and adolescents was low, and gender affected physical activity in low vision children and adolescents. Copyright © 2012 Elsevier Ltd. All rights reserved.
Comparing visual representations across human fMRI and computational vision
Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.
2013-01-01
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
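Representational dissimilarity analysis as used above can be sketched in miniature: build a matrix of 1 - correlation between the response patterns for each pair of stimuli, for both the neural data and each model, then compare the matrices. The helper names and toy patterns below are illustrative only.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between the
    response patterns evoked by every pair of stimuli."""
    n = len(patterns)
    return [[1.0 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]
```

Because an RDM abstracts away the underlying feature space, a model's RDM (over 60 images here) can be compared directly to an fMRI searchlight's RDM even though voxels and model features live in different spaces.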
Using Multiple FPGA Architectures for Real-time Processing of Low-level Machine Vision Functions
Thomas H. Drayer; William E. King; Philip A. Araman; Joseph G. Tront; Richard W. Conners
1995-01-01
In this paper, we investigate the use of multiple Field Programmable Gate Array (FPGA) architectures for real-time machine vision processing. The use of FPGAs for low-level processing represents an excellent tradeoff between software and special purpose hardware implementations. A library of modules that implement common low-level machine vision operations is presented...
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
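The identification step described above, matching feature values decoded from fMRI against a candidate set of computed features, can be sketched as a correlation-based nearest-neighbor lookup. This is a simplified version; the names and toy data are illustrative, not from the paper.

```python
def identify_category(predicted, candidates):
    """Identify the seen/imagined category by picking the candidate feature
    vector most correlated with the decoded feature values."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (vx * vy)
    scores = [pearson(predicted, c) for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)
```

Because the candidate set can include feature vectors for categories never used in decoder training, this lookup is what lets the approach generalize beyond the training examples.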
Barriers to accessing low vision services.
Pollard, Tamara L; Simpson, John A; Lamoureux, Ecosse L; Keeffe, Jill E
2003-07-01
To investigate barriers to accessing low vision services in Australia. Adults with a vision impairment (<6/12 in the better eye and/or significant visual field defect), who were current patients at the Royal Victorian Eye and Ear Hospital (RVEEH), were interviewed. The questions investigated self-perceived vision difficulties, duration of vision loss and satisfaction with vision and also examined issues of awareness of low vision services and referral to services. Focus groups were also conducted with vision impaired (<6/12 in the better eye) patients from the RVEEH, listeners of the Radio for the Print Handicapped and peer workers at Vision Australia Foundation. The discussions were recorded and transcribed. The questionnaire revealed that referral to low vision services was associated with a greater degree of vision loss (p = 0.002) and a greater self-perception of low vision (p = 0.005) but that referral was not associated with satisfaction (p = 0.144) or difficulties related to vision (p = 0.169). Participants with mild and moderate vision impairment each reported similar levels of difficulties with daily activities and satisfaction with their vision (p > 0.05). However, there was a significant difference in the level of difficulties experienced with daily activities between those with mild-moderate and severe vision impairment (p < 0.05). The participants of the focus groups identified barriers to accessing low vision services related to awareness of services among the general public and eye care professionals, understanding of low vision and the services available, acceptance of low vision, the referral process, and transport. In addition to the expected difficulties with lack of awareness of services by people with low vision, many people do not understand what the services provide and do not identify themselves as having low vision. Knowledge of these barriers, from the perspective of people with low vision, can now be used to guide the development and content of future health-promotion campaigns.
Topographic Mapping of Residual Vision by Computer
ERIC Educational Resources Information Center
MacKeben, Manfred
2008-01-01
Many persons with low vision have diseases that damage the retina only in selected areas, which can lead to scotomas (blind spots) in perception. The most frequent of these diseases is age-related macular degeneration (AMD), in which foveal vision is often impaired by a central scotoma that impairs vision of fine detail and causes problems with…
Parameter Networks: Towards a Theory of Low-level Vision,
1981-04-01
levels such as those shown in Figure 7 to reorganize Origami world figures. Figure 7. To show an example in detail, Kender's technique for...Computer Science Dept, Carnegie-Mellon U., October 1979. Kanade, T., "A theory of Origami world," CMU-CS-78-144, Computer Science Dept, Carnegie
USC orthogonal multiprocessor for image processing with neural networks
NASA Astrophysics Data System (ADS)
Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid
1990-07-01
This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.
NASA Astrophysics Data System (ADS)
Van Damme, T.
2015-04-01
Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the 'traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable, and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, which is defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4.
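The pixel-level MAP formulation can be illustrated with a one-pixel toy version that assumes univariate Gaussian likelihoods for the background and player models; the actual system's statistical models and priors differ.

```python
import math

def map_label(pixel, bg_mean, bg_var, fg_mean, fg_var, p_fg=0.3):
    """MAP decision for one grayscale pixel between a background model and
    a foreground (player) model, each a univariate Gaussian here for
    simplicity. Returns 1 for foreground, 0 for background."""
    def log_gauss(x, mu, var):
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
    # Posterior (up to a shared constant) = likelihood * prior, in log space.
    post_bg = log_gauss(pixel, bg_mean, bg_var) + math.log(1.0 - p_fg)
    post_fg = log_gauss(pixel, fg_mean, fg_var) + math.log(p_fg)
    return 1 if post_fg > post_bg else 0
```

Running this decision per pixel gives the raw segmentation; the object-level pass described above then cleans it up using the spatial and temporal extent of the player.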
Integrated 3-D vision system for autonomous vehicles
NASA Astrophysics Data System (ADS)
Hou, Kun M.; Shawky, Mohamed; Tu, Xiaowei
1992-03-01
Autonomous vehicles have become a multidisciplinary field whose evolution takes advantage of recent technological progress in computer architectures. As development tools have become more sophisticated, the trend has been toward more specialized, or even dedicated, architectures. In this paper, we focus our interest on a parallel vision subsystem integrated in the overall system architecture. The system modules work in parallel, communicating through a hierarchical blackboard, an extension of the 'tuple space' from LINDA concepts, where they may exchange data or synchronization messages. The general-purpose processing elements are of different skills, built around 40 MHz i860 Intel RISC processors for high-level processing and pipelined systolic array processors based on PLAs or FPGAs for low-level processing.
A digital retina-like low-level vision processor.
Mertoguno, S; Bourbakis, N G
2003-01-01
This correspondence presents the basic design and simulation of a low-level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, each an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition, and region-graph generation. At each layer, the array processor is a 2D array of k × m identical, autonomous hexagonal cells that simultaneously execute certain low-level vision tasks. The hardware design and transistor-level simulation of the processing elements (PEs) of the retina-like processor, and its simulated functionality with illustrative examples, are provided in this paper.
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1992-01-01
The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
Enhanced computer vision with Microsoft Kinect sensor: a review.
Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie
2013-10-01
With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.
Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.
Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer
2015-11-01
Counting objects is a fundamental image processing primitive with many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data and, even then, fail to return accurate results in all but the most stylized settings. Using vanilla crowdsourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, using judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible of humans (under certain intuitively reasonable assumptions that we justify experimentally in our paper); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or in concert with them, depending on whether computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
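The hybrid dispatch idea described above can be sketched in a few lines. Everything here (the tile representation, the confidence rule, and the two counters) is an illustrative stand-in, not the JellyBean algorithms themselves:

```python
def hybrid_count(tiles, cv_count, crowd_count, confidence, threshold=0.9):
    """Sum per-tile counts, preferring the cheap machine answer when its
    confidence clears the threshold and escalating hard tiles to the crowd."""
    total = 0
    questions = 0  # tiles escalated to human workers
    for tile in tiles:
        if confidence(tile) >= threshold:
            total += cv_count(tile)     # trust computer vision on this tile
        else:
            total += crowd_count(tile)  # ask humans about this tile
            questions += 1
    return total, questions
```

In the hybrid mode the abstract describes, minimizing the number of questions while meeting an accuracy target is exactly the optimization at stake.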
Bener, Abdulbari; Al-Mahdi, Huda S; Vachhani, Pankit J; Al-Nufal, Mohammed; Ali, Awab I
2010-12-01
The aim of this study is to determine whether excessive internet use, television viewing and the ensuing poor lifestyle habits affect low vision in school children in a rapidly developing country. This is a cross-sectional study in which 3000 school students aged between six and 18 years were approached and 2467 (82.2%) participated. Of the studied school children, 12.6 percent had low vision. Most of the low vision school children were in the 6-10 years age group and came from middle income backgrounds (41.8%; p = 0.008). A large proportion of the children with low vision spent ≥ 3 hours per day on the internet (48.2%; p < 0.001) and ≥ 3 hours reclining (62.4%; p < 0.001). A significantly smaller proportion of the studied children with low vision participated in each of the reviewed forms of physical activity (p < 0.001), yet a larger proportion consumed fast food (86.8%; p < 0.001). Highly significant positive correlations were found between low vision and BMI, hours spent reclining and hours spent on the internet, respectively. Blurred vision was the most commonly reported symptom among the studied children (p < 0.001). The current study suggests a strong association between prolonged hours spent on the computer or TV, fast food consumption, poor lifestyle habits and low vision.
Mogol, Burçe Ataç; Gökmen, Vural
2014-05-01
Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with the acrylamide content of potato chips or cookies. Similarly, the porosity index, an important physical property of breadcrumb, can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products on a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
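As a toy illustration of the featured-colour idea, the sketch below classifies each pixel of an RGB image as "brown" by a rule-of-thumb threshold and reports the brown fraction. The thresholds and the brownness rule are invented for this sketch; the paper's actual algorithms (and its CIE a* computation) are not reproduced here:

```python
def browning_ratio(pixels):
    """pixels: iterable of (R, G, B) tuples in 0-255. Returns the fraction
    of pixels judged brown by a simple illustrative rule."""
    def is_brown(r, g, b):
        # brownish: red dominates green, green dominates blue, not too bright
        return r > g > b and r < 200
    pixels = list(pixels)
    brown = sum(1 for (r, g, b) in pixels if is_brown(r, g, b))
    return brown / len(pixels)
```

A real inspection line would compute such a ratio per product image and flag items whose value drifts outside an acceptance band.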
Quality grading of Atlantic salmon (Salmo salar) by computer vision.
Misimi, E; Erikson, U; Skavhaug, A
2008-06-01
In this study, we present a promising method for computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among different quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and feature extraction was then performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between classification by the human inspector and by computer vision. The computer vision-based method correctly classified 90% of the salmon in the data set relative to the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
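A threshold classifier evaluated by leave-one-out cross-validation, as in the grading approach above, can be sketched for a single geometric feature per fish. With one feature and equal-variance classes, linear discriminant analysis reduces to a midpoint-of-means threshold; the real system used several features, so this is only the skeleton of the evaluation protocol:

```python
def loo_accuracy(features, labels):
    """features: list of floats (one per fish); labels: list of 0/1 grades.
    Returns leave-one-out classification accuracy of a midpoint threshold."""
    correct = 0
    n = len(features)
    for i in range(n):                       # hold out sample i
        xs = [x for j, x in enumerate(features) if j != i]
        ys = [y for j, y in enumerate(labels) if j != i]
        m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
        m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
        thresh = (m0 + m1) / 2               # midpoint of class means
        pred = 1 if (features[i] > thresh) == (m1 > m0) else 0
        correct += pred == labels[i]
    return correct / n
```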
Variational optical flow estimation for images with spectral and photometric sensor diversity
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-03-01
Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.
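Variational optical flow methods of the kind this paper builds on descend from the classic Horn-Schunck formulation. The sketch below shows that baseline fixed-point iteration on precomputed image derivatives; the paper's actual functional, including its sensor-diversity terms, is not reproduced here, and the wrap-around neighbourhood average is a simplification:

```python
import numpy as np

def local_average(f):
    """4-neighbour average with wrap-around borders (simplified)."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

def horn_schunck(Ix, Iy, It, alpha=1.0, iters=200):
    """Classic Horn-Schunck iteration; Ix, Iy, It are image derivatives."""
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)
    for _ in range(iters):
        ubar, vbar = local_average(u), local_average(v)
        # brightness-constancy residual, shared by both flow updates
        resid = (Ix * ubar + Iy * vbar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ubar - Ix * resid
        v = vbar - Iy * resid
    return u, v
```

For spatially uniform derivatives the iteration converges to the normal flow -Ix*It/(Ix^2+Iy^2), which makes the scheme easy to sanity-check.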
NASA Technical Reports Server (NTRS)
Hung, Stephen H. Y.
1989-01-01
A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.
ERIC Educational Resources Information Center
Minkara, Mona S.; Weaver, Michael N.; Gorske, Jim; Bowers, Clifford R.; Merz, Kenneth M., Jr.
2015-01-01
There exists a sparse representation of blind and low-vision students in science, technology, engineering and mathematics (STEM) fields. This is due in part to these individuals being discouraged from pursuing STEM degrees as well as a lack of appropriate adaptive resources in upper level STEM courses and research. Mona Minkara is a rising fifth…
Salient contour extraction from complex natural scene in night vision image
NASA Astrophysics Data System (ADS)
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa
2014-03-01
The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The key idea is that multi-feature analysis can recognize the inhomogeneity in modulatory coverage more accurately, and that center-surround pairs whose grouping structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast weighted inhibition model is presented to suppress background and lower mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to achieve enhancement of contour response, connection of discontinuous contours, and further elimination of randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish the dynamic modulation process and extract contours of targets at multiple sizes. This work provides a series of biologically motivated, high-performance computational visual models for contour detection in cluttered night vision scenes.
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.
1971-01-01
Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.
NASA Astrophysics Data System (ADS)
Razdan, Vikram; Bateman, Richard
2015-05-01
This study investigates the use of a Smartphone and its camera vision capabilities in engineering metrology and flaw detection, with a view to developing a low-cost alternative to Machine vision systems, which are out of range for small-scale manufacturers. To be viable, a Smartphone has to provide a level of accuracy similar to that of Machine vision devices such as Smart cameras. The objective was to develop an App on an Android Smartphone incorporating advanced Computer vision algorithms written in Java code. The App could then be used for recording measurements of twist drill bits and hole geometry, and for analysing the results for accuracy. A detailed literature review was carried out for an in-depth study of Machine vision systems and their capabilities, including a comparison between the HTC One X Android Smartphone and the Teledyne Dalsa BOA Smart camera. A review of the existing metrology Apps on the market was also undertaken. In addition, the drilling operation was evaluated to establish key measurement parameters of a twist drill bit, especially flank wear and diameter. The methodology covers software development of the Android App, including the use of image processing algorithms such as Gaussian blur, Sobel and Canny available from the OpenCV software library, as well as the design and development of the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for the geometry of twist drill bits and holes, including diametrical measurements and flaw detection. The results show that Smartphones like the HTC One X have the processing power and camera capability to carry out metrological tasks, although the dimensional accuracy achievable from the Smartphone App is below the level provided by Machine vision devices like Smart cameras.
A Smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost Machine vision system for small scale manufacturers, especially in field metrology and flaw detection.
Gradual cut detection using low-level vision for digital video
NASA Astrophysics Data System (ADS)
Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae
1996-09-01
Digital video computing and organization is an important issue in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This approach requires a suitable method to automatically locate cut points in order to separate the shots in a video. Automatic cut detection to isolate shots has received considerable attention due to its many practical applications: video databases, browsing, authoring systems, retrieval, and movies. Previous studies are based on a set of difference mechanisms that measure the content changes between video frames, but they could not detect gradual special effects such as dissolve, wipe, fade-in, fade-out, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed, and experimental results on commercial video are presented and evaluated.
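A common way to realize gradual-transition detection is a twin-threshold test over inter-frame difference values: a high threshold flags abrupt cuts, while a low threshold starts accumulating differences until the accumulated change itself exceeds the high threshold. The sketch below illustrates that general scheme, not the paper's specific method; the thresholds and accumulation rule are illustrative:

```python
def detect_transitions(diffs, t_low=0.2, t_high=0.6):
    """diffs[i]: dissimilarity between frame i and i+1 (e.g. a histogram
    distance in [0, 1]). Returns a list of detected shot-boundary events."""
    events = []
    acc, start = 0.0, None
    for i, d in enumerate(diffs):
        if d >= t_high:
            events.append(("cut", i))            # abrupt cut
            acc, start = 0.0, None
        elif d >= t_low:
            if start is None:
                start = i                        # possible gradual onset
            acc += d
            if acc >= t_high:
                events.append(("gradual", start, i))
                acc, start = 0.0, None
        else:
            acc, start = 0.0, None               # change died out: reset
    return events
```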
Automated design of image operators that detect interest points.
Trujillo, Leonardo; Olague, Gustavo
2008-01-01
This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research.
Real-time machine vision system using FPGA and soft-core processor
NASA Astrophysics Data System (ADS)
Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad
2012-06-01
This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules run in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA-based machine vision system that we propose has a high frame rate, low latency and a power consumption much lower than that of commercially available smart camera solutions.
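The distance-and-angle computation itself follows from pinhole-camera geometry: two reference points a known real-world width apart yield range from their pixel separation, and bearing from their image offset. This is illustrative textbook geometry under an assumed calibrated focal length and principal point, not the paper's RTL implementation:

```python
import math

def distance_and_angle(p1, p2, real_width, focal_px, cx):
    """p1, p2: pixel (x, y) of two reference points; real_width: their
    true separation in metres; focal_px: focal length in pixels; cx:
    principal point x. Returns (distance_m, bearing_rad)."""
    pixel_width = math.dist(p1, p2)
    distance = focal_px * real_width / pixel_width   # similar triangles
    mid_x = (p1[0] + p2[0]) / 2
    angle = math.atan2(mid_x - cx, focal_px)         # bearing off optical axis
    return distance, angle
```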
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see but for many people with low vision, it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veteran Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase
Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling
2015-01-01
In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions are common, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533
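At the heart of RAIM is a residual-based consistency test: solve for position and clock by least squares over all measurements, then test the sum of squared post-fit residuals against a chi-square threshold. Vision landmarks enter simply as extra measurement rows ("pseudo-satellites"), as the abstract describes. The geometry matrix, noise model, and threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def raim_test(G, rho, threshold):
    """G: n x 4 geometry matrix (unit vectors + clock column); rho: n
    linearized pseudorange residuals about a nominal position.
    Returns (sse, fault_detected)."""
    # least-squares position/clock correction from all measurements
    x, *_ = np.linalg.lstsq(G, rho, rcond=None)
    r = rho - G @ x              # post-fit residuals
    sse = float(r @ r)           # test statistic, ~ chi-square(n - 4) under H0
    return sse, sse > threshold
```

Adding landmark rows increases n - 4, the redundancy, which is why the hybrid system gains availability when few satellites are visible.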
Proteus: a reconfigurable computational network for computer vision
NASA Astrophysics Data System (ADS)
Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.
1992-04-01
The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Giga-flops (80 Giga-flops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external Read/Write Allocating Caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray scale dilation, erosion, opening, and closing.
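The binary morphology operations listed at the end of that abstract have compact definitions. A minimal sketch on binary images represented as sets of (row, col) foreground pixels, shown to illustrate the operations only, not Proteus's parallel implementation:

```python
def dilate(img, se):
    """Dilation: union of the image translated by each structuring-element
    offset."""
    return {(r + dr, c + dc) for (r, c) in img for (dr, dc) in se}

def erode(img, se):
    """Erosion: pixels where the structuring element fits entirely inside
    the foreground."""
    return {(r, c) for (r, c) in img
            if all((r + dr, c + dc) in img for (dr, dc) in se)}

def opening(img, se):
    return dilate(erode(img, se), se)   # removes small protrusions

def closing(img, se):
    return erode(dilate(img, se), se)   # fills small holes and gaps
```

Opening is anti-extensive (a subset of the input) and closing is extensive (a superset), which gives quick correctness checks.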
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1991-01-01
The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the location of each point in three-dimensional space can then be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means of applying heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
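Once stereo features are registered, the range recovery itself is plain triangulation. A sketch under the standard rectified-camera assumption (parallel cameras, baseline b, focal length f in pixels), illustrating the textbook geometry the abstract relies on rather than the Queen Victoria feature-extraction algorithm:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth Z = f * b / d for a rectified stereo pair, where the disparity
    d = x_left - x_right is measured in pixels."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point at infinity or mismatched correspondence")
    return focal_px * baseline_m / disparity
```

For example, a 10-pixel disparity with a 700-pixel focal length and a 10 cm baseline places the point 7 m away.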
Deniz, Oscar; Vallez, Noelia; Espinosa-Aranda, Jose L; Rico-Saavedra, Jose M; Parra-Patino, Javier; Bueno, Gloria; Moloney, David; Dehghani, Alireza; Dunne, Aubrey; Pagani, Alain; Krauss, Stephan; Reiser, Ruben; Waeny, Martin; Sorci, Matteo; Llewellynn, Tim; Fedorczak, Christian; Larmoire, Thierry; Herbst, Marco; Seirafi, Andre; Seirafi, Kasra
2017-05-21
Embedded systems control and monitor a great deal of our reality. While some "classic" features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous "intelligence". Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.
Crossland, Michael D; Thomas, Rachel; Unwin, Hilary; Bharani, Seelam; Gothwal, Vijaya K; Quartilho, Ana; Bunce, Catey; Dahlmann-Noor, Annegret
2017-06-21
Low vision and blindness adversely affect education and independence of children and young people. New 'assistive' technologies such as tablet computers can display text in enlarged font, read text out to the user, allow speech input and conversion into typed text, offer document and spreadsheet processing and give access to wide sources of information such as the internet. Research on these devices in low vision has been limited to case series. We will carry out a pilot randomised controlled trial (RCT) to assess the feasibility of a full RCT of assistive technologies for children/young people with low vision. We will recruit 40 students age 10-18 years in India and the UK, whom we will randomise 1:1 into two parallel groups. The active intervention will be Apple iPads; the control arm will be the local standard low-vision aid care. Primary outcomes will be acceptance/usage, accessibility of the device and trial feasibility measures (time to recruit children, lost to follow-up). Exploratory outcomes will be validated measures of vision-related quality of life for children/young people as well as validated measures of reading and educational outcomes. In addition, we will carry out semistructured interviews with the participants and their teachers. NRES reference 15/NS/0068; dissemination is planned via healthcare and education sector conferences and publications, as well as via patient support organisations. NCT02798848; IRAS ID 179658, UCL reference 15/0570. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Prevalence and causes of low vision and blindness in Baotou
Zhang, Guisen; Li, Yan; Teng, Xuelong; Wu, Qiang; Gong, Hui; Ren, Fengmei; Guo, Yuxia; Liu, Lei; Zhang, Han
2016-01-01
The aim of this study was to investigate the prevalence and causes of low vision and blindness in Baotou, Inner Mongolia. A cross-sectional study was carried out, with multistage sampling used to select samples. Visual acuity was estimated using LogMAR charts and corrected by pinhole as best-corrected visual acuity. There were 7000 samples selected and 5770 subjects included in this investigation. The overall bilateral prevalence rates of low vision and blindness were 3.66% (95% CI: 3.17–4.14) and 0.99% (95% CI: 0.73–1.24), respectively. The prevalence of bilateral low vision, blindness, and visual impairment increased with age and decreased with education level. The leading cause of low vision and blindness was cataract; diabetic retinopathy and age-related macular degeneration were the second leading causes of blindness in Baotou. Low vision and blindness were more prevalent in elderly people and in subjects with a low education level in Baotou. Cataract was the main cause of visual impairment, and more attention should be paid to fundus diseases. To prevent blindness, more eye care programs should be established. PMID:27631267
Wah, Win; Earnest, Arul; Sabanayagam, Charumathi; Cheng, Ching-Yu; Ong, Marcus Eng Hock; Wong, Tien Y.; Lamoureux, Ecosse L.
2015-01-01
Purpose To investigate the independent relationship of individual- and area-level socio-economic status (SES) with the presence and severity of visual impairment (VI) in an Asian population. Methods Cross-sectional data from 9993 Chinese, Malay and Indian adults aged 40–80 years who participated in the Singapore Epidemiology of Eye Diseases Study (2004–2011) in Singapore. Based on the presenting visual acuity (PVA) in the better-seeing eye, VI was categorized into normal vision (logMAR ≤0.30), low vision (logMAR >0.30 but <1.00), and blindness (logMAR ≥1.00). Any VI was defined as low vision or blindness based on the PVA of the better-seeing eye. Individual-level low SES was defined as a composite of primary-level education, monthly income <2000 SGD, and residence in a one- or two-room public apartment. Area-level SES was assessed using a socio-economic disadvantage index (SEDI) created from 12 variables in the 2010 Singapore census; a high SEDI score indicates relatively poor SES. Associations between SES measures and the presence and severity of VI were examined using multi-level, mixed-effects logistic and multinomial regression models. Results The age-adjusted prevalence of any VI was 19.62% (low vision = 19%, blindness = 0.62%). Both individual- and area-level SES were positively associated with any VI and low vision after adjusting for confounders. The odds ratio (95% confidence interval) of any VI was 2.11 (1.88–2.37) for low SES and 1.07 (1.02–1.13) per 1 standard deviation increase in SEDI. When stratified into unilateral/bilateral categories, low SES showed significant associations with all categories, whereas SEDI showed a significant association with bilateral low vision only. The association between low SES and any VI remained significant in all age, gender and ethnic sub-groups. Although a consistent positive association was observed between area-level SEDI and any VI, it was significant only among participants aged 40–65 years and among men.
Conclusion In this community-based sample of Asian adults, both individual- and area-level SES were independently associated with the presence and severity of VI. PMID:26555141
Bener, Abdulbari; Al-Mahdi, Huda S
2012-03-07
Little is known about the distribution of eye and vision conditions among school children in Qatar. The aim of this study was to examine the effects of excessive internet use and television viewing on low vision and its prevalence across socio-demographic characteristics. This cross-sectional study was carried out in the public and private schools of the Ministry of Education and Higher Education of the State of Qatar from September 2009 to April 2010. A total of 3200 students aged 6-18 years were invited to take part, of whom 2586 (80.8%) agreed. A questionnaire covering socio-demographic factors; internet use, television viewing and computer games; co-morbid factors; family history; and vision assessment was designed to collect information from the students, and was distributed by the school authorities. Of the school children studied (n=2586), 52.8% were girls and 47.2% boys. The overall prevalence of low vision was 15.2%. The prevalence of low vision was significantly higher in the age group 6-10 years (17.1%; P=0.05). Low vision was more prevalent among frequent television viewers (17.2%) than among infrequent viewers (14.0%). The proportion of children wearing glasses was higher among frequent internet users and television viewers (21.3%), and low vision without aid was also higher among frequent viewers. The study findings revealed a greater prevalence of low vision among frequent internet users and television viewers. The proportion of children wearing glasses was higher among frequent viewers. The prevalence of low vision decreased with increasing age.
Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.
NASA Astrophysics Data System (ADS)
Battiti, Roberto
1990-01-01
This thesis presents new algorithms for low- and intermediate-level computer vision. The guiding ideas are hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The adaptive multiple-scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution chosen to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion than with the homogeneous scheme. In some cases, explicit discontinuities coupled to the continuous variables can be introduced to avoid propagation of visual information between areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation to process the vast amount of visual data in "real time." Although under different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed-memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so efficiency is close to 100% if a large region is assigned to each processor.
Finally, learning algorithms are shown to be a viable technique for engineering computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well-known optimization method (the memoryless Broyden-Fletcher-Goldfarb-Shanno quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm in numerical stability, automatic selection of parameters, and convergence properties.
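The surface-to-volume argument behind the near-100% efficiency claim can be made concrete with a toy cost model (the unit costs below are hypothetical, for illustration only): per-processor work grows with the area of the assigned block, while halo communication grows only with its perimeter.

```python
def parallel_efficiency(n, t_comp=1.0, t_comm=1.0):
    """Toy efficiency model for a 2-D domain decomposition in which each
    processor owns an n-by-n block: computation scales with the block
    area (n * n) and boundary exchange with its perimeter (4 * n)."""
    work = t_comp * n * n
    comm = t_comm * 4 * n
    return work / (work + comm)

# Larger per-processor blocks amortize the boundary exchange.
for n in (8, 64, 512):
    print(n, round(parallel_efficiency(n), 3))
```

With a 512-by-512 block per processor the modeled efficiency already exceeds 99%, matching the qualitative claim in the abstract.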
ERIC Educational Resources Information Center
Aslan, Ummuhan Bas; Calik, Bilge Basakci; Kitis, Ali
2012-01-01
This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between…
Image Understanding Architecture
1991-09-01
…architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize it. Keywords: Image Understanding Architecture, knowledge-based vision, AI, real-time computer vision, software simulator, parallel processor. In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers…
Performance of computer vision in vivo flow cytometry with low fluorescence contrast
NASA Astrophysics Data System (ADS)
Markovic, Stacey; Li, Siyuan; Niedre, Mark
2015-03-01
Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC with low-contrast imaging conditions with (1) weak cell fluorescent labeling using cell-simulating fluorescent microspheres with varying brightness and (2) high background tissue autofluorescence by varying autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even in conditions with two orders of magnitude degraded contrast than our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.
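The enumeration step can be caricatured in a few lines (this is not the authors' algorithm; the function name and threshold are invented for illustration): flag frames whose fluorescence exceeds the background by a minimum contrast factor, merging consecutive detections into single cell events.

```python
def detect_cells(trace, background, min_contrast=2.0):
    """Flag frames where the fluorescence signal exceeds the background
    by `min_contrast`, merging consecutive frames into one cell event.
    Returns the frame index at which each event starts."""
    events, in_peak = [], False
    for t, value in enumerate(trace):
        if value >= min_contrast * background:
            if not in_peak:
                events.append(t)
            in_peak = True
        else:
            in_peak = False
    return events

# Two cells crossing the field of view: peaks at frames 2-3 and 6.
print(detect_cells([1, 1, 5, 6, 1, 1, 4, 1], background=1.0))  # [2, 6]
```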
A multidisciplinary approach to solving computer related vision problems.
Long, Jennifer; Helland, Magne
2012-09-01
This paper proposes a multidisciplinary approach to solving computer-related vision issues by including optometry as part of the problem-solving team. Computer workstation design is increasing in complexity. At least ten different professions contribute to workstation design or provide advice to improve worker comfort, safety and efficiency. Optometrists have a role in identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer-related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration, with examples of successful partnerships at a number of professional levels: individual relationships between optometrists and ergonomists when they have mutual clients/patients, undergraduate and postgraduate education, and research. There is also scope for dialogue between the optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve computer-related vision issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.
Low Cost Night Vision System for Intruder Detection
NASA Astrophysics Data System (ADS)
Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.
2016-02-01
The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust, low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as for a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. The systems were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to successfully detect the intruders.
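A minimal sketch of the histogram idea (pure Python rather than the authors' OpenCV implementation; the bin count and detection threshold are assumptions): compare each frame's coarse RGB histogram against an empty-scene reference and flag large departures.

```python
def rgb_histogram(pixels, bins=4):
    """Coarse, normalized RGB histogram; pixels are (r, g, b) in 0..255."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + b // step] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def intruder_present(frame, reference, threshold=0.8):
    """Flag a frame whose colour distribution departs from the
    empty-scene reference histogram by more than the threshold."""
    return intersection(rgb_histogram(frame), rgb_histogram(reference)) < threshold

# A dark empty scene vs. the same scene half-filled with bright pixels.
empty = [(10, 10, 10)] * 100
occupied = [(200, 180, 160)] * 50 + [(10, 10, 10)] * 50
print(intruder_present(empty, empty), intruder_present(occupied, empty))  # False True
```

OpenCV's `cv2.calcHist` and `cv2.compareHist` provide the production-grade equivalents of the first two helpers.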
Monitoring system of multiple fire fighting based on computer vision
NASA Astrophysics Data System (ADS)
Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke
2010-10-01
With the high demand for fire control in spacious buildings, computer vision is playing an increasingly important role. This paper presents a new monitoring system for fighting multiple fires based on computer vision and color detection. The system locates the fire position and then aims the hydrant to extinguish the fire automatically. The system structure, working principle, fire orientation, hydrant angle adjustment, and system calibration are described in detail, and the design of the relevant hardware and software is introduced. The principle and process of color detection and image processing are given as well. The system ran well in tests, offering high reliability, low cost, and easy node expansion, and has a bright prospect for application and popularization.
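The color-detection step can be sketched as a simple red-dominance rule followed by a centroid computation that could drive the hydrant aiming (a simplified stand-in, not the paper's method; the rule and threshold are assumptions):

```python
def is_fire_pixel(r, g, b, r_min=180):
    """Simplified fire-colour rule: flames are red-dominant
    (R > G > B) with a strong red channel."""
    return r > g > b and r >= r_min

def fire_centroid(image):
    """Return the (x, y) centroid of fire-coloured pixels, or None.
    `image` is a list of rows of (r, g, b) tuples."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if is_fire_pixel(r, g, b):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A 2x3 frame with two fire-coloured pixels in the middle column.
frame = [
    [(0, 0, 0), (255, 120, 40), (0, 0, 0)],
    [(0, 0, 0), (250, 130, 30), (10, 20, 30)],
]
print(fire_centroid(frame))  # (1.0, 0.5)
```

In a full system the centroid would be mapped through the calibration described in the paper to the hydrant's pan/tilt angles.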
A nationwide population-based study of low vision and blindness in South Korea.
Park, Shin Hae; Lee, Ji Sung; Heo, Hwan; Suh, Young-Woo; Kim, Seung-Hyun; Lim, Key Hwan; Moon, Nam Ju; Lee, Sung Jin; Park, Song Hee; Baek, Seung-Hee
2014-12-18
To investigate the prevalence and associated risk factors of low vision and blindness in the Korean population. This cross-sectional, population-based study examined the ophthalmologic data of 22,135 Koreans aged ≥5 years from the fifth Korea National Health and Nutrition Examination Survey (KNHANES V, 2010-2012). According to the World Health Organization criteria, blindness was defined as visual acuity (VA) less than 20/400 in the better-seeing eye, and low vision as VA of 20/60 or worse but 20/400 or better in the better-seeing eye. The prevalence rates were calculated from either presenting VA (PVA) or best-corrected VA (BCVA). Multivariate regression analysis was conducted for adults aged ≥20 years. The overall prevalence rates of PVA-defined low vision and blindness were 4.98% and 0.26%, respectively, and those of BCVA-defined low vision and blindness were 0.46% and 0.05%, respectively. Prevalence increased rapidly above the age of 70 years. For subjects aged ≥70 years, the population-weighted prevalence rates of low vision, based on PVA and BCVA, were 12.85% and 3.87%, respectively, and the corresponding rates of blindness were 0.49% and 0.42%, respectively. The presenting vision problems were significantly associated with age (younger adults or elderly subjects), female sex, low educational level, and lowest household income, whereas the best-corrected vision problems were associated with age ≥ 70 years, a low educational level, and rural residence. This population-based study provides useful information for planning optimal public eye health care services in South Korea. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
Robotic space simulation integration of vision algorithms into an orbital operations simulation
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.
1987-01-01
In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding as an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and figure-ground separation, are special kinds of network transformations: they convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, working similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, allowing the creation of intelligent computer vision systems for design and manufacturing.
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA (Field Programmable Gate Array) device that contains an architecture for real-time, low-level computer vision processing. The architecture can be reprogrammed remotely for application-specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration makes a hardware upgrade as straightforward as a software upgrade. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, with the possibility of downloading a software/hardware object from the host computer into its internal context memory. System advantages are small size, low power consumption, and a library of hardware/software functionalities that can be exchanged at run time. The system has been validated with an edge detection and a motion processing architecture, which are presented in the paper. Targeted applications are in robotics, mobile robotics, and vision-based quality control.
Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications
NASA Astrophysics Data System (ADS)
Olson, Gaylord G.; Walker, Jo N.
1997-09-01
Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a proliferation of 'digital cameras' aimed more at the home market; this latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. Such cameras can often satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For a computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.
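The pixel-count mismatch can be illustrated with a toy nearest-neighbour resampler (illustrative only; a real frame grabber samples an analog waveform rather than discrete values): when the number of A/D samples differs from the number of CCD pixels, columns are dropped or duplicated, whereas a matched computer camera reads the line back exactly.

```python
def resample_line(ccd_pixels, n_samples):
    """Nearest-neighbour resampling of one CCD line at n_samples A/D
    points, mimicking a frame grabber whose sample clock is not locked
    to the CCD pixel clock."""
    n = len(ccd_pixels)
    return [ccd_pixels[min(n - 1, int(i * n / n_samples))] for i in range(n_samples)]

line = list(range(8))
print(resample_line(line, 8))  # matched: identical to the CCD line
print(resample_line(line, 6))  # mismatched: pixels 3 and 7 are lost
```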
Jordan, Timothy R; McGowan, Victoria A; Paterson, Kevin B
2014-06-01
When reading, low-level visual properties of text are acquired from central vision during brief fixational pauses, but the effectiveness of these properties may differ in older age. To investigate, a filtering technique displayed the low, medium, or high spatial frequencies of text falling within central vision as young (18-28 years) and older (65+ years) adults read. Reading times for normal text did not differ across age groups, but striking differences in the effectiveness of spatial frequencies were observed. Consequently, even when young and older adults read equally well, the effectiveness of spatial frequencies in central vision differs markedly in older age. PsycINFO Database Record (c) 2014 APA, all rights reserved.
The Next Generation of Personal Computers.
ERIC Educational Resources Information Center
Crecine, John P.
1986-01-01
Discusses factors converging to create high-capacity, low-cost nature of next generation of microcomputers: a coherent vision of what graphics workstation and future computing environment should be like; hardware developments leading to greater storage capacity at lower costs; and development of software and expertise to exploit computing power…
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
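The coincidence principle at the heart of such models can be caricatured in plain Python (this is not the authors' spiking network; the event format, time window, and disparity bound are invented for illustration): left- and right-eye events on the same retina row that fire nearly simultaneously are taken as stereo correspondences, and their horizontal offset encodes depth.

```python
def match_stereo_events(left, right, window=1.0, max_disp=10):
    """Naive temporal-coincidence matcher for event-based stereo.
    Events are (t, x, y) tuples; a left/right pair on the same row
    firing within `window` time units is taken as a correspondence.
    Returns (x_left, y, disparity) tuples, disparity = x_left - x_right."""
    matches = []
    for tl, xl, yl in left:
        for tr, xr, yr in right:
            if yl == yr and abs(tl - tr) <= window and 0 <= xl - xr <= max_disp:
                matches.append((xl, yl, xl - xr))
    return matches

# One coincident pair (disparity 4); the second right event is too late.
left = [(0.00, 12, 5)]
right = [(0.05, 8, 5), (5.00, 11, 5)]
print(match_stereo_events(left, right))  # [(12, 5, 4)]
```

The spiking implementation replaces the nested loops with parallel coincidence-detector neurons, which is what makes the neuromorphic realization compact and low latency.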
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
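A one-line example of the passive-ranging geometry such a data base supports (the textbook motion-parallax relation for purely lateral camera translation, not the specific algorithms under development): a static point at depth Z sweeps across the image at f*v/Z pixels per second, so depth can be recovered from the measured optical flow.

```python
def depth_from_flow(flow_px_per_s, speed_m_per_s, focal_px):
    """Depth from lateral motion parallax: image motion is
    f * v / Z pixels/s for a camera of focal length f (pixels)
    translating sideways at v (m/s), so Z = f * v / flow."""
    return focal_px * speed_m_per_s / flow_px_per_s

# A feature flowing at 20 px/s, seen during a 2 m/s lateral pass
# with a 500-pixel focal length, lies about 50 m away.
print(depth_from_flow(20.0, 2.0, 500.0))  # 50.0
```

Nearer obstacles produce faster image motion, which is why ground-truth range measurements are needed to verify flow-based estimates.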
Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Young, Steven D.
2005-01-01
In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
Bullying in German Adolescents: Attending Special School for Students with Visual Impairment
ERIC Educational Resources Information Center
Pinquart, Martin; Pfeiffer, Jens P.
2011-01-01
The present study analysed bullying in German adolescents with and without visual impairment. Ninety-eight adolescents with vision loss from schools for students with visual impairment, of whom 31 were blind and 67 had low vision, were compared with 98 sighted peers using a matched-pair design. Students with low vision reported higher levels of…
Aslam, Tariq M; Parry, Neil R A; Murray, Ian J; Salleh, Mahani; Col, Caterina Dal; Mirza, Naznin; Czanner, Gabriela; Tahir, Humza J
2016-05-01
Many eye diseases require ongoing assessment for optimal management, creating an ever-increasing burden on patients and hospitals that could potentially be reduced through home vision monitoring. However, there is limited evidence for the utility of current applications and devices for this. To address this, we present a new automated, computer tablet-based method for self-testing near visual acuity (VA) for both high- and low-contrast targets, and report on its reliability and agreement with gold-standard measures. The Mobile Assessment of Vision by intERactIve Computer (MAVERIC) system consists of a calibrated computer tablet housed in a bespoke viewing chamber. Purpose-built software automatically elicits touch-screen responses from subjects to measure their near VA for either low- or high-contrast targets. Near high-contrast acuity was measured using both the MAVERIC system and a near Landolt C chart in one eye for 81 patients, and low-contrast acuity using the MAVERIC system and a 25% contrast near ETDRS chart in one eye of a separate 95 patients. MAVERIC near acuity was also retested after 20 min to evaluate repeatability. Repeatability of both high- and low-contrast MAVERIC acuity measures, and their agreement with the chart tests, was assessed using the Bland-Altman comparison method. One hundred and seventy-three patients (96%) completed the self-testing MAVERIC system without formal assistance. The resulting MAVERIC vision measures demonstrated good repeatability and good agreement with the gold-standard near chart measures. This study demonstrates the potential utility of the MAVERIC system for patients with ophthalmic disease to self-test their high- and low-contrast VA. The technique has a high degree of reliability and agreement with gold-standard chart-based measurements.
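The agreement analysis named above follows the standard Bland-Altman recipe, which is easy to state in code (a generic sketch, not the study's analysis script; the sample acuities below are made up):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bland-Altman statistics for two paired measurement methods:
    returns (bias, lower, upper), where bias is the mean difference
    and lower/upper are the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)
    return bias, bias - half_width, bias + half_width

# Hypothetical near logMAR acuities: tablet test vs. printed chart.
tablet = [0.10, 0.20, 0.30, 0.40, 0.50]
chart = [0.12, 0.18, 0.33, 0.38, 0.52]
bias, lo, hi = bland_altman(tablet, chart)
print(round(bias, 4), round(lo, 4), round(hi, 4))
```

Good agreement means a bias near zero and limits of agreement narrow enough to be clinically unimportant.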
Treating presbyopia without spectacles
NASA Astrophysics Data System (ADS)
Xu, Renfeng
Both multifocal optics and small pupils can increase the depth of focus (DoF) of presbyopes. This thesis evaluates some of the unique challenges faced by each of these two strategies. First, there is no single spherical refracting lens that can focus all parts of the pupil of an aberrated eye. What is the objective and subjective spherical refractive error (Rx) for such an eye, and does it vary with the amount of primary spherical aberration (SA)? Using both computational modeling and psychophysical methods, we found that high levels of positive Seidel SA caused both objective and subjective refractions to become myopic. Significantly, this refractive shift varied with stimulus spatial frequency and subjective criterion. Second, although secondary SA can dramatically expand DoF, we show that this is mostly due to the lower order components within this polynomial, which can also change spherical Rx. Also, the r^6 term that defines secondary SA actually narrows rather than expands DoF when in the presence of the r^4 term within Z_6^0. Finally, as retinal illuminance drops, neural thresholds are elevated due to increased photon noise. We asked whether the gains in near and distance vision of presbyopes anticipated at high light levels would be cancelled or even reversed at low light levels because of the additional reduction in retinal illuminance contributed by small pupils. We found that when light levels are > 2 cd/m^2, a small pupil with a diameter of 2-3 mm improves near image quality, near visual acuity, and near reading speed without significant loss of distance image quality and distance vision. This result gains added significance because we also showed that low light level text in the urban environment always has luminance levels > 2 cd/m^2. In conclusion, both small pupils and multifocal optics face significant challenges as near vision aids for presbyopes.
However, some of the confounding effects of elevated SA levels are avoided by using small pupils to expand DoF, which can provide improved near and distance vision at most light levels encountered while reading.
Oestrogen, ocular function and low-level vision: a review.
Hutchinson, Claire V; Walker, James A; Davidson, Colin
2014-11-01
Over the past 10 years, a literature has emerged concerning the sex steroid hormone oestrogen and its role in human vision. Herein, we review evidence that oestrogen (oestradiol) levels may significantly affect ocular function and low-level vision, particularly in older females. In doing so, we have examined a number of vision-related disorders including dry eye, cataract, increased intraocular pressure, glaucoma, age-related macular degeneration and Leber's hereditary optic neuropathy. In each case, we have found oestrogen, or lack thereof, to have a role. We have also included discussion of how oestrogen-related pharmacological treatments for menopause and breast cancer can impact the pathology of the eye and a number of psychophysical aspects of vision. Finally, we have reviewed oestrogen's pharmacology and suggest potential mechanisms underlying its beneficial effects, with particular emphasis on anti-apoptotic and vascular effects. © 2014 Society for Endocrinology.
Understanding and preventing computer vision syndrome.
Loh, KY; Reddy, SC
2008-01-01
The invention of the computer and advances in information technology have revolutionized and benefited society, but at the same time have caused symptoms related to computer usage such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. The visual effects of the computer display, such as brightness, resolution, glare and quality, are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.
Recent advances in the development and transfer of machine vision technologies for space
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P.; Pendleton, Thomas
1991-01-01
Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.
Computer vision uncovers predictors of physical urban change.
Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A
2017-07-18
Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements-an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements-an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods-an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.
Performance of computer vision in vivo flow cytometry with low fluorescence contrast
Markovic, Stacey; Li, Siyuan; Niedre, Mark
2015-01-01
Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres with varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models. PMID:25822954
Connectionist Models and Parallelism in High Level Vision.
1985-01-01
Jerome A. Feldman. Grant N00014-82-K-0193. [Scanned-report front matter; only fragments of the abstract survive:] "...Computer science is just beginning to look seriously at parallel computation..." "...The program includes intermediate level networks that compute more complex joints and ones that compute parallelograms in the image."
[Ophthalmologist and "computer vision syndrome"].
Barar, A; Apatachioaie, Ioana Daniela; Apatachioaie, C; Marceanu-Brasov, L
2007-01-01
The authors have tried to collect the data available on the Internet about a subject that we consider totally ignored in the Romanian scientific literature and unexpectedly insufficiently treated in the specialized ophthalmologic literature. Known in the specialty literature under the generic name of "computer vision syndrome", it is defined by the American Optometric Association as a complex of eye and vision problems related to activities that stress near vision and that are experienced in relation to, or during, the use of the computer. During consultations we hear frequent complaints of eye strain (asthenopia), headaches, blurred distance and/or near vision, dry and irritated eyes, slow refocusing, neck and backache, photophobia, sensation of diplopia, light sensitivity, and double vision, but because of the lack of information we overlook them too easily, without going thoroughly into the real motives. In most developed countries, there are recommendations issued by renowned medical associations with regard to the definition, the diagnosis, and the methods for the prevention, treatment and periodic control of the symptoms found in computer users, in conjunction with extremely detailed ergonomic legislation. We found that these problems attract much too little interest in our country. We would like to rouse the interest of our ophthalmologist colleagues in the understanding and recognition of these symptoms and in their treatment, or at least their improvement, through specialized measures or through cooperation with our colleagues specializing in occupational medicine.
Optical needs of students with low vision in integrated schools of Nepal.
Gnyawali, Subodh; Shrestha, Jyoti Baba; Bhattarai, Dipesh; Upadhyay, Madan
2012-12-01
To identify the optical needs of students with low vision studying in the integrated schools for the blind in Nepal. A total of 779 blind and vision-impaired students studying in 67 integrated schools for the blind across Nepal were examined using the World Health Organization/Prevention of Blindness Eye Examination Record for Children with Blindness and Low Vision. Glasses and low-vision devices were provided to the students with low vision who showed improvement in visual acuity up to a level considered sufficient for classroom learning. Follow-up on the use and maintenance of the devices provided was done after a year. Almost 78% of students studying in the integrated schools for the blind were not actually blind; they had low vision. Five students were found to be wrongly enrolled. Avoidable causes of blindness were responsible for 41% of all blindness. Among 224 students who had visual acuity of 1/60 or better, distance vision could be improved in 18.7%, whereas near vision could be improved in 41.1% of students. Optical intervention improved vision in 48.2% of students who were learning braille. Only 34.8% of students were found to be using the devices regularly at assessment 1 year later; the most common causes for nonuse were damage or misplacement of the device. A high proportion of students with low vision in integrated schools could benefit from optical intervention. A system of comprehensive eye examination at the time of school enrollment would allow students with low vision to use their available vision to the fullest, encourage print reading over braille, ensure appropriate placement, and promote timely adoption and proper usage of optical devices.
Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip
2015-07-01
Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need to correct for scene deviation from the basic inverse distance-squared law governing detection rates, even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
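To illustrate the inverse distance-squared law the abstract invokes, here is a minimal calibration sketch: with per-frame source distances from the vision tracker and matched radiological count rates, a single scale factor in rate = A / d^2 has a closed-form least-squares solution. The function names and synthetic data are assumptions for illustration, not part of the published system.

```python
import numpy as np

def fit_inverse_square(distances, count_rates):
    """Fit the one-parameter model rate = A / d**2 by least squares.

    Regressing the observed count rates on the inverse-square distances
    gives the closed-form solution A = sum(x*r) / sum(x*x), x = 1/d**2.
    """
    d = np.asarray(distances, dtype=float)
    r = np.asarray(count_rates, dtype=float)
    x = 1.0 / d**2                      # regressor: inverse-square distance
    return float((x * r).sum() / (x * x).sum())

def predicted_rate(A, distance):
    """Expected count rate at a given source distance under the model."""
    return A / distance**2

# Synthetic calibration run: a source moved through the tracked scene.
d = np.array([1.0, 2.0, 4.0, 5.0])
rates = 100.0 / d**2                    # noiseless inverse-square data
A = fit_inverse_square(d, rates)
```

Residuals between observed rates and `predicted_rate` would then expose exactly the scene-dependent deviations from the inverse-square law that the calibration algorithms must characterize.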
NASA Astrophysics Data System (ADS)
Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph
2016-09-01
CVIPtools is a software package for the exploration of computer vision and image processing, developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools Graphical User Interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, an algorithm for the automatic creation of masks for veterinary thermographic images is presented.
Four Frames Suffice. A Provisionary Model of Vision and Space,
1982-09-01
[Scanned-report front matter; only fragments of the abstract survive:] "...This paper is an attempt to specify a computationally and scientifically plausible model of how..." "...abstract neural computing unit and a variety of constructions built of these units and their properties. All of this is part of the connectionist..." "...chosen are intended to elucidate the major scientific problems in intermediate level vision and would not be the best choice for a practical computer..."
Adjustable typography: an approach to enhancing low vision text accessibility.
Arditi, Aries
2004-04-15
Millions of people have low vision, a disability condition caused by uncorrectable or partially correctable disorders of the eye. The primary goal of low vision rehabilitation is increasing access to printed material. This paper describes how adjustable typography, a computer graphic approach to enhancing text accessibility, can play a role in this process by allowing visually-impaired users to customize fonts to maximize legibility according to their own visual needs. Prototype software and initial testing of the concept are described. The results show that visually-impaired users tend to produce a variety of very distinct fonts, and that the adjustment process results in greatly enhanced legibility. However, this initial testing has not yet demonstrated increases in legibility over and above that of highly legible standard fonts such as Times New Roman.
Deemer, Ashley D; Massof, Robert W; Rovner, Barry W; Casten, Robin J; Piersol, Catherine V
2017-03-01
To compare the efficacy of behavioral activation (BA) plus low vision rehabilitation with an occupational therapist (OT-LVR) with supportive therapy (ST) on visual function in patients with age-related macular degeneration (AMD). Single-masked, attention-controlled, randomized clinical trial with AMD patients with subsyndromal depressive symptoms (n = 188). All subjects had two outpatient low vision rehabilitation optometry visits, then were randomized to in-home BA + OT-LVR or ST. Behavioral activation is a structured behavioral treatment aiming to increase adaptive behaviors and achieve valued goals. Supportive therapy is a nondirective, psychological treatment that provides emotional support and controls for attention. Functional vision was assessed with the activity inventory (AI) in which participants rate the difficulty level of goals and corresponding tasks. Participants were assessed at baseline and 4 months. Improvements in functional vision measures were seen in both the BA + OT-LVR and ST groups at the goal level (d = 0.71; d = 0.56 respectively). At the task level, BA + OT-LVR patients showed more improvement in reading, inside-the-home tasks and outside-the-home tasks, when compared to ST patients. The greatest effects were seen in the BA + OT-LVR group in subjects with a visual acuity ≥20/70 (d = 0.360 reading; d = 0.500 inside the home; d = 0.468 outside the home). Based on the trends of the AI data, we suggest that BA + OT-LVR services, provided by an OT in the patient's home following conventional low vision optometry services, are more effective than conventional optometric low vision services alone for those with mild visual impairment. (ClinicalTrials.gov number, NCT00769015.).
Vision based techniques for rotorcraft low altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Suorsa, Ray; Smith, Philip
1991-01-01
An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to the automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image data base for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except in regions close to the focus of expansion (FOE). Closer to the FOE, the error in range increases because the magnitude of the disparity gets smaller, resulting in a low signal-to-noise ratio (SNR).
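The low-SNR behavior near the FOE can be seen in a simple passive-ranging sketch. Under the assumption of pure forward translation at known speed V, a feature at image distance r from the FOE diverging at rate dr/dt has depth Z = V * r / (dr/dt); the function below is hypothetical and is not the methodology developed at Ames.

```python
def range_from_foe(r_pixels, dr_pixels_per_s, speed_m_per_s):
    """Passive range estimate for pure forward translation.

    For a camera translating along its optical axis at known speed V,
    a feature at image distance r from the focus of expansion (FOE)
    diverges at rate dr/dt, and its depth is Z = V * r / (dr/dt).
    Near the FOE both r and dr/dt shrink toward zero, so measurement
    noise dominates the estimate -- the low-SNR regime the abstract
    describes.
    """
    if dr_pixels_per_s <= 0:
        raise ValueError("feature must diverge from the FOE")
    return speed_m_per_s * r_pixels / dr_pixels_per_s

# A feature 100 px from the FOE diverging at 5 px/s, with the
# helicopter flying at 10 m/s, lies at 10 * 100 / 5 = 200 m.
Z = range_from_foe(100.0, 5.0, 10.0)
```

Because the estimate divides by dr/dt, a fixed pixel-level measurement error produces an unbounded range error as the feature approaches the FOE.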
Can computational goals inform theories of vision?
Anderson, Barton L
2015-04-01
One of the most lasting contributions of Marr's posthumous book is his articulation of the different "levels of analysis" that are needed to understand vision. Although a variety of work has examined how these different levels are related, there is comparatively little examination of the assumptions on which his proposed levels rest, or the plausibility of the approach Marr articulated given those assumptions. Marr placed particular significance on computational level theory, which specifies the "goal" of a computation, its appropriateness for solving a particular problem, and the logic by which it can be carried out. The structure of computational level theory is inherently teleological: What the brain does is described in terms of its purpose. I argue that computational level theory, and the reverse-engineering approach it inspires, requires understanding the historical trajectory that gave rise to functional capacities that can be meaningfully attributed with some sense of purpose or goal, that is, a reconstruction of the fitness function on which natural selection acted in shaping our visual abilities. I argue that this reconstruction is required to distinguish abilities shaped by natural selection-"natural tasks" -from evolutionary "by-products" (spandrels, co-optations, and exaptations), rather than merely demonstrating that computational goals can be embedded in a Bayesian model that renders a particular behavior or process rational. Copyright © 2015 Cognitive Science Society, Inc.
Computer vision syndrome and ergonomic practices among undergraduate university students.
Mowatt, Lizette; Gordon, Carron; Santosh, Arvind Babu Rajendra; Jones, Thaon
2018-01-01
To determine the prevalence of computer vision syndrome (CVS) and ergonomic practices among students in the Faculty of Medical Sciences at The University of the West Indies (UWI), Jamaica. A cross-sectional study was done with a self-administered questionnaire. Four hundred and nine students participated; 78% were females. The mean age was 21.6 years. Neck pain (75.1%), eye strain (67%), shoulder pain (65.5%) and eye burn (61.9%) were the most common CVS symptoms. Dry eyes (26.2%), double vision (28.9%) and blurred vision (51.6%) were the least commonly experienced symptoms. Eye burning (P = .001), eye strain (P = .041) and neck pain (P = .023) were significantly related to the level of viewing. Moderate eye burning (55.1%) and double vision (56%) occurred in those who used handheld devices (P = .001 and .007, respectively). Moderate blurred vision was reported in 52% of those who looked down at the device, compared with 14.8% of those who held it at an angle. Severe eye strain occurred in 63% of those who looked down at a device, compared with 21% of those who kept the device at eye level. Shoulder pain was not related to pattern of use. Ocular symptoms and neck pain were less likely if the device was held just below eye level. There is a high prevalence of CVS symptoms amongst university students; these, in particular neck pain, eye strain and eye burning, could be reduced with improved ergonomic practices. © 2017 John Wiley & Sons Ltd.
Papadopoulos, Konstantinos
2014-03-01
In this study, the impact of personal/individual characteristics (gender, vision status, age, age at loss of sight, recency of vision loss, education level, employment status, and ability of independent movement) on locus of control (LOC) and self-esteem was examined. Eighty-four young adults with visual impairments (42 with blindness and 42 with low vision) took part in this study. The significant predictors of self-esteem were vision status, age at loss of sight, recency of vision loss and educational level. Moreover, significant predictors of LOC were vision status and independent movement. Copyright © 2014 Elsevier Ltd. All rights reserved.
GPS Usage in a Population of Low-Vision Drivers.
Cucuras, Maria; Chun, Robert; Lee, Patrick; Jay, Walter M; Pusateri, Gregg
2017-01-01
We surveyed bioptic and non-bioptic low-vision drivers in Illinois, USA, to determine their usage of global positioning system (GPS) devices. Low-vision patients completed an IRB-approved phone survey regarding driving demographics and usage of GPS while driving. Participants were required to be active drivers with an Illinois driver's license, and met one of the following criteria: best-corrected visual acuity (BCVA) less than or equal to 20/40, central or significant peripheral visual field defects, or a combination of both. Of 27 low-vision drivers, 10 (37%) used GPS while driving. The average age for GPS users was 54.3 and for non-users was 77.6. All 10 drivers who used GPS while driving reported increased comfort or safety level. Since non-GPS users were significantly older than GPS users, it is likely that older participants would benefit from GPS technology training from their low-vision eye care professionals.
Federal regulation of vision enhancement devices for normal and abnormal vision
NASA Astrophysics Data System (ADS)
Drum, Bruce
2006-09-01
The Food and Drug Administration (FDA) evaluates the safety and effectiveness of medical devices and biological products as well as food and drugs. The FDA defines a device as a product that is intended, by physical means, to diagnose, treat, or prevent disease, or to affect the structure or function of the body. All vision enhancement devices fulfill this definition because they are intended to affect a function (vision) of the body. In practice, however, FDA historically has drawn a distinction between devices that are intended to enhance low vision as opposed to normal vision. Most low vision aids are therapeutic devices intended to compensate for visual impairment, and are actively regulated according to their level of risk to the patient. The risk level is usually low (e.g. Class I, exempt from 510(k) submission requirements for magnifiers that do not touch the eye), but can be as high as Class III (requiring a clinical trial and Premarket Approval (PMA) application) for certain implanted and prosthetic devices (e.g. intraocular telescopes and prosthetic retinal implants). In contrast, the FDA usually does not actively enforce its regulations for devices that are intended to enhance normal vision, are low risk, and do not have a medical intended use. However, if an implanted or prosthetic device were developed for enhancing normal vision, the FDA would likely decide to regulate it actively, because its intended use would entail a substantial medical risk to the user. Companies developing such devices should contact the FDA at an early stage to clarify their regulatory status.
Computer vision syndrome in presbyopia and beginning presbyopia: effects of spectacle lens type.
Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique
2015-05-01
This office field study investigated the effects of different types of spectacle lenses habitually worn by computer users with presbyopia and in the beginning stages of presbyopia. Computer vision syndrome was assessed through reported complaints and ergonomic conditions. A questionnaire regarding the type of habitually worn near-vision lenses at the workplace, visual conditions and the levels of different types of complaints was administered to 175 participants aged 35 years and older (mean ± SD: 52.0 ± 6.7 years). Statistical factor analysis identified five specific aspects of the complaints. Workplace conditions were analysed based on photographs taken in typical working conditions. In the subgroup of 25 users between the ages of 36 and 57 years (mean 44 ± 5 years), who wore distance-vision lenses and performed more demanding occupational tasks, the reported extents of 'ocular strain', 'musculoskeletal strain' and 'headache' increased with the daily duration of computer work and explained up to 44 per cent of the variance (rs = 0.66). In the other subgroups, this effect was smaller, while in the complete sample (n = 175), this correlation was approximately rs = 0.2. The subgroup of 85 general-purpose progressive lens users (mean age 54 years) adopted head inclinations that were approximately seven degrees more elevated than those of the subgroups with single vision lenses. The present questionnaire was able to assess the complaints of computer users depending on the type of spectacle lenses worn. A missing near-vision addition among participants in the early stages of presbyopia was identified as a risk factor for complaints among those with longer daily durations of demanding computer work. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique
2015-05-01
Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
Survey of computer vision-based natural disaster warning systems
NASA Astrophysics Data System (ADS)
Ko, ByoungChul; Kwak, Sooyeong
2012-07-01
With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, such as wildfire smoke and flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.
ERIC Educational Resources Information Center
Argyropoulos, Vassilis; Papadimitriou, Vassilios
2015-01-01
Introduction: The present study assesses the performance of students who are visually impaired (that is, those who are blind or have low vision) in braille reading accuracy and examines potential correlations among the error categories on the basis of gender, age at loss of vision, and level of education. Methods: Twenty-one visually impaired…
Near real-time stereo vision system
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)
1993-01-01
The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
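The pipeline described above, band-pass filtering followed by window-based correlation matching over a disparity range, can be illustrated in a few lines. This is a simplified sketch, not the patented system: it substitutes a difference-of-Gaussians band-pass for the Laplacian pyramid, uses plain sum-of-squared-differences in place of least-squares correlation with Bayesian confidence estimation, and all function names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def bandpass(img, s1=1.0, s2=2.0):
    # Difference of Gaussians as a cheap band-pass filter,
    # standing in for one level of a Laplacian pyramid.
    return gaussian_filter(img, s1) - gaussian_filter(img, s2)

def disparity_ssd(left, right, max_disp=8, win=5):
    # For each pixel, pick the horizontal shift of the right image
    # that minimizes the windowed sum of squared differences (SSD)
    # against the left image.
    h, w = left.shape
    best = np.zeros((h, w), dtype=int)
    best_cost = np.full((h, w), np.inf)
    window = np.ones((win, win))
    for d in range(max_disp + 1):
        shifted = np.roll(right, d, axis=1)           # candidate disparity d
        cost = convolve((left - shifted) ** 2, window, mode='nearest')
        better = cost < best_cost
        best[better] = d                              # keep the cheapest match
        best_cost[better] = cost[better]
    return best
```

A real system would add sub-pixel interpolation and a confidence estimate per pixel; this sketch only recovers integer disparities.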
NASA Astrophysics Data System (ADS)
Moore, Linda A.; Ferreira, Jannie T.
2003-03-01
Sports vision encompasses the visual assessment and provision of sports-specific visual performance enhancement and ocular protection for athletes of all ages, genders and levels of participation. In recent years, sports vision has been identified as one of the key performance indicators in sport. It is built on four main cornerstones: corrective eyewear, protective eyewear, visual skills enhancement and performance enhancement. Although clinically well established in the US, it is still a relatively new area of optometric specialisation elsewhere in the world and is gaining increasing popularity with eyecare practitioners and researchers. This research is often multi-disciplinary and involves input from a variety of subject disciplines, mainly those of optometry, medicine, physiology, psychology, physics, chemistry, computer science and engineering. Collaborative research projects are currently underway between staff of the Schools of Physics and Computing (DIT) and the Academy of Sports Vision (RAU).
Perceptual organization in computer vision - A review and a proposal for a classificatory structure
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1993-01-01
The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.
Adaptive Technology that Provides Access to Computers. DO-IT Program.
ERIC Educational Resources Information Center
Washington Univ., Seattle.
This brochure describes the different types of barriers individuals with mobility impairments, blindness, low vision, hearing impairments, and specific learning disabilities face in providing computer input, interpreting output, and reading documentation. The adaptive hardware and software that has been developed to provide functional alternatives…
Máthé, Koppány; Buşoniu, Lucian
2015-01-01
Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations. PMID:26121608
Differentiation of Ecuadorian National and CCN-51 cocoa beans and their mixtures by computer vision.
Jimenez, Juan C; Amores, Freddy M; Solórzano, Eddyn G; Rodríguez, Gladys A; La Mantia, Alessandro; Blasi, Paolo; Loor, Rey G
2018-05-01
Ecuador exports two major types of cocoa beans, the highly regarded and lucrative National, known for its fine aroma, and the CCN-51 clone type, used in bulk for mass chocolate products. In order to discourage exportation of National cocoa adulterated with CCN-51, a fast and objective methodology for distinguishing between the two types of cocoa beans is needed. This study reports a methodology based on computer vision, which makes it possible to recognize these beans and determine the percentage of their mixture. The methodology was challenged with 336 samples of National cocoa and 127 of CCN-51. By excluding the samples with a low fermentation level and white beans, the model discriminated with a precision higher than 98%. The model was also able to identify and quantify adulterations in 75 export batches of National cocoa and separate out poorly fermented beans. A scientifically reliable methodology able to discriminate between Ecuadorian National and CCN-51 cocoa beans and their mixtures was successfully developed. © 2017 Society of Chemical Industry.
Mathematical modelling of animate and intentional motion.
Rittscher, Jens; Blake, Andrew; Hoogs, Anthony; Stein, Gees
2003-01-01
Our aim is to enable a machine to observe and interpret the behaviour of others. Mathematical models are employed to describe certain biological motions. The main challenge is to design models that are both tractable and meaningful. In the first part we will describe how computer vision techniques, in particular visual tracking, can be applied to recognize a small vocabulary of human actions in a constrained scenario. Mainly the problems of viewpoint and scale invariance need to be overcome to formalize a general framework. Hence the second part of the article is devoted to the question whether a particular human action should be captured in a single complex model or whether it is more promising to make extensive use of semantic knowledge and a collection of low-level models that encode certain motion primitives. Scene context plays a crucial role if we intend to give a higher-level interpretation rather than a low-level physical description of the observed motion. A semantic knowledge base is used to establish the scene context. This approach consists of three main components: visual analysis, the mapping from vision to language and the search of the semantic database. A small number of robust visual detectors is used to generate a higher-level description of the scene. The approach together with a number of results is presented in the third part of this article. PMID:12689374
Assistive technology for children and young people with low vision.
Thomas, Rachel; Barker, Lucy; Rubin, Gary; Dahlmann-Noor, Annegret
2015-06-18
Recent technological developments, such as the near universal spread of mobile phones and portable computers and improvements in the accessibility features of these devices, give children and young people with low vision greater independent access to information. Some electronic technologies, such as closed circuit TV, are well established low vision aids and newer versions, such as electronic readers or off-the shelf tablet computers, may offer similar functionalities with easier portability and at lower cost. To assess the effect of electronic assistive technologies on reading, educational outcomes and quality of life in children and young people with low vision. We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 9), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to October 2014), EMBASE (January 1980 to October 2014), the Health Technology Assessment Programme (HTA) (www.hta.ac.uk/), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 30 October 2014. We intended to include randomised controlled trials (RCTs) and quasi-RCTs in this review. We planned to include trials involving children between the ages of 5 and 16 years with low vision as defined by, or equivalent to, the WHO 1992 definition of low vision. We planned to include studies that explore the use of assistive technologies (ATs). 
These could include all types of closed circuit television/electronic vision enhancement systems (CCTV/EVES), computer technology including tablet computers and adaptive technologies such as screen readers, screen magnification and optical character recognition (OCR). We intended to compare the use of ATs with standard optical aids, which include distance refractive correction (with appropriate near addition for aphakic (no lens)/pseudophakic (with lens implant) patients) and monocular/binoculars for distance and brightfield magnifiers for near. We also planned to include studies that compare different types of ATs with each other, without or in addition to conventional optical aids, and those that compare ATs given with or without instructions for use. Independently, two review authors reviewed titles and abstracts for eligibility. They divided studies into categories to 'definitely include', 'definitely exclude' and 'possibly include', and the same two authors made final judgements about inclusion/exclusion by obtaining full-text copies of the studies in the 'possibly include' category. We did not identify any randomised controlled trials in this subject area. High-quality evidence about the usefulness of electronic AT for children and young people with visual impairment is needed to inform the choice healthcare and education providers and family have to make when selecting a technology. Randomised controlled trials are needed to assess the impact of AT. Research protocols should carefully select outcomes relevant not only to the scientific community, but more importantly to families and teachers. Functional outcomes such as reading accuracy, comprehension and speed should be recorded, as well as the impact of AT on independent learning and quality of life.
Weidling, Patrick; Jaschinski, Wolfgang
2015-01-01
When presbyopic employees wear general-purpose progressive lenses, they have clear vision of the computer monitor only at a lower gaze inclination, given that the head assumes a comfortable inclination. Therefore, in the present intervention field study the monitor position was lowered, also with the aim of reducing musculoskeletal symptoms. A comparison group comprised users of lenses that do not restrict the field of clear vision. The lower monitor positions led the participants to lower their head inclination, which was linearly associated with a significant reduction in musculoskeletal symptoms. However, for progressive lenses a lower head inclination means a lower zone of clear vision, so that clear vision of the complete monitor was not achieved; rather, the monitor should have been placed even lower. The procedures of this study may be useful for optimising the individual monitor position depending on the comfortable head and gaze inclination and the vertical zone of clear vision of progressive lenses. For users of general-purpose progressive lenses, it is suggested that low monitor positions allow for clear vision at the monitor and for a physiologically favourable head inclination. Employees may improve their workplace using a flyer providing ergonomic-optometric information.
Non-Boolean computing with nanomagnets for computer vision applications
NASA Astrophysics Data System (ADS)
Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep
2016-02-01
The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.
The loss and recovery of vertebrate vision examined in microplates.
Thorn, Robert J; Clift, Danielle E; Ojo, Oladele; Colwill, Ruth M; Creton, Robbert
2017-01-01
Regenerative medicine offers potentially ground-breaking treatments of blindness and low vision. However, as new methodologies are developed, a critical question will need to be addressed: how do we monitor in vivo for functional success? In the present study, we developed novel behavioral assays to examine vision in a vertebrate model system. In the assays, zebrafish larvae are imaged in multiwell or multilane plates while various red, green, blue, yellow or cyan objects are presented to the larvae on a computer screen. The assays were used to examine a loss of vision at 4 or 5 days post-fertilization and a gradual recovery of vision in subsequent days. The developed assays are the first to measure the loss and recovery of vertebrate vision in microplates and provide an efficient platform to evaluate novel treatments of visual impairment.
A comparison of symptoms after viewing text on a computer screen and hardcopy.
Chu, Christina; Rosenfield, Mark; Portello, Joan K; Benzoni, Jaclyn A; Collier, Juanita D
2011-01-01
Computer vision syndrome (CVS) is a complex of eye and vision problems experienced during or related to computer use. Ocular symptoms may include asthenopia, accommodative and vergence difficulties and dry eye. CVS occurs in up to 90% of computer workers, and given the almost universal use of these devices, it is important to identify whether these symptoms are specific to computer operation, or are simply a manifestation of performing a sustained near-vision task. This study compared ocular symptoms immediately following a sustained near task. 30 young, visually-normal subjects read text aloud either from a desktop computer screen or a printed hardcopy page at a viewing distance of 50 cm for a continuous 20 min period. Identical text was used in the two sessions, which was matched for size and contrast. Target viewing angle and luminance were similar for the two conditions. Immediately following completion of the reading task, subjects completed a written questionnaire asking about their level of ocular discomfort during the task. When comparing the computer and hardcopy conditions, significant differences in median symptom scores were reported with regard to blurred vision during the task (t = 147.0; p = 0.03) and the mean symptom score (t = 102.5; p = 0.04). In both cases, symptoms were higher during computer use. Symptoms following sustained computer use were significantly worse than those reported after hard copy fixation under similar viewing conditions. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will allow practitioners to optimize visual comfort and efficiency during computer operation.
Silva, Paolo S; Walia, Saloni; Cavallerano, Jerry D; Sun, Jennifer K; Dunn, Cheri; Bursell, Sven-Erik; Aiello, Lloyd M; Aiello, Lloyd Paul
2012-09-01
To compare agreement between diagnosis of clinical level of diabetic retinopathy (DR) and diabetic macular edema (DME) derived from nonmydriatic fundus images using a digital camera back optimized for low-flash image capture (MegaVision) compared with standard seven-field Early Treatment Diabetic Retinopathy Study (ETDRS) photographs and dilated clinical examination. Subject comfort and image acquisition time were also evaluated. In total, 126 eyes from 67 subjects with diabetes underwent Joslin Vision Network nonmydriatic retinal imaging. ETDRS photographs were obtained after pupillary dilation, and fundus examination was performed by a retina specialist. There was near-perfect agreement between MegaVision and ETDRS photographs (κ=0.81, 95% confidence interval [CI] 0.73-0.89) for clinical DR severity levels. Substantial agreement was observed with clinical examination (κ=0.71, 95% CI 0.62-0.80). For DME severity level there was near-perfect agreement with ETDRS photographs (κ=0.92, 95% CI 0.87-0.98) and moderate agreement with clinical examination (κ=0.58, 95% CI 0.46-0.71). The wider MegaVision 45° field led to identification of nonproliferative changes in areas not imaged by the 30° field of ETDRS photos. Field area unique to ETDRS photographs identified proliferative changes not visualized with MegaVision. Mean MegaVision acquisition time was 9:52 min. After imaging, 60% of subjects preferred the MegaVision lower flash settings. When evaluated using a rigorous protocol, images captured using a low-light digital camera compared favorably with ETDRS photography and clinical examination for grading level of DR and DME. Furthermore, these data suggest the importance of more extensive peripheral images and suggest that utilization of wide-field retinal imaging may further improve accuracy of DR assessment.
National survey of blindness and low vision in Lebanon
Mansour, A; Kassak, K.; Chaya, M.; Hourani, T.; Sibai, A.; Alameddine, M
1997-01-01
AIMS: To survey the level of blindness and low vision in Lebanon. METHODS: A population survey was undertaken in 10 148 individuals to measure the prevalence and identify the causes of blindness in Lebanon. RESULTS: The prevalence of blindness was 0.6% and that of low vision 3.9%. The major causes of blindness were cataract (41.3%) and uncorrected large refractive error (12.6%). CONCLUSION: Most causes of blindness in Lebanon can be controlled by various educational and medical programmes. PMID:9486035
A computational visual saliency model based on statistics and machine learning.
Lin, Ru-Je; Lin, Wei-Song
2014-08-01
Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
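The combination step described above, deriving several property maps and merging them "by a simple intersection operation", can be sketched as a pixelwise minimum of normalized maps: a location is salient only if all properties agree. This is an illustrative reconstruction, not the authors' code; the function names and the choice of minimum as the intersection operator are assumptions, and the actual model computes its three properties via SVR and statistical analysis of training samples.

```python
import numpy as np

def normalize(m):
    # Rescale a property map to [0, 1] so maps are comparable.
    m = m - m.min()
    peak = m.max()
    return m / peak if peak > 0 else m

def combine_saliency(feature_prior, position_prior, feature_dist):
    # Pixelwise 'intersection' of the three property maps:
    # a pixel scores high only if every map scores it high.
    maps = [normalize(m) for m in (feature_prior, position_prior, feature_dist)]
    return np.minimum.reduce(maps)
```

A product of the normalized maps would be another reasonable reading of "intersection"; both suppress locations that any single property rejects.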
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
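The abstract does not disclose the displacement algorithm itself, so the following is only a generic stand-in: estimating an integer-pixel 2D shift between two frames by phase correlation, a classical technique that likewise needs no camera calibration. The function name and frame sizes are illustrative.

```python
import numpy as np

def displacement_2d(frame_a, frame_b):
    # Phase correlation: whiten the cross-power spectrum so its
    # inverse FFT is a sharp peak at the translation between frames.
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fb * np.conj(Fa)
    cross /= np.abs(cross) + 1e-12          # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    if dy > h // 2:                          # unwrap negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Sub-pixel accuracy, as reported in the paper (about 3.5% average error), would require interpolating around the correlation peak rather than taking the integer argmax.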
Nematzadeh, Nasim; Powers, David M W; Lewis, Trent W
2017-12-01
Why does our visual system fail to reconstruct reality when we look at certain patterns? Where do Geometrical illusions start to emerge in the visual pathway? How far can computational models of vision go in exhibiting the same susceptibility to illusions that we do? This study addresses these questions by focusing on a specific underlying neural mechanism involved in our visual experiences that affects our final perception. Among many types of visual illusion, 'Geometrical' and, in particular, 'Tilt Illusions' are rather important, being characterized by misperception of geometric patterns involving lines and tiles in combination with contrasting orientation, size or position. Over the last decade, many new neurophysiological experiments have led to new insights as to how, when and where retinal processing takes place, and the encoding nature of the retinal representation that is sent to the cortex for further processing. Based on these neurobiological discoveries, we provide computer simulation evidence from modelling retinal ganglion cell responses to some complex Tilt Illusions, suggesting that the emergence of tilt in these illusions is partially related to the interaction of multiscale visual processing performed in the retina. The output of our low-level filtering model is presented for several types of Tilt Illusion, predicting that the final tilt percept arises from multiple-scale processing of the Differences of Gaussians and the perceptual interaction of foreground and background elements. The model is a variation of the classical receptive field implementation for simple cells in early stages of vision, with the scales tuned to the object/texture sizes in the pattern. Our results suggest that this model has high potential for revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as 'Anchoring theory' and 'Perceptual grouping'.
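The model's core operation, multiscale Differences of Gaussians (DoG) with scales tuned to the sizes of pattern elements, can be sketched as follows. This is a minimal illustration of DoG center-surround filtering, not the authors' implementation; the sigma values and surround ratio are placeholder parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center, surround_ratio=2.0):
    # Center-surround receptive field of a retinal ganglion cell at
    # one scale: a narrow excitatory Gaussian minus a wider
    # inhibitory one.
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, sigma_center * surround_ratio)
    return center - surround

def multiscale_dog(image, sigmas=(1.0, 2.0, 4.0)):
    # Stack of DoG responses at several scales; in the paper's model
    # the scales are tuned to the tile/line sizes in the pattern.
    return np.stack([dog_response(image, s) for s in sigmas])
```

A uniform region produces no response, while edges and tile borders do, which is the filtering behavior the model uses to predict where tilt percepts emerge.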
2013 Progress Report -- DOE Joint Genome Institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-11-01
In October 2012, we introduced a 10-Year Strategic Vision [http://bit.ly/JGI-Vision] for the Institute. A central focus of this Strategic Vision is to bridge the gap between sequenced genomes and an understanding of biological functions at the organism and ecosystem level. This involves the continued massive-scale generation of sequence data, complemented by orthogonal new capabilities to functionally annotate these large sequence data sets. Our Strategic Vision lays out a path to guide our decisions and ensure that the evolving set of experimental and computational capabilities available to DOE JGI users will continue to enable groundbreaking science.
European starlings use their acute vision to check on feline predators but not on conspecifics
Fernández-Juricic, Esteban
2018-01-01
Head movements allow birds with laterally placed eyes to move their centers of acute vision around and align them with objects of interest. Consequently, head movements have been used as an indicator of fixation behavior (where gaze is maintained). However, studies on head movement behavior have not elucidated the degree to which birds use high-acuity or low-acuity vision. We studied how European starlings (Sturnus vulgaris) used high-acuity vision in the early stages of visual exploration of a stuffed cat (common terrestrial predator), a taxidermy Cooper's hawk (common aerial predator), and a stuffed study skin of a conspecific. We found that starlings tended to use their high-acuity vision when looking at predators, particularly the cat, at above-chance levels. However, when they viewed a conspecific, they used high-acuity vision at levels expected by chance. We did not observe a preference for the left or right center of acute vision. Our findings suggest that starlings exposed to a predator (particularly cats) may employ selective attention by using high-acuity vision to quickly obtain detailed information useful for a potential escape, but when exposed to a social context may use divided attention by allocating similar levels of high- and low-acuity vision to monitor both conspecifics and the rest of the environment. PMID:29370164
Human body segmentation via data-driven graph cut.
Li, Shifeng; Lu, Huchuan; Shao, Xingqing
2014-11-01
Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior knowledge learning, with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is to first exploit human kinematics to search for body part candidates via dynamic programming, providing high-level evidence, and then to use body part classifiers to obtain bottom-up cues of human body distribution as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experimental results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.
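The graph cut machinery the paper builds on can be illustrated on a toy 1-D strip of pixels: unary (data) costs become capacities on edges to the source and sink terminals, pairwise smoothness costs become capacities between neighboring pixels, and an s-t minimum cut yields the binary labeling. This sketch uses networkx and invented cost values; the paper's actual construction over 2-D images, with unaries derived from pose search and body part classifiers, is considerably more elaborate.

```python
import networkx as nx

def segment_strip(fg_cost, bg_cost, smooth=1.0):
    """Binary segmentation of a 1-D pixel strip by s-t minimum cut.

    fg_cost[i] / bg_cost[i]: penalty for labeling pixel i
    foreground / background. Returns a list of booleans (True = fg).
    """
    n = len(fg_cost)
    g = nx.DiGraph()
    for i in range(n):
        # Cutting s->i leaves i on the sink side: pay the background cost.
        g.add_edge('s', i, capacity=bg_cost[i])
        # Cutting i->t leaves i on the source side: pay the foreground cost.
        g.add_edge(i, 't', capacity=fg_cost[i])
        if i + 1 < n:
            # Smoothness term: penalize a label change between neighbors.
            g.add_edge(i, i + 1, capacity=smooth)
            g.add_edge(i + 1, i, capacity=smooth)
    _, (source_side, _) = nx.minimum_cut(g, 's', 't')
    return [i in source_side for i in range(n)]
```

The minimum cut globally minimizes the sum of unary costs plus smoothness penalties at label boundaries, which is why graph cuts are a standard optimizer for this kind of energy.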
Hamade, Noura; Hodge, William G; Rakibuz-Zaman, Muhammad; Malvankar-Mehta, Monali S
2016-01-01
Age-related macular degeneration (AMD) is a progressive eye disease that, as of 2015, has affected 11 million people in the U.S. and 1.5 million in Canada, causing central vision blindness. By 2050, this number is expected to double to 22 million. Eccentric vision is the target of low-vision rehabilitation aids and programs for patients with AMD, which are thought to improve functional performance by improving reading speed and reducing depression. This study evaluates the effect of various low-vision rehabilitation strategies on reading speed and depression in patients 55 and older with AMD. Computer databases including MEDLINE (OVID), EMBASE (OVID), BIOSIS Previews (Thomson-Reuters), CINAHL (EBSCO), Health Economic Evaluations Database (HEED), ISI Web of Science (Thomson-Reuters) and the Cochrane Library (Wiley) were searched from the year 2000 to January 2015. Included papers were research studies with a sample size of 20 eyes or greater focused on AMD in adults aged 55 or older with low vision (20/60 or lower). Two independent reviewers screened and extracted relevant data from the included articles. Standardized mean difference (SMD) was chosen as an effect size to perform meta-analysis using STATA. Fixed- and random-effect models were developed based on heterogeneity. The main outcome measures were reading speed and depression scores. A total of 9 studies (885 subjects) were included. Overall, a significant improvement in reading speed was found with an SMD of 1.01 [95% CI: 0.05 to 1.97]. Low-vision rehabilitation strategies, including micro-perimetric biofeedback and a microscope teaching program, significantly improved reading speed. Eccentric viewing training showed the maximum improvement in reading speed. In addition, a non-significant improvement in depression scores was found with an SMD of -0.44 [95% CI: -0.96 to 0.09]. A considerable amount of research is required in the area of low-vision rehabilitation strategies for patients with AMD.
Based on current research, low-vision rehabilitation aids improve reading speed. However, they do not have a significant effect on depression scores in those 55 and older with AMD.
Thirty Years After Marr's Vision: Levels of Analysis in Cognitive Science.
Peebles, David; Cooper, Richard P
2015-04-01
Thirty years after the publication of Marr's seminal book Vision (Marr, 1982) the papers in this topic consider the contemporary status of his influential conception of three distinct levels of analysis for information-processing systems, and in particular the role of the algorithmic and representational level with its cognitive-level concepts. This level has (either implicitly or explicitly) been downplayed or eliminated both by reductionist neuroscience approaches from below that seek to account for behavior from the implementation level and by Bayesian approaches from above that seek to account for behavior in purely computational-level terms. Copyright © 2015 Cognitive Science Society, Inc.
Computer vision syndrome: a review.
Blehm, Clayton; Vishnu, Seema; Khattak, Ashbala; Mitra, Shrabanee; Yee, Richard W
2005-01-01
As computers become part of our everyday life, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and the treatment modalities that are available at this time. Computer vision syndrome symptoms may have ocular (ocular-surface abnormalities or accommodative spasms) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of the computer monitor and regular work breaks may help improve visual comfort. Lubricating eye drops and special computer glasses help relieve ocular surface-related symptoms. More work needs to be done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes.
PIFEX: An advanced programmable pipelined-image processor
NASA Technical Reports Server (NTRS)
Gennery, D. B.; Wilcox, B.
1985-01-01
PIFEX is a pipelined-image processor being built in the JPL Robotics Lab. It will operate on digitized raster-scanned images (at 60 frames per second for images up to about 300 by 400 and at lesser rates for larger images), performing a variety of operations simultaneously under program control. It thus is a powerful, flexible tool for image processing and low-level computer vision. It also has applications in other two-dimensional problems such as route planning for obstacle avoidance and the numerical solution of two-dimensional partial differential equations (although its low numerical precision limits its use in the latter field). The concept and design of PIFEX are described herein, and some examples of its use are given.
Quaternions in computer vision and robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pervin, E.; Webb, J.A.
1982-01-01
Computer vision and robotics suffer from not having good tools for manipulating three-dimensional objects. Vectors, coordinate geometry, and trigonometry all have deficiencies. Quaternions can be used to solve many of these problems. Many properties of quaternions that are relevant to computer vision and robotics are developed. Examples are given showing how quaternions can be used to simplify derivations in computer vision and robotics.
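As a concrete illustration of the kind of simplification the abstract points to, rotating a 3D vector by a unit quaternion reduces to the sandwich product q v q*. The sketch below is a minimal pure-Python version (the function names and (w, x, y, z) tuple layout are illustrative choices, not taken from the paper):

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    # Rotate vector v about a unit axis by angle (radians) via q v q*
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    q = (c, axis[0]*s, axis[1]*s, axis[2]*s)
    qc = (q[0], -q[1], -q[2], -q[3])  # conjugate = inverse for a unit quaternion
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), qc)
    return (x, y, z)

# A 90-degree rotation of the x axis about z yields approximately (0, 1, 0)
print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))
```

Unlike rotation matrices, the quaternion form composes with a single 4-component product and avoids trigonometric ambiguities, which is the convenience the paper exploits for derivations.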
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
Benchmarking neuromorphic vision: lessons learnt from computer vision
Tan, Cheston; Lallee, Stephane; Orchard, Garrick
2015-01-01
Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120
A robust color image fusion for low light level and infrared images
NASA Astrophysics Data System (ADS)
Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang
2016-09-01
Low light level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image pop out in more intense colors, to render background details with a color appearance close to nature, and to improve target discovery, detection, and identification. Low light level images, however, contain strong noise under low illumination, and existing color fusion methods are easily affected by noise in the low light level channel. To be explicit, when the low light level image noise is very large, the quality of the fused image decreases significantly, and targets in the infrared image may even be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low light level image are introduced into the fusion process, improving the robustness of the color fusion. The fusion results remain very good in low-light situations, which shows that this method can effectively improve the quality of fused low light level and infrared images under low illumination conditions.
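The paper's specific noise evaluation parameters are not given in the abstract, so the sketch below is an assumption-laden illustration of the general idea only: estimate noise in the low light level channel and down-weight that channel in the fusion as the estimate grows, so infrared targets are not submerged.

```python
def estimate_noise(img):
    # Crude noise proxy (an assumption, not the paper's metric):
    # mean absolute difference between horizontal neighbour pixels.
    diffs = [abs(row[i + 1] - row[i]) for row in img for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def fuse(lll, ir, k=0.1):
    # Blend low light level (lll) and infrared (ir) images per pixel,
    # weighting the low light level channel down as its noise grows.
    # The gain k is an illustrative constant.
    w = 1.0 / (1.0 + k * estimate_noise(lll))
    return [[w * a + (1 - w) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(lll, ir)]
```

With a noiseless low light level channel the weight is 1 and the fused image equals the low light level image; as noise grows, the infrared channel progressively dominates.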
Drew, Mark S.
2016-01-01
Cutaneous melanoma is the most life-threatening form of skin cancer. Although advanced melanoma is often considered as incurable, if detected and excised early, the prognosis is promising. Today, clinicians use computer vision in an increasing number of applications to aid early detection of melanoma through dermatological image analysis (dermoscopy images, in particular). Colour assessment is essential for the clinical diagnosis of skin cancers. Due to this diagnostic importance, many studies have either focused on or employed colour features as a constituent part of their skin lesion analysis systems. These studies range from using low-level colour features, such as simple statistical measures of colours occurring in the lesion, to availing themselves of high-level semantic features such as the presence of blue-white veil, globules, or colour variegation in the lesion. This paper provides a retrospective survey and critical analysis of contributions in this research direction. PMID:28096807
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery
2016-09-27
The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world applications. Keywords: signal recovery, sparse learning, subspace modeling.
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
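For illustration, a Bezier curve of the kind used in such approximation networks can be evaluated with De Casteljau's algorithm; in the self-calibration scheme described above, such a curve would map laser line position to a vision parameter. The control points below are illustrative values, not data from the paper:

```python
def bezier(control, t):
    # De Casteljau evaluation of a 1D Bezier curve at parameter t in [0, 1].
    # Repeated linear interpolation collapses the control polygon to a point.
    pts = list(control)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# Midpoint of a quadratic curve with control points 0, 2, 4
print(bezier([0.0, 2.0, 4.0], 0.5))  # 2.0
```

Fitting the control points to observed laser line positions (e.g., with a genetic algorithm, as the paper does) then yields a smooth, re-computable representation of the calibration parameters.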
Effects of vision on head-putter coordination in golf.
Gonzalez, David Antonio; Kegel, Stefan; Ishikura, Tadao; Lee, Tim
2012-07-01
Low-skill golfers coordinate the movements of their head and putter with an allocentric, isodirectional coupling, which is opposite to the allocentric, antidirectional coordination pattern used by experts (Lee, Ishikura, Kegel, Gonzalez, & Passmore, 2008). The present study investigated the effects of four vision conditions (full vision, no vision, target focus, and ball focus) on head-putter coupling in low-skill golfers. Performance in the absence of vision resulted in a level of high isodirectional coupling similar to that in the full vision condition. However, when instructed to focus on the target during the putt, or to focus on the ball through a restricted viewing angle, low-skill golfers significantly decoupled the head-putter coordination pattern. Outcome measures nevertheless demonstrated that target focus resulted in poorer performance than the other visual conditions, thereby providing overall support for a ball focus strategy to enhance both coordination and outcome performance. Focus of attention and reduced visual tracking were hypothesized as potential reasons for the decoupling.
Computer vision techniques for rotorcraft low altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar
1990-01-01
Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top level whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task, and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles presents challenging problems. Research is described which applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.
Piaget's Water-Level Task: The Impact of Vision on Performance
ERIC Educational Resources Information Center
Papadopoulos, Konstantinos; Koustriava, Eleni
2011-01-01
In the present study, the aim was to examine the differences in performance between children and adolescents with visual impairment and sighted peers in the water-level task. Twenty-eight individuals with visual impairments, 14 individuals with blindness and 14 individuals with low vision, and 28 sighted individuals participated in the present…
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.
The research of edge extraction and target recognition based on inherent feature of objects
NASA Astrophysics Data System (ADS)
Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo
2008-03-01
Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on inherent features of objects, in which a cuboid was taken as the model. On the basis of analyzing the cuboid's natural contour and gray-level distribution characteristics, an overall fuzzy evaluation technique was utilized to recognize and segment the target. Then the Hough transform was used to extract and match the model's main edges, and in the end we reconstructed the target edges by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field were summed up. With these, the aimless computations and searches in Hough transform processing can be reduced greatly and efficiency is improved. Secondly, as the prior knowledge about the cuboid contour's geometric character is known already, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed on the basis of these intersections rather than the extracted edges themselves. Therefore the outlines are enhanced and the noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision.
The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, and robotics. The results of simulation experiments and theoretical analysis demonstrate that the proposed method can suppress noise effectively, extract target edges robustly, and meet real-time requirements. Theoretical analysis and experiments show the method is reasonable and efficient.
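The paper's first contribution narrows the Hough search using prior knowledge of the cuboid model's edges. The plain transform it builds on can be sketched as a small accumulator over (rho, theta) line parameters; this is an illustrative sketch, not the authors' implementation:

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    # Minimal Hough accumulator for straight lines, parameterized as
    # rho = x*cos(theta) + y*sin(theta). Each edge point votes for every
    # (rho, theta index) bin it is consistent with; the peak bin is the
    # dominant line. Model priors, as in the paper, would restrict the
    # (rho, theta) range searched and cut this cost further.
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            t = math.pi * i / n_theta
            rho = round((x * math.cos(t) + y * math.sin(t)) / rho_res)
            acc[(rho, i)] = acc.get((rho, i), 0) + 1
    return max(acc, key=acc.get)

# Edge points on the vertical line x = 5 peak at rho = 5, theta index 0
print(hough_lines([(5, y) for y in range(20)]))
```

Restricting the theta range to the orientations allowed by the cuboid model turns the inner loop over all angles into a handful of candidate bins, which is the efficiency gain the abstract describes.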
ERIC Educational Resources Information Center
Beal, Carole R.; Rosenblum, L. Penny
2018-01-01
Introduction: The authors examined a tablet computer application (iPad app) for its effectiveness in helping students studying prealgebra to solve mathematical word problems. Methods: Forty-three visually impaired students (that is, those who are blind or have low vision) completed eight alternating mathematics units presented using their…
ERIC Educational Resources Information Center
Fichten, Catherine S.; Asuncion, Jennison V.; Barile, Maria; Ferraro, Vittoria; Wolforth, Joan
2009-01-01
This article presents the results of two studies on the accessibility of e-learning materials and other information and computer and communication technologies for 143 Canadian college and university students with low vision and 29 who were blind. It offers recommendations for enhancing access, creating new learning opportunities, and eliminating…
The Use of Spatialized Speech in Auditory Interfaces for Computer Users Who Are Visually Impaired
ERIC Educational Resources Information Center
Sodnik, Jaka; Jakus, Grega; Tomazic, Saso
2012-01-01
Introduction: This article reports on a study that explored the benefits and drawbacks of using spatially positioned synthesized speech in auditory interfaces for computer users who are visually impaired (that is, are blind or have low vision). The study was a practical application of such systems--an enhanced word processing application compared…
Machine Vision For Industrial Control:The Unsung Opportunity
NASA Astrophysics Data System (ADS)
Falkman, Gerald A.; Murray, Lawrence A.; Cooper, James E.
1984-05-01
Vision modules have primarily been developed to relieve those pressures newly brought into existence by Inspection (QUALITY) and Robotic (PRODUCTIVITY) mandates. Industrial control pressure, on the other hand, stems from the older, first-industrial-revolution mandate of throughput. Satisfying such pressure calls for speed in both imaging and decision making. Vision companies have, however, put speed on a back burner or ignored it entirely, because most modules are computer/software based, which limits their speed potential. Increasingly, the keynote being struck at machine vision seminars is that "Visual and Computational Speed Must Be Increased and Dramatically!" There are modular hardwired-logic systems that are fast but, all too often, they are not very bright. Such units: Measure the fill factor of bottles as they spin by, Read labels on cans, Count stacked plastic cups or Monitor the width of parts streaming past the camera. Many are only a bit more complex than a photodetector. Once in place, most of these units are incapable of simple upgrading to a new task and are Vision's analog to the robot industry's pick and place (RIA TYPE E) robot. Vision thus finds itself amidst the same quandaries that once beset the Robot Industry of America when it tried to define a robot, excluded dumb ones, and was left with only slow machines whose unit volume potential is shatteringly low. This paper develops an approach to meeting the need for a vision system that cuts a swath into the terra incognita of intelligent, high-speed vision processing. Main attention is directed to vision for industrial control. Some presently untapped vision application areas that will be serviced include: Electronics, Food, Sports, Pharmaceuticals, Machine Tools and Arc Welding.
Remote sensing of vegetation structure using computer vision
NASA Astrophysics Data System (ADS)
Dandois, Jonathan P.
High-spatial resolution measurements of vegetation structure are needed for improving understanding of ecosystem carbon, water and nutrient dynamics, the response of ecosystems to a changing climate, and for biodiversity mapping and conservation, among many research areas. Our ability to make such measurements has been greatly enhanced by continuing developments in remote sensing technology---allowing researchers the ability to measure numerous forest traits at varying spatial and temporal scales and over large spatial extents with minimal to no field work, which is costly for large spatial areas or logistically difficult in some locations. Despite these advances, there remain several research challenges related to the methods by which three-dimensional (3D) and spectral datasets are joined (remote sensing fusion) and the availability and portability of systems for frequent data collections at small scale sampling locations. Recent advances in the areas of computer vision structure from motion (SFM) and consumer unmanned aerial systems (UAS) offer the potential to address these challenges by enabling repeatable measurements of vegetation structural and spectral traits at the scale of individual trees. However, the potential advances offered by computer vision remote sensing also present unique challenges and questions that need to be addressed before this approach can be used to improve understanding of forest ecosystems. For computer vision remote sensing to be a valuable tool for studying forests, bounding information about the characteristics of the data produced by the system will help researchers understand and interpret results in the context of the forest being studied and of other remote sensing techniques. This research advances understanding of how forest canopy and tree 3D structure and color are accurately measured by a relatively low-cost and portable computer vision personal remote sensing system: 'Ecosynth'. 
Recommendations are made for optimal conditions under which forest structure measurements should be obtained with UAS-SFM remote sensing. Ultimately remote sensing of vegetation by computer vision offers the potential to provide an 'ecologist's eye view', capturing not only canopy 3D and spectral properties, but also seeing the trees in the forest and the leaves on the trees.
Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine
2016-05-11
The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? 
We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors.
Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications
NASA Astrophysics Data System (ADS)
Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon
1997-04-01
A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multidimensional array of annealing neurons which are locally connected with their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. A compact current-mode VLSI design feasibility of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.
Volumetric segmentation of range images for printed circuit board inspection
NASA Astrophysics Data System (ADS)
Van Dop, Erik R.; Regtien, Paul P. L.
1996-10-01
Conventional computer vision approaches towards object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence, these images contain information about projections of a 3D scene only. The subsequent image processing will then be difficult, because the object coordinates are represented with just image coordinates. Only complicated low-level vision modules like depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have, however, paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with 5 parameters, which provide the main features for object recognition. Besides, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false, or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful for recognizing and extracting valuable or toxic electronic components from printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images, with errors constructed according to a verified noise model, illustrate the capabilities of this approach.
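The 5 parameters mentioned correspond to the standard superquadric model: three semi-axis lengths plus two shape exponents. Its textbook inside-outside function, which underlies fitting and segmentation of this kind, can be sketched as follows (this is the general formulation, not code from the article):

```python
def superquadric_io(p, a=(1.0, 1.0, 1.0), e=(1.0, 1.0)):
    # Inside-outside function F of a superquadric with semi-axes a = (a1, a2, a3)
    # and shape exponents e = (e1, e2): F < 1 inside the surface, F = 1 on it,
    # F > 1 outside. With e = (1, 1) this reduces to an ellipsoid.
    x, y, z = (abs(c) / s for c, s in zip(p, a))
    e1, e2 = e
    return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

# Unit sphere: the point (1, 0, 0) lies exactly on the surface
print(superquadric_io((1.0, 0.0, 0.0)))  # 1.0
```

Fitting the five parameters to range data (by minimizing a residual built from F over the measured points) yields both the recognition features and, via the principal axes, the pose information described in the abstract.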
Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin
2017-01-01
The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950
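The adaptive-ROI idea above can be illustrated with a deliberately simplified stand-in for the paper's algorithm (the threshold-centroid tracker below is an assumption for illustration; the actual image processing is more elaborate): locate the marker inside the current region of interest, then re-center the ROI on it for the next frame so the marker stays in view as the structure moves.

```python
def track(frames, roi, half=5):
    # Adaptive-ROI centroid tracker. `frames` is a list of 2D intensity
    # grids, `roi` the initial (x, y) ROI center, `half` the ROI half-width.
    # Each frame: find bright pixels inside the ROI, take their centroid,
    # and re-center the ROI there for the next frame.
    cx, cy = roi
    path = []
    for frame in frames:
        pts = [(x, y)
               for y in range(max(cy - half, 0), min(cy + half + 1, len(frame)))
               for x in range(max(cx - half, 0), min(cx + half + 1, len(frame[0])))
               if frame[y][x] > 0]
        cx = sum(x for x, _ in pts) // len(pts)
        cy = sum(y for _, y in pts) // len(pts)
        path.append((cx, cy))
    return path
```

Converting the tracked pixel path to physical displacement then only requires a scale factor from the known marker size, which is the usual final step in vision-based displacement measurement.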
Fully convolutional network with cluster for semantic segmentation
NASA Astrophysics Data System (ADS)
Ma, Xiao; Chen, Zhongbi; Zhang, Jianlin
2018-04-01
At present, image semantic segmentation has been an active research topic for scientists in the fields of computer vision and artificial intelligence. In particular, the extensive research on deep neural networks for image recognition has greatly promoted the development of semantic segmentation. This paper puts forward a method based on a fully convolutional network combined with the k-means clustering algorithm. The clustering algorithm uses the image's low-level features and initializes the cluster centers from a super-pixel segmentation; within each cluster region, the set of points with low reliability, which are likely to be misclassified, is corrected using the set of points with high reliability. This method refines the segmentation of the target contour and improves the accuracy of the image segmentation.
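The clustering stage can be illustrated with a plain k-means pass over 2D feature points. Note the assumption: in the paper's pipeline the centers are initialized from a super-pixel segmentation, whereas this sketch initializes them at random for brevity.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Plain k-means on 2D points: alternate between assigning each point
    # to its nearest center and recomputing each center as its cluster mean.
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centres[j][0]) ** 2
                                + (p[1] - centres[j][1]) ** 2)
            clusters[i].append(p)
        centres = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                   if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres
```

In the refinement scheme described above, low-reliability pixels would then be reassigned to the label that dominates the high-reliability points of their cluster, smoothing the network's contour predictions.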
The Social Lives of Canadian Youths with Visual Impairments
ERIC Educational Resources Information Center
Gold, Deborah; Shaw, Alexander; Wolffe, Karen
2010-01-01
This survey of the social and leisure experiences of Canadian youths with visual impairments found that, in general, youths with low vision experienced more social challenges than did their peers who were blind. Levels of social support were not found to differ on the basis of level of vision, sex, or age. (Contains 1 figure and 1 table.)
Computer vision in the poultry industry
USDA-ARS?s Scientific Manuscript database
Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...
[Comparison study between biological vision and computer vision].
Liu, W; Yuan, X G; Yang, C X; Liu, Z Q; Wang, R
2001-08-01
The development and significance of biological vision, in terms of both structure and mechanism, are discussed, covering the anatomical structure of biological vision, a tentative classification of receptive fields, parallel processing of visual information, and feedback and integration effects in the visual cortex. New advances in the field, drawn from studies of the morphology of biological vision, are introduced. In addition, a comparison between biological vision and computer vision is made, and their similarities and differences are pointed out.
The Print and Computer Enlargement System--PACE. Final Report.
ERIC Educational Resources Information Center
Morford, Ronald A.
The Print and Computer Enlargement (PACE) System is being designed as a portable computerized reading and writing system that enables a low-vision person to read regular print and then create and edit text using large-print computerized output. The design goal was to develop a system that: weighed no more than 12 pounds so it could be easily…
Normative values for a tablet computer-based application to assess chromatic contrast sensitivity.
Bodduluri, Lakshmi; Boon, Mei Ying; Ryan, Malcolm; Dain, Stephen J
2018-04-01
Tablet computer displays are amenable for the development of vision tests in a portable form. Assessing color vision using an easily accessible and portable test may help in the self-monitoring of vision-related changes in ocular/systemic conditions and assist in the early detection of disease processes. Tablet computer-based games were developed with different levels of gamification as a more portable option to assess chromatic contrast sensitivity. Game 1 was designed as a clinical version with no gaming elements. Game 2 was a gamified version of game 1 (added fun elements: feedback, scores, and sounds) and game 3 was a complete game with vision task nested within. The current study aimed to determine the normative values and evaluate repeatability of the tablet computer-based games in comparison with an established test, the Cambridge Colour Test (CCT) Trivector test. Normally sighted individuals [N = 100, median (range) age 19.0 years (18-56 years)] had their chromatic contrast sensitivity evaluated binocularly using the three games and the CCT. Games 1 and 2 and the CCT showed similar absolute thresholds and tolerance intervals, and game 3 had significantly lower values than games 1, 2, and the CCT, due to visual task differences. With the exception of game 3 for blue-yellow, the CCT and tablet computer-based games showed similar repeatability with comparable 95% limits of agreement. The custom-designed games are portable, rapid, and may find application in routine clinical practice, especially for testing younger populations.
Convolutional networks for fast, energy-efficient neuromorphic computing
Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.
2016-01-01
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489
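The quoted efficiency figure can be sanity-checked with simple arithmetic; the pairing of throughput and power extremes below is my assumption, not stated in the abstract.

```python
# Sanity-check the quoted efficiency: at the reported operating points,
# frames/s divided by power (W) should exceed 6,000 frames/s per Watt.
fps_low, fps_high = 1200, 2600
power_low_w, power_high_w = 0.025, 0.275   # 25 mW and 275 mW

# Assumed pairing: highest throughput with highest power, and vice versa.
eff_high_power = fps_high / power_high_w   # ~9,455 frames/s per Watt
eff_low_power = fps_low / power_low_w      # 48,000 frames/s per Watt
```

Either pairing comfortably clears the ">6,000 frames/s per Watt" claim.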
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. The solutions of these tasks are then used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account the special aspects of their subsequent application in model-based predictive control. Reinforcement learning is a modern machine learning technology in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.
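The learning-through-interaction idea can be sketched with tabular Q-learning on a toy task; the 1-D "track" and its state encoding are hypothetical, standing in for the output of a vision module (e.g., a localized object position).

```python
import random

# Minimal tabular Q-learning sketch: an agent on a 1-D track of 5 cells
# learns that moving right reaches the goal. Updates follow the standard
# Q-learning rule; the behavior policy here is purely random (off-policy).
random.seed(0)
n_states, actions = 5, (-1, +1)          # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma = 0.5, 0.9

for _ in range(300):                     # episodes
    s = 0
    for _ in range(100):                 # step cap per episode
        a = random.choice(actions)
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2
        if s == n_states - 1:
            break

# After training, "move right" should dominate in every non-goal state.
```

In the vision settings the paper surveys, the tabular state would be replaced by learned image features and the table by a function approximator.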
USAF Summer Faculty Research Program. 1981 Research Reports. Volume I.
1981-10-01
Kent, OH 44242 (216) 672-2816 Dr. Martin D. Altschuler Degree: PhD, Physics and Astronomy, 1964 Associate Professor Specialty: Robot Vision, Surface...line inspection and control, computer- aided manufacturing, robot vision, mapping of machine parts and castings, etc. The technique we developed...posture, reduced healing time and bacteria level, and improved capacity for work endurance and efficiency. 1 ,2 Federal agencies, such as the FDA and
The use of contact lenses in low vision rehabilitation: optical and therapeutic applications.
Vincent, Stephen J
2017-09-01
Ocular pathology that manifests at an early age has the potential to alter the vision-dependent emmetropisation mechanism, which co-ordinates ocular growth throughout childhood. The disruption of this feedback mechanism in children with congenital or early-onset visual impairment often results in the development of significant ametropia, including high levels of spherical refractive error, astigmatism and anisometropia. This review examines the use of contact lenses as a refractive correction, low vision aid and therapeutic intervention in the rehabilitation of patients with bilateral, irreversible visual loss due to congenital ocular disease. The advantages and disadvantages of the use of contact lenses for increased magnification (telescopes and microscopes) or field expansion (reverse telescopes) are discussed, along with the benefits and practical considerations for the correction of pathological high myopia. The historical and present use of therapeutic tinted contact lenses to reduce photosensitivity and nystagmus in achromatopsia, albinism and aniridia are also presented, including clinical considerations for the contact lens practitioner. In addition to the known optical benefits in comparison to spectacles for high levels of ametropia (an improved field of view for myopes and fewer inherent oblique aberrations), contact lenses may be of significant psycho-social benefit for patients with low vision, due to enhanced cosmesis and reduced conspicuity and potential related effects of improved self-esteem and peer acceptance. The contact lens correction of patients with congenital vision impairment can be challenging for both practitioner and patient but should be considered as a potential optical or therapeutic solution in modern low vision rehabilitation. © 2017 Optometry Australia.
2015-08-21
using the Open Computer Vision ( OpenCV ) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the...depth. The software application calibrates the cameras using the plane based calibration model from the OpenCV calib3D module and allows the...6] OpenCV . 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home
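The plane-based calibration model mentioned above rests on the fact that points on a planar target and their image projections are related by a 3x3 homography. The sketch below estimates such a homography with the direct linear transform (DLT) on synthetic data; it illustrates the underlying geometry only and is not the OpenCV calib3d implementation.

```python
import numpy as np

# DLT homography estimation: each point pair contributes two linear
# constraints on the 9 entries of H; the SVD null vector is the solution.
def estimate_homography(src, dst):
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

# Synthetic check: project plane points through a known H, then recover it.
H_true = np.array([[1.2, 0.1, 5.0], [-0.2, 0.9, 3.0], [1e-3, 2e-3, 1.0]])
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
pts = np.array([[x, y, 1.0] for x, y in src]).T
proj = H_true @ pts
dst = (proj[:2] / proj[2]).T

H_est = estimate_homography(src, dst)
H_est /= H_est[2, 2]     # H is defined only up to scale
```

A full calibration would collect several such homographies from different target poses to solve for the camera intrinsics.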
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system, and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.
Manifold learning in machine vision and robotics
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-02-01
Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classifying image content, navigating mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet this model as a rule. The use of the manifold learning technique in machine vision and robotics, which discovers a low-dimensional structure in high-dimensional data and results in effective algorithms for solving a large number of various subject-oriented tasks, is the subject of the conference plenary speech, some topics of which are covered in this paper.
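The low-intrinsic-dimensionality premise can be demonstrated on synthetic data: points on a 2-D plane embedded in a 10-D observation space reveal their intrinsic dimension in the covariance spectrum. This PCA-based sketch is an illustration of the premise, not of any particular manifold learning algorithm from the talk.

```python
import numpy as np

# Points with 2-D intrinsic coordinates, linearly embedded in 10-D with
# tiny noise: only two covariance eigenvalues are significant.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))             # 2-D intrinsic coordinates
embed = rng.normal(size=(2, 10))               # fixed linear embedding
data = latent @ embed + 1e-6 * rng.normal(size=(500, 10))

data -= data.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(data.T))[::-1]   # descending order
intrinsic_dim = int(np.sum(eigvals > 1e-3 * eigvals[0]))   # expect 2
```

Nonlinear manifold learning methods generalize this idea to curved surfaces, where a single global linear projection no longer suffices.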
Eye-related pain induced by visually demanding computer work.
Thorud, Hanne-Mari Schiøtz; Helland, Magne; Aarås, Arne; Kvikstad, Tor Martin; Lindberg, Lars Göran; Horgen, Gunnar
2012-04-01
Eye strain during visually demanding computer work may include glare and increased squinting. The latter may be related to elevated tension in the orbicularis oculi muscle and the development of muscle pain. The aim of the study was to investigate the development of discomfort symptoms in relation to muscle activity and muscle blood flow in the orbicularis oculi muscle during computer work with visual strain. A group of healthy young adults with normal vision was randomly selected. Eye-related symptoms were recorded during a 2-h working session on a laptop. The participants were exposed to visual stressors such as glare and small font. Muscle load and blood flow were measured by electromyography and photoplethysmography, respectively. During 2 h of visually demanding computer work, there was a significant increase in the following symptoms: eye-related pain and tiredness, blurred vision, itchiness, gritty eyes, photophobia, dry eyes, and tearing eyes. Muscle load in orbicularis oculi was significantly increased above baseline and stable at 1 to 1.5% maximal voluntary contraction during the working sessions. Orbicularis oculi muscle blood flow increased significantly during the first part of the working sessions before returning to baseline. There were significant positive correlations between eye-related tiredness and orbicularis oculi muscle load, and between eye-related pain and muscle blood flow. Subjects who developed eye-related pain showed elevated orbicularis oculi muscle blood flow during computer work, but no differences in muscle load, compared with subjects with minimal pain symptoms. Eye strain during visually demanding computer work is related to the orbicularis oculi muscle. Muscle pain development during demanding, low-force exercise is associated with increased muscle blood flow, possibly secondary to a different muscle activity pattern and/or an increased mental stress level in subjects experiencing pain compared with subjects with minimal pain symptoms.
HRV based health&sport markers using video from the face.
Capdevila, Lluis; Moreno, Jordi; Movellan, Javier; Parrado, Eva; Ramos-Castro, Juan
2012-01-01
Heart rate variability (HRV) is an indicator of health status in the general population and of adaptation to stress in athletes. In this paper we compare the performance of two systems to measure HRV: (1) a commercial system based on recording the physiological cardiac signal, and (2) a computer vision system that uses standard video images of the face to estimate RR intervals from changes in the skin color of the face. We show that the computer vision system performs surprisingly well: it estimates individual RR intervals in a non-invasive manner and with error levels comparable to those achieved by the physiologically based system.
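Once RR intervals are available, from either measurement system, standard HRV markers follow directly. The sketch below computes two common time-domain markers (SDNN and RMSSD) from illustrative RR values; the numbers are made up, not the paper's data.

```python
import math

# Time-domain HRV markers from a sequence of RR intervals in milliseconds.
rr = [812, 790, 805, 840, 818, 795, 830, 808]  # illustrative values

mean_rr = sum(rr) / len(rr)
# SDNN: standard deviation of all RR intervals (overall variability).
sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (len(rr) - 1))
# RMSSD: root mean square of successive differences (short-term variability).
diffs = [b - a for a, b in zip(rr, rr[1:])]
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
mean_hr = 60000.0 / mean_rr    # mean heart rate in beats per minute
```

The comparison in the paper amounts to checking how closely such markers agree when the RR sequence comes from video rather than from the cardiac signal.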
Computer vision for foreign body detection and removal in the food industry
USDA-ARS?s Scientific Manuscript database
Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...
Chapter 11. Quality evaluation of apple by computer vision
USDA-ARS?s Scientific Manuscript database
Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...
Deep Learning for Computer Vision: A Brief Review
Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios
2018-01-01
Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619
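The convolutional layer at the heart of the CNNs reviewed here reduces to a sliding dot product. The sketch below implements a minimal "valid" 2-D convolution (technically cross-correlation, as in most deep learning frameworks) and applies a hand-made edge filter; it is a didactic illustration, not a framework implementation.

```python
import numpy as np

# Minimal "valid" 2-D convolution: slide the kernel over the image and
# take the elementwise product-sum at each position.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal difference filter responds at the boundary of a step image.
image = np.zeros((5, 6))
image[:, 3:] = 1.0                 # dark left half, bright right half
kernel = np.array([[-1.0, 1.0]])   # 1x2 edge-detecting filter
response = conv2d(image, kernel)   # nonzero only along the step edge
```

In a real CNN the kernels are learned rather than hand-made, and many such maps are stacked with nonlinearities in between.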
A computer vision for animal ecology.
Weinstein, Ben G
2018-05-01
A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis. © 2017 The Author. Journal of Animal Ecology © 2017 British Ecological Society.
Goldstein, Judith E; Massof, Robert W; Deremeik, James T; Braudway, Sonya; Jackson, Mary Lou; Kehler, K Bradley; Primo, Susan A; Sunness, Janet S
2012-08-01
To characterize the traits of low vision patients who seek outpatient low vision rehabilitation (LVR) services in the United States. In a prospective observational study, we enrolled 764 new low vision patients seeking outpatient LVR services from 28 clinical centers in the United States. Before their initial appointment, multiple questionnaires assessing daily living and vision, physical, psychological, and cognitive health states were administered by telephone. Baseline clinical visual impairment measures and disorder diagnoses were recorded. Patients had a median age of 77 years, were primarily female (66%), and had macular disease (55%), most of which was nonneovascular age-related macular degeneration. More than one-third of the patients (37%) had mild vision impairment with habitual visual acuity (VA) of 20/60 or greater. The VA correlated well with contrast sensitivity (r = -0.52) but poorly with self-reported vision quality. The intake survey revealed self-reported physical health limitations, including decreased endurance (68%) and mobility problems (52%). Many patients reported increased levels of frustration (42%) and depressed mood (22%); memory and cognitive impairment (11%) were less frequently endorsed. Patients relied on others for daily living support (87%), but many (31%) still drove. Most patients seeking LVR are geriatric and have macular disease with relatively preserved VA. The disparity between VA and subjective quality of vision suggests that LVR referrals are based on symptoms rather than on VA alone. Patients seen for LVR services have significant physical, psychological, and cognitive disorders that can amplify vision disabilities and decrease rehabilitation potential.
Bio-inspired approach for intelligent unattended ground sensors
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre
2015-05-01
Improving the surveillance capacity over wide zones requires a set of smart battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast relevant-event detector. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve the recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify in real-time a huge amount of heterogeneous data thanks to its natively parallel hardware structure. This UGS prototype validates our system approach under laboratory tests. The peripheral analysis module demonstrates a low false alarm rate, whereas the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By locally processing the data and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded Artificial Intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
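The cheap first stage of such a two-level pipeline can be as simple as frame differencing. The sketch below flags motion between two synthetic frames; the thresholding scheme and values are illustrative assumptions, not the prototype's actual peripheral-vision algorithm.

```python
import numpy as np

# Sketch of a "peripheral vision" stage: frame differencing flags moving
# pixels cheaply, and only flagged regions would be handed to the heavier
# "foveal" analysis stage.
def motion_mask(prev, curr, thresh=0.1):
    return np.abs(curr.astype(float) - prev.astype(float)) > thresh

prev = np.zeros((8, 8))        # empty panoramic frame
curr = prev.copy()
curr[2:4, 5:7] = 1.0           # a small object appears

mask = motion_mask(prev, curr)
n_changed = int(mask.sum())    # 4 pixels flagged
event_detected = n_changed > 0 # would trigger the foveal analysis
```

The prototype additionally tracks the flagged regions over time and fuses size, speed, and trajectory cues before raising an event.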
Factors influencing hand/eye synchronicity in the computer age.
Grant, A H
1992-09-01
In using a computer, the relation of vision to hand/finger-actuated keyboard usage in performing fine motor-coordinated functions is influenced by the physical location, size, and collective placement of the keys. Traditional nonprehensile flat/rectangular keyboard applications usually require a high and nearly constant level of visual attention. Biometrically shaped keyboards would allow for prehensile hand-posturing, thus affording better tactile familiarity with the keys, requiring a less intense and less constant level of visual attention to the task, and providing a greater measure of freedom from having to visualize the key(s). Workpace and related physiological changes, aging, the onset of monocularization (intermittent lapsing of binocularity for near vision) that accompanies presbyopia, tool colors, and background contrast are factors affecting constancy of visual attention to task performance. Capitis extension, excessive excyclotorsion, and repetitive strain injuries (such as carpal tunnel syndrome) are common and debilitating concomitants of computer usage. These problems can be remedied by improved keyboard design. The salutary role of mnemonics in minimizing visual dependency is discussed.
Vision, Educational Level, and Empowering Work Relationships.
ERIC Educational Resources Information Center
Johnson, G. M.
1995-01-01
Thirty-one machinists (blind, sighted, and visually impaired) answered questions about trust, resource sharing, and empowerment in work relationships. Employees with low vision were the least trusting and trusted, received the fewest shared resources, and reported proportionately more disempowering relationships. More educated employees saw more…
Emotion improves and impairs early vision.
Bocanegra, Bruno R; Zeelenberg, René
2009-06-01
Recent studies indicate that emotion enhances early vision, but the generality of this finding remains unknown. Do the benefits of emotion extend to all basic aspects of vision, or are they limited in scope? Our results show that the brief presentation of a fearful face, compared with a neutral face, enhances sensitivity for the orientation of subsequently presented low-spatial-frequency stimuli, but diminishes orientation sensitivity for high-spatial-frequency stimuli. This is the first demonstration that emotion not only improves but also impairs low-level vision. The selective low-spatial-frequency benefits are consistent with the idea that emotion enhances magnocellular processing. Additionally, we suggest that the high-spatial-frequency deficits are due to inhibitory interactions between magnocellular and parvocellular pathways. Our results suggest an emotion-induced trade-off in visual processing, rather than a general improvement. This trade-off may benefit perceptual dimensions that are relevant for survival at the expense of those that are less relevant.
Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam
2013-01-01
Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746
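The voting step that the paper parallelizes can be sketched in a few lines: each edge pixel votes for every (theta, rho) line parameterization passing through it, and the accumulator peak recovers the dominant line. The data and discretization below are illustrative, not the paper's FPGA design.

```python
import numpy as np

# Minimal Hough Transform: rho = x*cos(theta) + y*sin(theta) for each
# edge pixel and each quantized angle; votes accumulate in a 2-D array.
edge_pts = [(x, 10) for x in range(20)]        # pixels on the line y = 10

thetas = np.deg2rad(np.arange(180))            # 1-degree angle steps
rho_max = 40                                   # bound on |rho| for this image
acc = np.zeros((2 * rho_max + 1, len(thetas)), dtype=int)
for x, y in edge_pts:
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    acc[rhos + rho_max, np.arange(len(thetas))] += 1

r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
rho_peak = r_idx - rho_max                     # expect rho = 10
theta_peak_deg = float(np.degrees(thetas[t_idx]))  # expect theta near 90
```

The angle loop is embarrassingly parallel, which is exactly the spatial angle-level parallelism the proposed PHT architecture exploits in hardware.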
Machine Learning, deep learning and optimization in computer vision
NASA Astrophysics Data System (ADS)
Canu, Stéphane
2017-03-01
As quoted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning, covering basic motivations, ideas, models and optimization in deep learning for computer vision, and identifying challenges and opportunities. It will focus on issues related to large-scale learning, that is: high-dimensional features, a large variety of visual classes, and a large number of examples.
Impact of Gamification of Vision Tests on the User Experience.
Bodduluri, Lakshmi; Boon, Mei Ying; Ryan, Malcolm; Dain, Stephen J
2017-08-01
Gamification has been incorporated into vision tests and vision therapies in the expectation that it may increase the user experience and engagement with the task. The current study aimed to understand how gamification affects the user experience, specifically during the undertaking of psychophysical tasks designed to estimate vision thresholds (chromatic and achromatic contrast sensitivity). Three tablet computer-based games were developed with three levels of gaming elements. Game 1 was designed to be a simple clinical test (no gaming elements), game 2 was similar to game 1 but with added gaming elements (i.e., feedback, scores, and sounds), and game 3 was a complete game. Participants (N = 144, age: 9.9-42 years) played the three games in random order. The user experience for each game was assessed using a Short Feedback Questionnaire. The median (interquartile range) fun level for the three games was 2.5 (1.6), 3.9 (1.7), and 2.5 (2.8), respectively. Overall, participants reported a greater fun level and higher preparedness to play the game again for game 2 than for games 1 and 3 (p < 0.05). There were significant positive correlations observed between fun level and preparedness to play the game again for all the games (p < 0.05). Engagement (assessed as completion rates) did not differ between the games. The gamified version (game 2) was preferred to the other two versions. Over the short term, the careful application of gaming elements to vision tests was found to increase the fun level of users, without affecting engagement with the vision test.
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
A summary of image segmentation techniques
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly
1993-01-01
Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher level vision tasks. There is no theory on image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized in a number of different groups including local vs. global, parallel vs. sequential, contextual vs. noncontextual, interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview. 
We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available, yet present enough details to facilitate implementation and experimentation.
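The region-based schemes described above start from a seed and grow it over homogeneous neighbors. A minimal sketch of seeded region growing on a grayscale image stored as nested lists (the 4-connectivity and the mean-difference tolerance `tol` are illustrative choices, not taken from the paper):

```python
from collections import deque

def region_grow(image, seed, tol=10):
    """Seeded region growing: starting from `seed`, absorb 4-connected
    neighbors whose gray level differs from the current region mean by at
    most `tol`, until no more pixels qualify."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    region = {(sr, sc)}
    total = image[sr][sc]                  # running sum for the region mean
    frontier = deque([(sr, sc)])
    while frontier:
        r, c = frontier.popleft()
        mean = total / len(region)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    frontier.append((nr, nc))
    return region
```

Splitting (the other half of region-based schemes) works in the opposite direction, recursively subdividing a region until each part satisfies the homogeneity predicate.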
Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.
Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe
2017-09-01
Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment.
These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanisms have been under constant, intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only under low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Evaluation of tablet computers for visual function assessment.
Bodduluri, Lakshmi; Boon, Mei Ying; Dain, Stephen J
2017-04-01
Recent advances in technology and the increased use of tablet computers for mobile health applications such as vision testing necessitate an understanding of the behavior of the displays of such devices, to facilitate the reproduction of existing or the development of new vision assessment tests. The purpose of this study was to investigate the physical characteristics of one model of tablet computer (iPad mini Retina display) with regard to display consistency across a set of devices (15) and their potential application as clinical vision assessment tools. Once the tablet computer was switched on, it required about 13 min to reach luminance stability, while chromaticity remained constant. The luminance output of the device remained stable until a battery level of 5%. Luminance varied from center to peripheral locations of the display and with viewing angle, whereas the chromaticity did not vary. A minimal (1%) variation in luminance was observed due to temperature, and once again chromaticity remained constant. Also, these devices showed good temporal stability of luminance and chromaticity. All 15 tablet computers showed gamma functions approximating the standard gamma (2.20) and showed similar color gamut sizes, except for the blue primary, which displayed minimal variations. The physical characteristics across the 15 devices were similar and are known, thereby facilitating the use of this model of tablet computer as visual stimulus displays.
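A display gamma near the standard 2.20, as measured here, means digital counts map nonlinearly to luminance, so a vision test must invert this mapping to present calibrated contrasts. A sketch of the forward and inverse gamma model (the peak luminance `l_max` below is a hypothetical figure for illustration, not a measurement from the study):

```python
def luminance(level, gamma=2.2, l_max=400.0, level_max=255):
    """Forward display model: digital count -> luminance (cd/m^2).
    l_max is an assumed peak-white luminance, not a measured value."""
    return l_max * (level / level_max) ** gamma

def level_for_luminance(target, gamma=2.2, l_max=400.0, level_max=255):
    """Inverse model: nearest digital count producing `target` cd/m^2."""
    return round(level_max * (target / l_max) ** (1.0 / gamma))
```

Because the inverse is quantized to integer counts, the smallest displayable luminance step limits how finely low contrasts can be rendered, which is one reason display characterization of the kind reported here matters for threshold tests.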
3-D Signal Processing in a Computer Vision System
Dongping Zhu; Richard W. Conners; Philip A. Araman
1991-01-01
This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...
An overview of computer vision
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1982-01-01
An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.
Experiences Using an Open Source Software Library to Teach Computer Vision Subjects
ERIC Educational Resources Information Center
Cazorla, Miguel; Viejo, Diego
2015-01-01
Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and…
NASA Astrophysics Data System (ADS)
Tekin, Tolga; Töpper, Michael; Reichl, Herbert
2009-05-01
Technological frontiers between semiconductor technology, packaging, and system design are disappearing. Scaling down geometries [1] alone no longer provides improved performance, less power, smaller size, and lower cost. It will require "More than Moore" [2] through the tighter integration of system-level components at the package level. System-in-Package (SiP) will deliver the efficient use of three dimensions (3D) through innovation in packaging and interconnect technology. A key bottleneck to the implementation of high-performance microelectronic systems, including SiP, is the lack of low-latency, high-bandwidth, and high-density off-chip interconnects. Some of the challenges in achieving high-bandwidth chip-to-chip communication using electrical interconnects include the high losses in the substrate dielectric, reflections and impedance discontinuities, and susceptibility to crosstalk [3]. Obviously, the incentive for the use of photonics to overcome these challenges and leverage low-latency and high-bandwidth communication will enable the vision of optical computing within next-generation architectures. Supercomputers of today offer sustained performance of more than a petaflop, which can be increased by utilizing optical interconnects. Next-generation computing architectures are needed with ultra-low power consumption and ultra-high performance, enabled by novel interconnection technologies. In this paper we discuss a CMOS-compatible underlying technology to enable next-generation optical computing architectures. By introducing a new optical layer within the 3D SiP, the development of converged microsystems and their deployment in next-generation optical computing architectures will be enabled.
A study of computer-related upper limb discomfort and computer vision syndrome.
Sen, A; Richardson, Stanley
2007-12-01
Personal computers are one of the commonest office tools in Malaysia today. Their usage, even for three hours per day, leads to a health risk of developing Occupational Overuse Syndrome (OOS), Computer Vision Syndrome (CVS), low back pain, tension headaches and psychosocial stress. The study was conducted to investigate how a multiethnic society in Malaysia is coping with these problems that are increasing at a phenomenal rate in the west. This study investigated computer usage, awareness of ergonomic modifications of computer furniture and peripherals, symptoms of CVS and risk of developing OOS. A cross-sectional questionnaire study of 136 computer users was conducted on a sample population of university students and office staff. A 'Modified Rapid Upper Limb Assessment (RULA) for office work' technique was used for evaluation of OOS. The prevalence of CVS was surveyed incorporating a 10-point scoring system for each of its various symptoms. It was found that many were using standard keyboard and mouse without any ergonomic modifications. Around 50% of those with some low back pain did not have an adjustable backrest. Many users had higher RULA scores of the wrist and neck suggesting increased risk of developing OOS, which needed further intervention. Many (64%) were using refractive corrections and still had high scores of CVS commonly including eye fatigue, headache and burning sensation. The increase of CVS scores (suggesting more subjective symptoms) correlated with increase in computer usage spells. It was concluded that further onsite studies are needed, to follow up this survey to decrease the risks of developing CVS and OOS amongst young computer users.
A variational approach to multi-phase motion of gas, liquid and solid based on the level set method
NASA Astrophysics Data System (ADS)
Yokoi, Kensuke
2009-07-01
We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate the interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, the two functions in general overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
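The cited level set literature supplies the abstract's mathematical backbone. As a brief sketch (these are the standard equations from the Osher-Sethian line of work, not this paper's specific multi-phase coupling):

```latex
% Interface as the zero level set of a signed function \phi
\Gamma(t) = \{\, \mathbf{x} \mid \phi(\mathbf{x}, t) = 0 \,\}

% Evolution with speed F along the interface normal
\frac{\partial \phi}{\partial t} + F \,\lvert \nabla \phi \rvert = 0

% Passive advection by a flow velocity field \mathbf{u}
\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0
```

With three materials, two such functions \(\phi_1, \phi_2\) are needed, and numerical diffusion can make their implied regions overlap or leave gaps; the paper's contribution is an active-contour-style correction of exactly that inconsistency.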
2011-11-01
RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.
ERIC Educational Resources Information Center
Tseng, Min-chen
2014-01-01
This study investigated the online reading performances and the level of visual fatigue from the perspective of non-native speaking students (NNSs). Reading on a computer screen is visually more demanding than reading printed text. Online reading requires frequent saccadic eye movements and imposes continuous focusing and alignment demands.…
Computer vision cracks the leaf code
Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas
2016-01-01
Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664
Modeling of pilot's visual behavior for low-level flight
NASA Astrophysics Data System (ADS)
Schulte, Axel; Onken, Reiner
1995-06-01
Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper supports an approach to this problem based on modeling the pilot's visual behavior. The approach is founded upon the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment. The pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, based on the preflight planning process, as opposed to skill-based processes. The paper presents a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system built upon the fuzzy evaluation of terrain features in order to determine the landmarks used by pilots. It can be shown that a computer implementation of the model selects those features which were preferred by trained pilots, too.
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.
Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish
2015-01-01
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
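One way to picture the sensor-motion idea above: an event-based sensor reports per-pixel log-intensity changes, so a small saccade over a static image produces events along intensity edges. A toy sketch of this conversion for a single 1-pixel horizontal pan (a deliberate simplification of the paper's actuated pan-tilt recording, with a hypothetical threshold and no timestamps):

```python
import math

def events_from_pan(image, threshold=0.2):
    """Approximate the events a DVS/ATIS-style sensor would emit during a
    1-pixel horizontal saccade: an event fires wherever the log-intensity
    seen by a pixel changes by more than `threshold` between the two poses.
    Returns (row, col, polarity) triples; +1 = brightening, -1 = darkening."""
    events = []
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols - 1):
            before = math.log(image[r][c] + 1.0)
            after = math.log(image[r][c + 1] + 1.0)  # scene shifted 1 px left
            delta = after - before
            if abs(delta) > threshold:
                events.append((r, c, +1 if delta > 0 else -1))
    return events
```

Uniform regions generate no events at all, which is why static-image datasets must be paired with sensor motion (rather than monitor playback) to yield usable neuromorphic recordings.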
NASA Technical Reports Server (NTRS)
Lewis, C. E., Jr.; Swaroop, R.; Mcmurty, T. C.; Blakeley, W. R.; Masters, R. L.
1973-01-01
Study of low-time general aviation pilots who, in a series of spot landings, were suddenly deprived of binocular vision by patching either eye on the downwind leg of a standard, closed traffic pattern. Data collected during these landings were compared with control data from landings flown with normal vision during the same flight. The sequence of patching and the mix of control and monocular landings were randomized to minimize the effect of learning. No decrease in performance was observed during landings with vision restricted to one eye; in fact, performance improved. This observation is reported at a high level of confidence (p less than 0.001). These findings confirm the previous work of Lewis and Krier and have important implications with regard to aeromedical certification standards.
Vision-Based UAV Flight Control and Obstacle Avoidance
2006-01-01
denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote...structure analysis often involve computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive...First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion
Image annotation based on positive-negative instances learning
NASA Astrophysics Data System (ADS)
Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
Automatic image annotation is a challenging task in computer vision; its main purpose is to help manage the massive number of images on the Internet and to assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features such as color and texture information as well as mid-level features such as SIFT, and combining pic2pic, label2pic, and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on Positive-Negative Instances Learning. Experiments are performed on the Corel5K dataset and yield quite promising results compared with other existing methods.
Heterogeneous compute in computer vision: OpenCL in OpenCV
NASA Astrophysics Data System (ADS)
Gasparakis, Harris
2014-02-01
We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.
Selection of Phototransduction Genes in Homo sapiens.
Christopher, Mark; Scheetz, Todd E; Mullins, Robert F; Abràmoff, Michael D
2013-08-13
We investigated the evidence of recent positive selection in the human phototransduction system at the single nucleotide polymorphism (SNP) and gene levels. SNP genotyping data from the International HapMap Project for European, Eastern Asian, and African populations were used to discover differences in haplotype length and allele frequency between these populations. Numeric selection metrics were computed for each SNP and aggregated into gene-level metrics to measure evidence of recent positive selection. The level of recent positive selection in phototransduction genes was evaluated and compared to a set of genes shown previously to be under recent selection, and a set of highly conserved genes as positive and negative controls, respectively. Six of 20 phototransduction genes evaluated had gene-level selection metrics above the 90th percentile: RGS9, GNB1, RHO, PDE6G, GNAT1, and SLC24A1. The selection signal across these genes was found to be of similar magnitude to the positive control genes and much greater than the negative control genes. There is evidence for selective pressure in the genes involved in retinal phototransduction, and traces of this selective pressure can be demonstrated using SNP-level and gene-level metrics of allelic variation. We hypothesize that the selective pressure on these genes was related to their role in low light vision and retinal adaptation to ambient light changes. Uncovering the underlying genetics of evolutionary adaptations in phototransduction not only allows greater understanding of vision and visual diseases, but also the development of patient-specific diagnostic and intervention strategies.
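The SNP-to-gene aggregation and percentile comparison described above can be sketched as follows (the `max` aggregation rule and the toy scores are illustrative assumptions, not the study's exact metric):

```python
def gene_scores(snp_scores_by_gene, aggregate=max):
    """Collapse per-SNP selection metrics into one score per gene.
    `aggregate=max` is an assumed choice; the study's exact rule may differ."""
    return {g: aggregate(scores) for g, scores in snp_scores_by_gene.items()}

def percentile_rank(scores, gene):
    """Percentage of genes scoring at or below `gene` (100.0 = top rank).
    Genes above the 90th percentile would be flagged as candidates."""
    vals = sorted(scores.values())
    below = sum(1 for v in vals if v <= scores[gene])
    return 100.0 * below / len(vals)
```

In the study, the comparison set would span many genes genome-wide, with previously identified selected genes and highly conserved genes serving as positive and negative controls for the resulting ranks.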
Chiang, Peggy Pei-Chia; Xie, Jing; Keeffe, Jill Elizabeth
2011-04-25
To identify the critical success factors (CSF) associated with coverage of low vision services. Data were collected from a survey distributed to Vision 2020 contacts, government, and non-government organizations (NGOs) in 195 countries. The Classification and Regression Tree Analysis (CART) was used to identify the critical success factors of low vision service coverage. Independent variables were sourced from the survey: policies, epidemiology, provision of services, equipment and infrastructure, barriers to services, human resources, and monitoring and evaluation. Socioeconomic and demographic independent variables: health expenditure, population statistics, development status, and human resources in general, were sourced from the World Health Organization (WHO), World Bank, and the United Nations (UN). The findings identified that having >50% of children obtaining devices when prescribed (χ(2) = 44; P < 0.000), multidisciplinary care (χ(2) = 14.54; P = 0.002), >3 rehabilitation workers per 10 million of population (χ(2) = 4.50; P = 0.034), higher percentage of population urbanized (χ(2) = 14.54; P = 0.002), a level of private investment (χ(2) = 14.55; P = 0.015), and being fully funded by government (χ(2) = 6.02; P = 0.014), are critical success factors associated with coverage of low vision services. This study identified the most important predictors for countries with better low vision coverage. The CART is a useful and suitable methodology in survey research and is a novel way to simplify a complex global public health issue in eye care.
The "Biologically-Inspired Computing" Column
NASA Technical Reports Server (NTRS)
Hinchey, Mike
2007-01-01
Self-managing systems, whether viewed from the perspective of Autonomic Computing or from that of another initiative, offer a holistic vision for the development and evolution of biologically-inspired computer-based systems. They aim to bring new levels of automation and dependability to systems, while simultaneously hiding their complexity and reducing costs. A case can certainly be made that all computer-based systems should exhibit autonomic properties [6], and we envisage greater interest in, and uptake of, autonomic principles in future system development.
Reading and Comprehension Levels in a Sample of Urban, Low-Income Persons
ERIC Educational Resources Information Center
Delgado, Cheryl; Weitzel, Marilyn
2013-01-01
Objective: Because health literacy is related to healthcare outcomes, this study looked at reading and comprehension levels in a sample of urban, low-income persons. Design: This was a descriptive exploration of reading comprehension levels, controlled for medical problems that could impact on vision and therefore ability to read. Setting: Ninety…
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
NASA Astrophysics Data System (ADS)
Jain, A. K.; Dorai, C.
Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
A computer vision-based approach for structural displacement measurement
NASA Astrophysics Data System (ADS)
Ji, Yunfeng
2010-04-01
Along with the incessant advancement in optics, electronics and computer technologies during the last three decades, commercial digital video cameras have experienced a remarkable evolution, and can now be employed to measure complex motions of objects with sufficient accuracy, which greatly assists structural displacement measurement in civil engineering. This paper proposes a computer vision-based approach for dynamic measurement of structures. One digital camera is used to capture image sequences of planar targets mounted on vibrating structures. The mathematical relationship between the image plane and real space is established based on computer vision theory. Then, the structural dynamic displacement at the target locations can be quantified using point reconstruction rules. Compared with other traditional displacement measurement methods using sensors such as accelerometers, linear-variable-differential-transducers (LVDTs) and global positioning system (GPS), the proposed approach offers the main advantages of great flexibility, a non-contact working mode and ease of increasing measurement points. To validate the approach, four tests are performed: sinusoidal motion of a point, free vibration of a cantilever beam, a wind tunnel test of a cross-section bridge model, and a field test of bridge displacement measurement. Results show that the proposed approach can attain excellent accuracy compared with the analytical results or the measurements from conventional transducers, and proves to deliver an innovative and low-cost solution to structural displacement measurement.
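For a planar target, the image-plane-to-real-space relationship mentioned above is often modeled as a planar homography estimated from reference points. A sketch of the standard Direct Linear Transform, which is one plausible form of such a mapping (not necessarily the paper's exact formulation):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit a 3x3 H with dst ~ H @ src (homogeneous
    coordinates) from >= 4 non-degenerate point correspondences on a plane."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on vec(H).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)     # null-space vector = flattened H
    return h / h[2, 2]

def map_point(h, pt):
    """Apply the homography to an image point, returning plane coordinates."""
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

With the homography calibrated from known marks on the target, tracking the target centroid frame by frame and mapping it through `map_point` yields a displacement time history in physical units.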
Can Humans Fly? Action Understanding with Multiple Classes of Actors
2015-06-08
Active vision in satellite scene analysis
NASA Technical Reports Server (NTRS)
Naillon, Martine
1994-01-01
In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic and high-level processing (the decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.
Fast and robust generation of feature maps for region-based visual attention.
Aziz, Muhammad Zaheer; Mertsching, Bärbel
2008-05-01
Visual attention is one of the important phenomena in biological vision which can be followed to achieve more efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention in contrast to late clustering as done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon the extended findings from the color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of obtained regions and then saliency is evaluated using the rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over the existing techniques is the reusability of the salient regions in the high-level machine vision procedures due to preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the main stream of machine vision and systems with restricted computing resources such as mobile robots can benefit from its advantages.
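The rarity criterion for saliency can be sketched as follows (a minimal illustration, not the authors' implementation): a region is salient when its feature vector is far, on average, from the features of the other regions.

```python
def rarity_saliency(features):
    """Rarity-based saliency sketch: each region's score is its mean
    Euclidean feature distance to all other regions."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    n = len(features)
    return [sum(dist(f, g) for g in features) / (n - 1) for f in features]

# Two similar regions and one outlier: the outlier scores highest.
scores = rarity_saliency([[0, 0], [0, 0], [10, 0]])
```

In the paper, the feature vectors would combine the region-based channels described above (color contrast, symmetry, size contrast, eccentricity, orientation).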
Computer vision in cell biology.
Danuser, Gaudenz
2011-11-23
Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.
Nau, Amy; Bach, Michael; Fisher, Christopher
2013-01-01
We evaluated whether existing ultra-low vision tests are suitable for measuring outcomes using sensory substitution. The BrainPort is a vision assist device coupling a live video feed with an electrotactile tongue display, allowing a user to gain information about their surroundings. We enrolled 30 adult subjects (age range 22-74) divided into two groups. Our blind group included 24 subjects ( n = 16 males and n = 8 females, average age 50) with light perception or worse vision. Our control group consisted of six subjects ( n = 3 males, n = 3 females, average age 43) with healthy ocular status. All subjects performed 11 computer-based psychophysical tests from three programs: Basic Assessment of Light Motion, Basic Assessment of Grating Acuity, and the Freiburg Vision Test as well as a modified Tangent Screen. Assessments were performed at baseline and again using the BrainPort after 15 hours of training. Most tests could be used with the BrainPort. Mean success scores increased for all of our tests except contrast sensitivity. Increases were statistically significant for tests of light perception (8.27 ± 3.95 SE), time resolution (61.4% ± 3.14 SE), light localization (44.57% ± 3.58 SE), grating orientation (70.27% ± 4.64 SE), and white Tumbling E on a black background (2.49 logMAR ± 0.39 SE). Motion tests were limited by BrainPort resolution. Tactile-based sensory substitution devices are amenable to psychophysical assessments of vision, even though traditional visual pathways are circumvented. This study is one of many that will need to be undertaken to achieve a common outcomes infrastructure for the field of artificial vision.
Advances and Challenges in Super-Resolution
2004-03-15
Randolph, Susan A
2017-07-01
With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.
2012-03-01
Technology for Work, Home, and Leisure. Tech Use Guide: Using Computer Technology.
ERIC Educational Resources Information Center
Williams, John M.
This guide provides a brief introduction to several types of technological devices useful to individuals with disabilities and illustrates how some individuals are applying technology in the workplace and at home. Devices described include communication aids, low-vision products, voice-activated systems, environmental controls, and aids for…
ERIC Educational Resources Information Center
Cox, Susan M.
1999-01-01
Explains how one New Orleans (LA) school is making a positive difference in a low-income community by serving as the community's focal point and providing the community access to a public library, computers, and a learning center. Highlights the development of the Greater New Orleans Education Foundation and its assessment process, designed to…
Real-time FPGA architectures for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2000-03-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in a FPGA, but it can be implemented on a dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resources utilization, FPGA limitations, and real time performance are discussed. Some results are presented and discussed.
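The operation the architecture accelerates can be sketched in software as follows (an illustrative reference implementation of generic mask convolution over the valid region; the FPGA pipeline parallelizes these loops and buffers image rows in registers to minimize memory accesses):

```python
def convolve2d(image, mask):
    """Direct sliding-window convolution of a mask over an image,
    computing the 'valid' region only (no border padding)."""
    mh, mw = len(mask), len(mask[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - mh + 1):
        row = []
        for j in range(w - mw + 1):
            acc = 0
            for u in range(mh):       # the FPGA evaluates this inner
                for v in range(mw):   # product tree in parallel
                    acc += image[i + u][j + v] * mask[u][v]
            row.append(acc)
        out.append(row)
    return out
```

On hardware, the key difference from this naive loop nest is that each image pixel is fetched from memory only once and reused across overlapping windows via register delay lines.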
Ezekiel's vision: Visual evidence of Sterno-Etrussia geomagnetic excursion?
NASA Astrophysics Data System (ADS)
Raspopov, Oleg M.; Dergachev, Valentin A.; Goos'kova, Elena G.
In the Eos article, “Ezekiel and the Northern Lights: Biblical Aurora Seems Plausible” (16 April 2002), Siscoe et al. presented arguments showing that coronal auroras can occur at low latitudes under the condition of increased geomagnetic dipole field strength. From this standpoint, they give an interpretation of the “reported” vision of Ezekiel (the Bible's Book of Ezekiel in the Old Testament). The site of Ezekiel's vision was about 100 km south of Babylon (latitude ˜32° N, longitude ˜5°E), and the date of the vision was around 593 B.C. Auroral specialists believe that Ezekiel's vision was inspired by a very strong magnetic storm accompanied by coronal auroras at low latitudes. However, as justly noted by Siscoe et al. [2002], to adopt this interpretation, several questions should be answered. Can auroras be seen at the latitude where Ezekiel reportedly was? More important, can they reach a coronal stage of development, which is what the vision requires? Was the tilt of the dipole axis favorable? Was the general level of solar activity favorable? The principal question is, no doubt, the second one.
Reconfigurable vision system for real-time applications
NASA Astrophysics Data System (ADS)
Torres-Huitzil, Cesar; Arias-Estrada, Miguel
2002-03-01
Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs, and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, while using as few hardware resources as possible, together with a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft
2017-06-01
This thesis examines the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system, and their integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
Robot acting on moving bodies (RAMBO): Preliminary results
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madju; Harwood, David
1989-01-01
A robot system called RAMBO is being developed. It is equipped with a camera and, given a sequence of simple tasks, can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object from which the tasks can be achieved. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Trajectories are then created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
Siril, Nathanael; Kiwara, Angwara; Simba, Daud
2013-06-01
Human resource for health (HRH) is an essential building block for an effective and efficient health care system. In Tanzania this component faces many challenges which, in synergy with others, make the health care system inefficient. In vision 2025 the country recognizes the importance of the health care sector in attaining quality livelihood for its citizens. The vision is in its 13th year since its launch. Given the central role of HRH in attainment of this vision, how HRH is trained and deployed deserves a deeper understanding. The aim was to analyze the factors affecting the training and deployment process of graduate level HRH of three core cadres (Medical Doctors, Doctor of Dental Surgery and Bachelor of Pharmacy) towards realization of development vision 2025. An explorative study design was used in five training institutions for health and at the Ministry of Health and Social Welfare (MoHSW) headquarters, utilizing in-depth interviews, observations and review of available documents. The training institutions, which are the cornerstone of HRH training, are understaffed, underfunded (donor dependent), have low admitting capacities and lack co-ordination with other key stakeholders dealing with health. The deployment of graduate level HRH is affected by a limited budget, deployment decisions being handled by another ministry rather than the MoHSW, competition between the health care sector and other sectors, and lack of co-ordination between employers, trainers and other key health care sector stakeholders. Awareness of vision 2025 is low in the training institutions. For the vision 2025 health care sector goals to be realized, well-devised strategies for raising awareness of the vision in the training institutions are recommended. Quality livelihood as stated in vision 2025 will be a forgotten dream if the challenges facing the training and deployment of graduate level HRH are not addressed in a timely manner.
It is the authors' view that reduction of donor dependency syndrome, extension of the retirement age for academic staff in the training institutions for health, and synergizing the training and deployment of graduate level HRH can be among the initial strategies towards addressing these challenges.
2006-07-27
The goal of this project was to develop analytical and computational tools to make vision a viable sensor. We have proposed the framework of stereoscopic segmentation, in which multiple images of the same objects are jointly processed to extract geometry.
Gesture therapy: a vision-based system for upper extremity stroke rehabilitation.
Sucar, L; Luis, Roger; Leder, Ron; Hernandez, Jorge; Sanchez, Israel
2010-01-01
Stroke is the main cause of motor and cognitive disabilities requiring therapy in the world. It is therefore important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. We have developed a low-cost vision-based system that allows stroke survivors to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a virtual environment for facilitating repetitive movement training with computer vision algorithms that track the hand of a patient, using an inexpensive camera and a personal computer. This system, called Gesture Therapy, includes a gripper with a pressure sensor to incorporate hand and finger rehabilitation, and it tracks the head of the patient to detect and avoid trunk compensation. It has been evaluated in a controlled clinical trial at the National Institute for Neurology and Neurosurgery in Mexico City, comparing it with conventional occupational therapy. In this paper we describe the latest version of the Gesture Therapy system and summarize the results of the clinical trial.
Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke
NASA Astrophysics Data System (ADS)
Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro
Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.
A rotorcraft flight database for validation of vision-based ranging algorithms
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1992-01-01
A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: the disregarding of spatial information among visual words. This information may be useful to capture discriminative visual patterns in specific computer vision tasks. In order to overcome this problem we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal in the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not been explored yet within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
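The core counting step can be sketched as follows (an illustrative sketch, not the authors' exact pipeline: codebook construction is omitted, and the spatial scan order that linearizes visual words into a sequence is an assumption):

```python
from collections import Counter

def visual_ngram_histogram(words, n_max=2):
    """Histogram of visual 1-grams through n_max-grams over a sequence of
    visual-word ids (here assumed to be in spatial scan order)."""
    hist = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(words) - n + 1):
            hist[tuple(words[i:i + n])] += 1
    return hist

# Tiny example: word 1 occurs twice as a unigram, bigram (1, 2) once.
h = visual_ngram_histogram([1, 2, 1])
```

The resulting histogram (unigrams plus higher-order grams) would then feed a standard classifier, mirroring n-gram practice in NLP.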
Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight
NASA Technical Reports Server (NTRS)
Suorsa, Raymond; Sridhar, Banavar
1991-01-01
A validation facility in use at the NASA Ames Research Center is described, aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.
Book4All: A Tool to Make an e-Book More Accessible to Students with Vision/Visual-Impairments
NASA Astrophysics Data System (ADS)
Calabrò, Antonello; Contini, Elia; Leporini, Barbara
Empowering people who are blind or otherwise visually impaired includes ensuring that products and electronic materials incorporate a broad range of accessibility features and work well with screen readers and other assistive technology devices. This is particularly important for students with vision impairments. Unfortunately, authors and publishers often do not include specific criteria when preparing the contents. Consequently, e-books can be inadequate for blind and low vision users, especially for students. In this paper we describe a semi-automatic tool developed to support operators who adapt e-documents for visually impaired students. The proposed tool can be used to convert a PDF e-book into a more suitable accessible and usable format readable on desktop computer or on mobile devices.
Emergence of a rehabilitation medicine model for low vision service delivery, policy, and funding.
Stelmack, Joan
2005-05-01
A rehabilitation medicine model for low vision rehabilitation is emerging. There have been many challenges to reaching consensus on the roles of each discipline (optometry, ophthalmology, occupational therapy, and vision rehabilitation professionals) in the service delivery model and finding a place in the reimbursement system for all the providers. The history of low vision, legislation associated with Centers for Medicare and Medicaid Services coverage for vision rehabilitation, and research on the effectiveness of low vision service delivery are reviewed. Vision rehabilitation is now covered by Medicare under Physical Medicine and Rehabilitation codes by some Medicare carriers, yet reimbursement is not available for low vision devices or refraction. Also, the role of vision rehabilitation professionals (rehabilitation teachers, orientation and mobility specialists, and low vision therapists) in the model needs to be determined. In a recent systematic review of the scientific literature on the effectiveness of low vision services contracted by the Agency for Health Care Quality Research, no clinical trials were found. The literature consists primarily of longitudinal case studies, which provide weak support for third-party funding for vision rehabilitative services. Providers need to reach consensus on medical necessity, treatment plans, and protocols. Research on low vision outcomes is needed to develop an evidence base to guide clinical practice, policy, and funding decisions.
Helping blind and partially sighted people to read: the effectiveness of low vision aids
Margrain, T.
2000-01-01
AIMS—To substantiate the claim that low vision aids reduce the degree of disability associated with visual impairment. METHODS—An observational study of vision, ocular pathology, age, sex, and reading ability in new referrals to a low vision clinic. Reading ability was assessed both with the patients' own spectacles and with an appropriate low vision aid. RESULTS—The reading performance and biographical characteristics of new referrals to a low vision clinic were recorded. Data were collected for 168 people over a 6 month period. Upon arrival at the clinic the mean functional visual acuity equated to 6/36 and 77% of patients were unable to read newsprint (N8). After a low vision assessment and provision of a suitable low vision aid 88% of new patients were able to read N8 or smaller text. CONCLUSIONS—The degree of visual impairment observed in new referrals to a low vision clinic is sufficient to prevent the majority from performing many daily tasks. Low vision aids are an effective means of providing visual rehabilitation, helping almost nine out of 10 patients with impaired vision to read. PMID:10906105
Computer vision techniques for rotorcraft low-altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Cheng, Victor H. L.
1988-01-01
A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.
A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.
Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres
2016-05-28
Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
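A software stand-in for one-scan blob labeling can be sketched as follows (illustrative only; it uses union-find to resolve label equivalences where the paper uses a linked-list tree tailored to embedded memory constraints):

```python
def blob_labels(img):
    """Label connected blobs in a binary image: one raster scan with
    4-connectivity, union-find for label equivalences, and a second
    sweep to resolve each pixel to its final blob id."""
    h, w = len(img), len(img[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    labels = [[0] * w for _ in range(h)]
    nxt = 1
    for i in range(h):
        for j in range(w):
            if not img[i][j]:
                continue
            up = labels[i - 1][j] if i else 0
            left = labels[i][j - 1] if j else 0
            if up and left:
                labels[i][j] = left
                parent[find(up)] = find(left)  # merge the two runs
            elif up or left:
                labels[i][j] = up or left
            else:
                labels[i][j] = nxt             # new provisional blob
                parent[nxt] = nxt
                nxt += 1
    for i in range(h):                         # resolve equivalences
        for j in range(w):
            if labels[i][j]:
                labels[i][j] = find(labels[i][j])
    return labels

lbl = blob_labels([[1, 0, 1],
                   [1, 0, 1]])
blob_count = len({v for row in lbl for v in row if v})  # two separate blobs
```

An embedded implementation avoids the full label image by keeping only per-row run records, which is where the linked-list structure and the memory savings come in.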
Scene and human face recognition in the central vision of patients with glaucoma
Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole
2018-01-01
Primary open-angle glaucoma (POAG) initially affects mainly peripheral vision. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess the visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performance to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision.
This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using Humphrey 24–2 SITA-Standard test. PMID:29481572
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
Binocular stereo vision is an important and challenging area of computer vision, with broad application prospects in fields such as aerial mapping, visual navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the calibration module, the intrinsic parameters of each camera are obtained using Zhang Zhengyou's checkerboard method. For feature extraction and stereo matching, the SURF operator (a local feature method) and the SGBM semi-global matching algorithm are used respectively, and their performance is compared. After feature point matching is completed, the correspondence between matched image points and 3D object points can be established using the calibrated camera parameters, yielding the 3D information of the scene.
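As a rough illustration of the reconstruction step the abstract describes, the sketch below converts matched pixel coordinates and disparities into 3D points for a rectified, calibrated rig. The focal length, baseline, principal point, and match values are hypothetical stand-ins for calibration output; the SURF/SGBM matching itself is not reproduced.

```python
# Hedged sketch of the final reconstruction step: turning matched pixel
# coordinates and disparities into 3D points for a rectified, calibrated
# stereo rig. f (focal length in pixels), B (baseline in metres), and the
# principal point (cx, cy) would come from Zhang-style calibration; the
# matches below are made-up values, not SURF/SGBM output.

def triangulate(matches, f, B, cx, cy):
    """matches: iterable of (x_left, y, disparity) in pixels.
    Returns a list of (X, Y, Z) points in metres."""
    points = []
    for x_left, y, d in matches:
        if d <= 0:                 # non-positive disparity: point at infinity
            continue
        Z = f * B / d              # standard stereo depth relation
        X = (x_left - cx) * Z / f  # back-project through the pinhole model
        Y = (y - cy) * Z / f
        points.append((X, Y, Z))
    return points

# Hypothetical 640x480 rig: f = 700 px, B = 0.12 m, principal point centred.
pts = triangulate([(400, 240, 35.0), (320, 200, 70.0)],
                  f=700.0, B=0.12, cx=320.0, cy=240.0)
```

Halving the disparity doubles the recovered depth, which is why calibration accuracy matters most for distant points.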
Machine learning and computer vision approaches for phenotypic profiling.
Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J
2017-01-02
With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.
Machine learning and computer vision approaches for phenotypic profiling
Morris, Quaid
2017-01-01
With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887
Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming
Philip A. Araman
1990-01-01
This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...
Approximate labeling via graph cuts based on linear programming.
Komodakis, Nikos; Tziritas, Georgios
2007-08-01
A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds on all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.
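To make the kind of MRF energy being optimized concrete, the toy sketch below computes an exact MAP labeling on a chain-structured MRF by dynamic programming. On the 2D grids of vision problems this exact approach is intractable, which is where graph-cut methods such as α-expansion come in; the unary costs and Potts pairwise term here are illustrative assumptions, not from the paper.

```python
# Toy MRF labeling solved exactly on a chain by Viterbi-style dynamic
# programming. Graph cuts tackle the same kind of energy on 2D grids,
# where exact minimization is intractable. The unary costs and the Potts
# pairwise term below are illustrative assumptions.

def chain_mrf_map(unary, pairwise):
    """unary: per-node list of label costs; pairwise(a, b): cost of
    adjacent labels a, b. Returns (labeling, minimal total energy)."""
    n, L = len(unary), len(unary[0])
    cost = list(unary[0])          # best energy ending in each label at node 0
    back = []
    for i in range(1, n):
        new_cost, ptr = [], []
        for b in range(L):
            a = min(range(L), key=lambda a: cost[a] + pairwise(a, b))
            new_cost.append(cost[a] + pairwise(a, b) + unary[i][b])
            ptr.append(a)
        cost = new_cost
        back.append(ptr)
    b = min(range(L), key=lambda l: cost[l])
    labels = [b]
    for ptr in reversed(back):     # trace the argmin chain backwards
        labels.append(ptr[labels[-1]])
    labels.reverse()
    return labels, cost[b]

# Potts smoothness: zero cost for equal neighbouring labels, 1 otherwise.
labels, energy = chain_mrf_map([[0, 2], [2, 0], [2, 0]],
                               lambda a, b: 0 if a == b else 1)
```

Here the first node's data term prefers label 0 while the other two prefer label 1; the smoothness term tolerates one label change, giving the labeling [0, 1, 1] at energy 1.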
2015-12-04
from back-office big-data analytics to fieldable hot-spot systems providing storage-processing-communication services for off-grid sensors. Speed ... and power efficiency are the key metrics. Current state-of-the-art approaches for big data aim toward scaling out to many computers to meet ... pursued within Lincoln Laboratory as well as external sponsors. Our vision is to bring new capabilities in big-data and internet-of-things applications
Ultraviolet vision may be widespread in bats
Gorresen, P. Marcos; Cryan, Paul; Dalton, David C.; Wolf, Sandy; Bonaccorso, Frank
2015-01-01
Insectivorous bats are well known for their abilities to find and pursue flying insect prey at close range using echolocation, but they also rely heavily on vision. For example, at night bats use vision to orient across landscapes, avoid large obstacles, and locate roosts. Although lacking sharp visual acuity, the eyes of bats evolved to function at very low levels of illumination. Recent evidence based on genetics, immunohistochemistry, and laboratory behavioral trials indicated that many bats can see ultraviolet light (UV), at least at illumination levels similar to or brighter than those before twilight. Despite this growing evidence for potentially widespread UV vision in bats, the prevalence of UV vision among bats remains unknown and has not been studied outside of the laboratory. We used a Y-maze to test whether wild-caught bats could see reflected UV light and whether such UV vision functions at the dim lighting conditions typically experienced by night-flying bats. Seven insectivorous species of bats, representing five genera and three families, showed a statistically significant ‘escape-toward-the-light’ behavior when placed in the Y-maze. Our results provide compelling evidence of widespread dim-light UV vision in bats.
Face adaptation improves gender discrimination.
Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang
2011-01-01
Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate if face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence suggesting that, similar to low-level vision, adaptation in high-level vision could calibrate the visual system to current inputs of complex shapes (i.e. face) and improve discrimination at the adapted characteristic. Copyright © 2010 Elsevier Ltd. All rights reserved.
Ganesh, Suma; Sethi, Sumita; Srivastav, Sonia; Chaudhary, Amrita; Arora, Priyanka
2013-09-01
To evaluate the impact of low vision rehabilitation on the functional vision of children with visual impairment. The LV Prasad-Functional Vision Questionnaire, designed specifically to measure the functional performance of visually impaired children in developing countries, was used to assess the level of difficulty in performing various tasks before and after visual rehabilitation in children with documented visual impairment. The chi-square test was used to assess the impact of the rehabilitation intervention on functional vision performance; P < 0.05 was considered significant. LogMAR visual acuity prior to the introduction of low vision devices (LVDs) was 0.90 ± 0.05 for distance and 0.61 ± 0.05 for near. After the intervention, the acuities improved significantly for distance (0.2 ± 0.27; P < 0.0001) and near (0.42 ± 0.17; P = 0.001). The most commonly reported difficulties related to academic activities such as copying from the blackboard (80%), reading a textbook at arm's length (77.2%), and writing along a straight line (77.2%). The absolute raw disability score improved from 15.05 pre-LVD to 7.58 post-LVD. Improvement in functional vision after visual rehabilitation was found especially in activities related to schooling, such as copying from the blackboard (P < 0.0001), reading a textbook at arm's length (P < 0.0001), and writing along a straight line (P = 0.003). In our study group, there was a significant improvement in functional vision after visual rehabilitation, especially in activities related to academic output. Early visual rehabilitation is important for these children, to decrease the impairment associated with reduced visual output and to enhance their learning abilities.
Recognizing Materials using Perceptually Inspired Features
Sharan, Lavanya; Liu, Ce; Rosenholtz, Ruth; Adelson, Edward H.
2013-01-01
Our world consists not only of objects and scenes but also of materials of various kinds. Being able to recognize the materials that surround us (e.g., plastic, glass, concrete) is important for humans as well as for computer vision systems. Unfortunately, materials have received little attention in the visual recognition literature, and very few computer vision systems have been designed specifically to recognize materials. In this paper, we present a system for recognizing material categories from single images. We propose a set of low and mid-level image features that are based on studies of human material recognition, and we combine these features using an SVM classifier. Our system outperforms a state-of-the-art system [Varma and Zisserman, 2009] on a challenging database of real-world material categories [Sharan et al., 2009]. When the performance of our system is compared directly to that of human observers, humans outperform our system quite easily. However, when we account for the local nature of our image features and the surface properties they measure (e.g., color, texture, local shape), our system rivals human performance. We suggest that future progress in material recognition will come from: (1) a deeper understanding of the role of non-local surface properties (e.g., extended highlights, object identity); and (2) efforts to model such non-local surface properties in images. PMID:23914070
Machine vision for real time orbital operations
NASA Technical Reports Server (NTRS)
Vinz, Frank L.
1988-01-01
Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
Invariant Geometric Evolutions of Surfaces and Volumetric Smoothing
1994-04-15
1991. [40] D. G. Lowe, "Organization of smooth image curves at multiple scales," International Journal of Computer Vision 3, pp. 119-130, 1989. [41] E. Lutwak, "On some affine isoperimetric inequalities," J. Differential Geometry 23, pp. 1-13, 1986. [42] F. Mokhatarian and A. Mackworth, "A theory of
Assistive technology applied to education of students with visual impairment.
Alves, Cássia Cristiane de Freitas; Monteiro, Gelse Beatriz Martins; Rabello, Suzana; Gasparetto, Maria Elisabete Rodrigues Freire; de Carvalho, Keila Monteiro
2009-08-01
Verify the application of assistive technology, especially information technology in the education of blind and low-vision students from the perceptions of their teachers. Descriptive survey study in public schools in three municipalities of the state of São Paulo, Brazil. The sample comprised 134 teachers. According to the teachers' opinions, there are differences in the specificities and applicability of assistive technology for blind and low-vision students, for whom specific computer programs are important. Information technology enhances reading and writing skills, as well as communication with the world on an equal basis, thereby improving quality of life and facilitating the learning process. The main reason for not using information technology is the lack of planning courses. The main requirements for the use of information technology in schools are enough computers for all students, advisers to help teachers, and pedagogical support. Assistive technology is applied to education of students with visual impairment; however, teachers indicate the need for infrastructure and pedagogical support. Information technology is an important tool in the inclusion process and can promote independence and autonomy of students with visual impairment.
Shoji, Takuhei; Sakurai, Yutaka; Sato, Hiroki; Chihara, Etsuo; Ishida, Masahiro; Omae, Kazuyuki
2010-06-01
To investigate associations between blood low-density lipoprotein cholesterol (LDL-C) levels and the prevalence of acquired color vision impairment (ACVI) in middle-aged Japanese men. Participants in this cross-sectional study underwent color vision testing, ophthalmic examination, a standardized interview and examination of venous blood samples. Ishihara plates, a Lanthony 15-hue desaturated panel, and Standard pseudoisochromatic Plates part 2 were used to examine color vision ability. The Farnsworth-Munsell 100-hue test was performed to define ACVI. Smoking status and alcohol intake were recorded during the interview. We performed logistic regression analysis adjusted for age, LDL-C level, systemic hypertension, diabetes, cataract, glaucoma, overweight, smoking status, and alcohol intake. Adjusted odds ratios for four LDL-C levels were calculated. A total of 1042 men were enrolled, 872 participants were eligible for the study, and 31 subjects were diagnosed with ACVI. As compared to the lowest LDL-C category level (<100 mg/dl), the crude OR of ACVI was 3.85 (95% confidence interval [CI], 1.24-11.00) for the 2nd highest category (130-159 mg/dl), and 4.84 (95% CI, 1.42-16.43) for the highest level (>or=160 mg/dl). The multiple-adjusted ORs were 2.91 (95% CI, 0.87-9.70) for the 2nd highest category and 3.81 (95% CI, 1.03-14.05) for the highest level. Tests for trend were significant (P<0.05) in both analyses. These findings suggested that the prevalence of ACVI is higher among middle-aged Japanese men with elevated LDL-C levels. These changes might be related to deteriorated neurologic function associated with lipid metabolite abnormalities. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
Job-shop scheduling applied to computer vision
NASA Astrophysics Data System (ADS)
Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David
1997-09-01
This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This condition is very important in some applications of computer vision in which the time to finish the total process is particularly critical: quality control in industrial inspection, real-time computer vision, guided robots. The scheduling algorithm is based on the use of two matrices, obtained from the precedence relationships between tasks, and the data obtained from the two matrices. The developed scheduling algorithm has been tested in an application of quality control using computer vision. The results obtained have been satisfactory in the application of different image processing algorithms.
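The paper's two-matrix algorithm is not reproduced here, but a minimal greedy list scheduler conveys the underlying problem: precedence-constrained tasks on m identical processors, minimizing the total elapsed time (makespan). The task names, durations, and dependencies below are hypothetical.

```python
import heapq

# Greedy list scheduling of precedence-constrained tasks on m identical
# processors; a generic stand-in for the paper's two-matrix algorithm.
# Assumes the dependency graph is acyclic.

def list_schedule(tasks, deps, m):
    """tasks: {name: duration}; deps: {name: set of prerequisite names}.
    Returns ({task: finish time}, total elapsed time)."""
    waiting = {t: set(deps.get(t, ())) for t in tasks}
    ready = sorted(t for t, d in waiting.items() if not d)
    for t in ready:
        del waiting[t]
    running, finish = [], {}       # running: heap of (finish_time, task)
    free, clock = m, 0.0
    while ready or running:
        while ready and free > 0:  # start as many ready tasks as fit
            t = ready.pop(0)
            heapq.heappush(running, (clock + tasks[t], t))
            free -= 1
        clock, t = heapq.heappop(running)   # advance to next completion
        finish[t] = clock
        free += 1
        done = [u for u, d in waiting.items() if d <= finish.keys()]
        for u in sorted(done):     # newly unblocked tasks become ready
            del waiting[u]
            ready.append(u)
    return finish, clock

# Four image-processing tasks on two processors; C needs A, D needs A and B.
fin, total = list_schedule({'A': 3, 'B': 2, 'C': 4, 'D': 1},
                           {'C': {'A'}, 'D': {'A', 'B'}}, m=2)
```

With both processors busy from time 0, A and B run first, then C and D; the makespan is 7, against 10 for a single processor.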
NASA Astrophysics Data System (ADS)
Astafiev, A.; Orlov, A.; Privezencev, D.
2018-01-01
The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregating computer vision and radio-frequency identification to determine the current storage area. It describes the hardware design of an industrial-product positioning system covering the territory of a plant on the basis of a radio-frequency grid, the hardware design of a positioning system based on computer vision methods, and the method of aggregating the two sources to determine the current storage area. Experimental studies in laboratory and production conditions have been conducted and are described in the article.
Low-Cost Space Hardware and Software
NASA Technical Reports Server (NTRS)
Shea, Bradley Franklin
2013-01-01
The goal of this project is to demonstrate and support the overall vision of NASA's Rocket University (RocketU) through the design of an electrical power system (EPS) monitor for implementation on RUBICS (Rocket University Broad Initiatives CubeSat), through the support for the CHREC (Center for High-Performance Reconfigurable Computing) Space Processor, and through FPGA (Field Programmable Gate Array) design. RocketU will continue to provide low-cost innovations even with continuous cuts to the budget.
Tsotsos, John K.
2017-01-01
Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches to the fixed resources of biological seeing systems; and, to abstract from the matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide. PMID:28848458
Tsotsos, John K
2017-01-01
Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches to the fixed resources of biological seeing systems; and, to abstract from the matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide.
Z Alotaibi, Abdullah
2015-10-20
Vision is the ability to see with a definite understanding of features, color, and contrast, and to distinguish between objects visually. In 1999, the World Health Organization (WHO) and the International Agency for the Prevention of Blindness formulated a worldwide project for the eradication of preventable loss of sight under the title "Vision 2020: the Right to Sight". This global program aims to eradicate preventable loss of sight by the year 2020. This study was conducted to determine the main causes of low vision in Saudi Arabia and to assess visual improvement after the use of low vision devices (LVDs). This retrospective study was conducted in the low vision clinic at Eye World Medical Complex in Riyadh, Saudi Arabia. The medical records of 280 patients attending the low vision clinic from February 2008 to June 2010 were included. A data sheet was completed for each patient, including age, gender, cause of low vision, unassisted visual acuity for distance and near, and the low vision devices for distance and near that provided the best visual acuity. The results show that the main cause of low vision was optic atrophy (28.9%). Retinitis pigmentosa was the second cause, accounting for 73 patients (26%), followed by diabetic retinopathy and macular degeneration with 44 patients (15.7%) and 16 patients (5.7%), respectively. Inter-family marriage could be one of the main causes of low vision. Public awareness campaigns should be undertaken to highlight ocular diseases resulting from consanguineous marriage. It is also important to establish low vision clinics in order to improve the situation.
Texture and art with deep neural networks.
Gatys, Leon A; Ecker, Alexander S; Bethge, Matthias
2017-10-01
Although the study of biological vision and computer vision attempt to understand powerful visual information processing from different angles, they have a long history of informing each other. Recent advances in texture synthesis that were motivated by visual neuroscience have led to a substantial advance in image synthesis and manipulation in computer vision using convolutional neural networks (CNNs). Here, we review these recent advances and discuss how they can in turn inspire new research in visual perception and computational neuroscience. Copyright © 2017. Published by Elsevier Ltd.
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A light, distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. Captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit that computes the odometry. Building on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point-matching scheme that relies on the fast Harris operator and on template matching invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief, following the law of light divergence over distance. The rough depth is used to limit the correspondence search zone, as it depends linearly on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
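A minimal sketch of the rough-depth idea, assuming inverse-square attenuation of the on-board light source (one reading of the abstract's light-divergence law) and the standard disparity relation d = fB/Z. The reference intensity, focal length, baseline, and search margin are all assumptions, not the paper's calibration.

```python
import math

# Rough depth prior from the brightness returned by the on-board light
# source, assuming inverse-square attenuation; i_ref/d_ref are assumed
# calibration values, as are f, B, and the search margin.

def rough_depth(intensity, i_ref, d_ref):
    """Depth at which a surface of known reflectance would return
    `intensity`, given that it returns i_ref at distance d_ref."""
    return d_ref * math.sqrt(i_ref / intensity)

def disparity_window(depth, f, B, margin=0.3):
    """Bound the correspondence search: disparity d = f*B/Z, widened by
    a relative margin to absorb the roughness of the prior."""
    d = f * B / depth
    return d * (1 - margin), d * (1 + margin)

# A pixel four times dimmer than the 1 m reference reads as roughly 2 m
# away, which narrows template matching to a small disparity band.
z = rough_depth(25.0, i_ref=100.0, d_ref=1.0)
lo, hi = disparity_window(z, f=700.0, B=0.1)
```

Restricting template matching to this band is what keeps the scheme cheap enough for the embedded platform.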
Motmot, an open-source toolkit for realtime video acquisition and analysis.
Straw, Andrew D; Dickinson, Michael H
2009-07-22
Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. 
In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
ERIC Educational Resources Information Center
Marinoff, Rebecca; Heilberger, Michael H.
2017-01-01
A model Center of Excellence in Low Vision and Vision Rehabilitation was created in a health care setting in China utilizing an inter-institutional relationship with a United States optometric institution. Accomplishments, limitations, and stimuli related to the provision of low vision and vision rehabilitation services are shared.
Impact of computer use on children's vision.
Kozeis, N
2009-10-01
Today, millions of children use computers on a daily basis. Extensive viewing of the computer screen can lead to eye discomfort, fatigue, blurred vision and headaches, dry eyes and other symptoms of eyestrain. These symptoms may be caused by poor lighting, glare, an improper work station set-up, vision problems of which the person was not previously aware, or a combination of these factors. Children can experience many of the same symptoms related to computer use as adults. However, some unique aspects of how children use computers may make them more susceptible than adults to the development of these problems. In this study, the most common eye symptoms related to computer use in childhood, the possible causes and ways to avoid them are reviewed.
A Multi-Disciplinary Approach to Remote Sensing through Low-Cost UAVs.
Calvario, Gabriela; Sierra, Basilio; Alarcón, Teresa E; Hernandez, Carmen; Dalmau, Oscar
2017-06-16
The use of Unmanned Aerial Vehicles (UAVs) based on remote sensing has generated low cost monitoring, since the data can be acquired quickly and easily. This paper reports the experience related to agave crop analysis with a low cost UAV. The data were processed by traditional photogrammetric flow and data extraction techniques were applied to extract new layers and separate the agave plants from weeds and other elements of the environment. Our proposal combines elements of photogrammetry, computer vision, data mining, geomatics and computer science. This fusion leads to very interesting results in agave control. This paper aims to demonstrate the potential of UAV monitoring in agave crops and the importance of information processing with reliable data flow.
A Multi-Disciplinary Approach to Remote Sensing through Low-Cost UAVs
Calvario, Gabriela; Sierra, Basilio; Alarcón, Teresa E.; Hernandez, Carmen; Dalmau, Oscar
2017-01-01
The use of Unmanned Aerial Vehicles (UAVs) based on remote sensing has generated low cost monitoring, since the data can be acquired quickly and easily. This paper reports the experience related to agave crop analysis with a low cost UAV. The data were processed by traditional photogrammetric flow and data extraction techniques were applied to extract new layers and separate the agave plants from weeds and other elements of the environment. Our proposal combines elements of photogrammetry, computer vision, data mining, geomatics and computer science. This fusion leads to very interesting results in agave control. This paper aims to demonstrate the potential of UAV monitoring in agave crops and the importance of information processing with reliable data flow. PMID:28621740
ERIC Educational Resources Information Center
Joshi, Mahesh R.; Yamagata, Yoshitaka; Akura, Junsuke; Shakya, Suraj
2008-01-01
In Nepal, children with low vision attend specialized schools for students who are totally blind and are treated as if they were totally blind. This study identified children with low vision and provided low vision devices to them. Of the 22% of the students in the school who had low vision, 78.5% benefited from the devices. Proper devices and…
Fighting detection using interaction energy force
NASA Astrophysics Data System (ADS)
Wateosot, Chonthisa; Suvonvorn, Nikom
2017-02-01
Fighting detection is an important security issue aimed at preventing criminal or undesirable events in public places. Much research on computer vision techniques has addressed detecting specific events in crowded scenes. In this paper we focus on fighting detection using a social-based Interaction Energy Force (IEF). The method uses low-level features without object extraction and tracking. The interaction force is modeled using the magnitude and direction of optical flows. A fighting factor is derived from this model to detect fighting events by thresholding. An energy map of the interaction force is also presented to identify the corresponding events. The evaluation is performed using the NUSHGA and BEHAVE datasets. The results show high accuracy under various conditions.
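The abstract does not give the paper's exact IEF formulation; the sketch below is one plausible stand-in that scores how fast tracked points close on one another based on their optical-flow vectors, flagging fighting when the score exceeds a threshold. The definition, the point/flow inputs, and the threshold are all assumptions.

```python
import math

# One plausible stand-in for an interaction-energy score: sum, over all
# pairs of tracked points, of their closing speed according to the
# optical-flow vectors. The formulation and threshold are assumptions,
# not the paper's IEF.

def approach_energy(points):
    """points: list of ((x, y), (vx, vy)) position/flow pairs."""
    energy = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (xi, yi), (vxi, vyi) = points[i]
            (xj, yj), (vxj, vyj) = points[j]
            dx, dy = xi - xj, yi - yj
            dist = math.hypot(dx, dy)
            if dist == 0:
                continue
            # closing speed: positive when the pair moves toward each other
            closing = -((vxi - vxj) * dx + (vyi - vyj) * dy) / dist
            energy += max(0.0, closing)
    return energy

def is_fighting(points, threshold=1.5):
    """Threshold the aggregate energy, as in the abstract's fighting factor."""
    return approach_energy(points) > threshold

# Two points rushing head-on score high; points moving apart score zero.
head_on = [((0.0, 0.0), (1.0, 0.0)), ((10.0, 0.0), (-1.0, 0.0))]
score = approach_energy(head_on)
```

Because only flow magnitudes and directions enter the score, no object segmentation or tracking identity is needed, matching the low-level spirit of the method.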
Synthetic vision in the cockpit: 3D systems for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth
2001-08-01
Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are at or out-pacing Moore's Law with the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS and LAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on-board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; on textureless or low-textured regions the depth map contains numerous holes and large ambiguities. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera at different poses.
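The EPI step rests on a standard property of light fields: a scene point traces a straight line across the epipolar plane image, and the slope of that line encodes disparity (and hence depth). A minimal sketch of recovering the slope from image gradients on a synthetic EPI follows; this is the textbook relation, not the paper's exact estimator:

```python
import numpy as np

def epi_disparity(epi):
    """Per-pixel disparity estimate from an epipolar-plane image (EPI).

    epi: (S, U) array; rows index camera positions s, columns index the
    image coordinate u. A scene point appears as a line u = u0 + d*s,
    and its slope d (disparity, inversely related to depth) follows from
    the image gradients: d = -(dI/ds) / (dI/du).
    """
    gs, gu = np.gradient(epi.astype(float))
    return np.divide(-gs, gu, out=np.zeros_like(gu), where=np.abs(gu) > 1e-6)

# Synthetic EPI: a linear ramp shifted by 2 pixels per view -> disparity 2.
u = np.arange(40, dtype=float)
epi = np.stack([u - 2.0 * s for s in range(7)])
d = epi_disparity(epi)
center = d[1:-1, 10:30]   # interior region, away from any border effects
```

In a real pipeline the gradient ratio would be stabilised (e.g. with a structure tensor) and only confident estimates kept, but the slope-to-disparity relation is the same.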
Evaluating focused ion beam patterning for position-controlled nanowire growth using computer vision
NASA Astrophysics Data System (ADS)
Mosberg, A. B.; Myklebost, S.; Ren, D.; Weman, H.; Fimland, B. O.; van Helvoort, A. T. J.
2017-09-01
To efficiently evaluate the novel approach of focused ion beam (FIB) direct patterning of substrates for nanowire growth, a reference matrix of hole arrays has been used to study the effect of ion fluence and hole diameter on nanowire growth. Self-catalyzed GaAsSb nanowires were grown using molecular beam epitaxy and studied by scanning electron microscopy (SEM). To ensure an objective analysis, SEM images were analyzed with computer vision to automatically identify nanowires and characterize each array. It is shown that FIB milling parameters can be used to control the nanowire growth. Lower ion fluences and smaller hole diameters result in a higher yield (up to 83%) of single vertical nanowires, while higher fluences and larger hole diameters produce a regime of multiple nanowires. The catalyst size distribution and placement uniformity of vertical nanowires are best for low-value parameter combinations, indicating how to improve the FIB parameters for position-controlled nanowire growth.
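The abstract does not specify the automated SEM analysis, but counting single vertical nanowires in a top view reduces to counting compact bright spots per patterned array. A simplified stand-in using thresholding plus connected-component labeling is sketched below; the threshold and the toy image are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def count_spots(img, thresh):
    """Count connected bright regions in a top-view SEM-like image.

    Vertical nanowires imaged from above appear as compact bright spots;
    one spot per patterned hole corresponds to a single-wire site, so the
    count divided by the number of holes gives the yield.
    """
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                    # new component: flood-fill it
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Toy image: three bright spots on a dark background, i.e. 3 wires grown
# in a hypothetical 3-hole array -> 100% single-wire yield.
img = np.zeros((20, 20))
for cy, cx in [(4, 4), (4, 14), (14, 9)]:
    img[cy - 1:cy + 2, cx - 1:cx + 2] = 1.0
n = count_spots(img, 0.5)
yield_pct = 100.0 * n / 3
```

A production analysis would add size/shape filters to separate single wires from multi-wire clumps and droplets, but the counting backbone is the same.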
Operational Assessment of Color Vision
2016-06-20
… evaluated in this study. Subject terms: color vision, aviation, cone contrast test, Colour Assessment & Diagnosis, color Dx, OBVA. Abstract fragments: color-coded symbologies are frequently used to aid or direct critical activities such as aircraft landing approaches or railroad right-of-way designations … computer-generated display systems have facilitated the development of computer-based, automated tests of color vision [14,15]. The United Kingdom's …
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider's challenge, despite significant advancements in the fields of human factors and computer science. Licklider's vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider's vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.
Low Vision Rehabilitation for Adult African Americans in Two Settings.
Draper, Erin M; Feng, Rui; Appel, Sarah D; Graboyes, Marcy; Engle, Erin; Ciner, Elise B; Ellenberg, Jonas H; Stambolian, Dwight
2016-07-01
The Vision Rehabilitation for African Americans with Central Vision Impairment (VISRAC) study is a demonstration project evaluating how modifications in vision rehabilitation can improve the use of functional vision. Fifty-five African Americans 40 years of age and older with central vision impairment were randomly assigned to receive either clinic-based (CB) or home-based (HB) low vision rehabilitation services. Forty-eight subjects completed the study. The primary outcome was the change in functional vision in activities of daily living, as assessed with the Veteran's Administration Low-Vision Visual Function Questionnaire (VFQ-48). This included scores for overall visual ability and visual ability domains (reading, mobility, visual information processing, and visual motor skills). Each score was normalized into logit estimates by Rasch analysis. Linear regression models were used to compare the difference in the total score and each domain score between the two intervention groups. The significance level for each comparison was set at 0.05. Both CB and HB groups showed significant improvement in overall visual ability at the final visit compared with baseline. The CB group showed greater improvement than the HB group (mean of 1.28 vs. 0.87 logits change), though the group difference is not significant (p = 0.057). The CB group visual motor skills score showed significant improvement over the HB group score (mean of 3.30 vs. 1.34 logits change, p = 0.044). The differences in improvement of the reading and visual information processing scores were not significant (p = 0.054 and p = 0.509) between groups. Neither group had significant improvement in the mobility score, which was not part of the rehabilitation program. Vision rehabilitation is effective for this study population regardless of location. 
Possible reasons why the CB group performed better than the HB group include a number of psychosocial factors as well as the more standardized distraction-free work environment within the clinic setting.
Chow, Alan Y.; Bittner, Ava K.; Pardue, Machelle T.
2010-01-01
Purpose: In a published pilot study, a light-activated microphotodiode-array chip, the artificial silicon retina (ASR), was implanted subretinally in 6 retinitis pigmentosa (RP) patients for up to 18 months. The ASR electrically induced retinal neurotrophic rescue of visual acuity, contrast, and color perception and raised several questions: (1) Would neurotrophic effects develop and persist in additionally implanted RP patients? (2) Could vision in these patients be reliably assessed? (3) Would the ASR be tolerated and function for extended periods? Methods: Four additional RP patients were implanted and observed along with the 6 pilot patients. Of the 10 patients, 6 had vision levels that allowed for more standardized testing and were followed up for 7+ years utilizing ETDRS charts and a 4-alternative forced choice (AFC) Chow grating acuity test (CGAT). A 10-AFC Chow color test (CCT) extended the range of color vision testing. Histologic examination of the eyes of one patient, who died of an unrelated event, was performed. Results: The ASR was well tolerated, and improvement and/or slowing of vision loss occurred in all 6 patients. CGAT extended low vision acuity testing by logMAR 0.6. CCT expanded the range of color vision testing and correlated well with PV-16 (r = 0.77). An ASR recovered from a patient 5 years after implantation showed minor disruption and excellent electrical function. Conclusion: ASR-implanted RP patients experienced prolonged neurotrophic rescue of vision. CGAT and CCT extended the range of acuity and color vision testing in low vision patients. ASR implantation may improve and prolong vision in RP patients. PMID:21212852
Computer Vision Syndrome: Implications for the Occupational Health Nurse.
Lurati, Ann Regina
2018-02-01
Computers and other digital devices are commonly used both in the workplace and during leisure time. Computer vision syndrome (CVS) is a new health-related condition that negatively affects workers. This article reviews the pathology of and interventions for CVS with implications for the occupational health nurse.
Evaluation of Available Software for Reconstruction of a Structure from its Imagery
2017-04-01
… Math. 2, 164–168. Lowe, D. G. (1999) Object recognition from local scale-invariant features, in Proc. Int. Conf. Computer Vision, Vol. 2, pp. 1150–1157. Marquardt, D. (1963) An algorithm for least-squares estimation of nonlinear parameters, SIAM J. Appl. Math. 11(2), 431–441.
Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques
ERIC Educational Resources Information Center
Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández
2013-01-01
Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…
Infrared Cephalic-Vein to Assist Blood Extraction Tasks: Automatic Projection and Recognition
NASA Astrophysics Data System (ADS)
Lagüela, S.; Gesto, M.; Riveiro, B.; González-Aguilera, D.
2017-05-01
The thermal infrared band is not commonly used in photogrammetric and computer vision algorithms, mainly because of the low spatial resolution of this type of imagery. However, this band captures sub-superficial information, extending the capabilities of the visible bands with regard to applications. This fact is especially important in biomedicine and biometrics, allowing the geometric characterization of interior organs and pathologies with photogrammetric principles, as well as automatic identification and labelling using computer vision algorithms. This paper presents advances in close-range photogrammetry and computer vision applied to thermal infrared imagery, with the final application of Augmented Reality in order to widen its use in the biomedical field. In this case, the thermal infrared image of the arm is acquired and simultaneously projected onto the arm, together with the identification label of the cephalic vein. In this way, blood analysts are assisted in finding the vein for blood extraction, especially in those cases where identification by the human eye is a complex task. Vein recognition is performed based on the Gaussian temperature distribution in the area of the vein, while the calibration between projector and thermographic camera is achieved through feature extraction and pattern recognition. The method is validated through its application to a set of volunteers of different ages and genders, in such a way that different conditions of body temperature and vein depth are covered for the applicability and reproducibility of the method.
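The Gaussian temperature model for the vein can be illustrated with a minimal moment-based fit on a 1-D thermal cross-section of the arm. Both the synthetic profile and the moment estimator below are assumptions for illustration; the authors' actual fitting procedure is not given in the abstract:

```python
import numpy as np

def locate_vein(profile):
    """Locate a vein in a 1-D thermal cross-section.

    The vein is modelled as a Gaussian-shaped warm band over the skin
    temperature; its centre and width are recovered from the first and
    second moments of the background-subtracted profile.
    """
    x = np.arange(len(profile), dtype=float)
    bump = profile - profile.min()          # remove the background skin level
    w = bump / bump.sum()                   # normalised weights
    mu = (w * x).sum()                      # centre of the warm band (pixels)
    sigma = np.sqrt((w * (x - mu) ** 2).sum())   # width of the band
    return mu, sigma

# Synthetic cross-section: skin at ~33 degC with a 1.5 degC Gaussian bump
# centred at pixel 25 with width 3 (assumed example values).
x = np.arange(60, dtype=float)
profile = 33.0 + 1.5 * np.exp(-0.5 * ((x - 25.0) / 3.0) ** 2)
mu, sigma = locate_vein(profile)
```

On real thermograms the profile is noisy, so a least-squares Gaussian fit (or detection in 2-D) would replace the raw moments, but the model being fitted is the same.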
Static and dynamic postural control in low-vision and normal-vision adults.
Tomomitsu, Mônica S V; Alonso, Angelica Castilho; Morimoto, Eurica; Bobbio, Tatiana G; Greve, Julia M D
2013-04-01
This study aimed to evaluate the influence of reduced visual information on postural control by comparing low-vision and normal-vision adults in static and dynamic conditions. Twenty-five low-vision subjects and twenty-five normal sighted adults were evaluated for static and dynamic balance using four protocols: 1) the Modified Clinical Test of Sensory Interaction on Balance on firm and foam surfaces with eyes opened and closed; 2) Unilateral Stance with eyes opened and closed; 3) Tandem Walk; and 4) Step Up/Over. The results showed that the low-vision group presented greater body sway compared with the normal vision during balance on a foam surface (p≤0.001), the Unilateral Stance test for both limbs (p≤0.001), and the Tandem Walk test. The low-vision group showed greater step width (p≤0.001) and slower gait speed (p≤0.004). In the Step Up/Over task, low-vision participants were more cautious in stepping up (right p≤0.005 and left p≤0.009) and in executing the movement (p≤0.001). These findings suggest that visual feedback is crucial for determining balance, especially for dynamic tasks and on foam surfaces. Low-vision individuals had worse postural stability than normal-vision adults in terms of dynamic tests and balance on foam surfaces.
Low vision in east African blind school students: need for optical low vision services.
Silver, J; Gilbert, C E; Spoerer, P; Foster, A
1995-09-01
There is increasing awareness of the needs of children with low vision, particularly in developing countries where programmes of integrated education are being developed. However, appropriate low vision services are usually not available or affordable. The aims of this study were, firstly, to assess the need for spectacles and optical low vision devices in students with low vision in schools for the blind in Kenya and Uganda; secondly, to evaluate inexpensive locally produced low vision devices; and, finally, to evaluate simple methods of identifying those low vision students who could read N5 to N8 print after low vision assessment. A total of 230 students were examined (51 school and 16 university students in Uganda and 163 students in Kenya, aged 5-22 years), 147 of whom had a visual acuity of less than 6/18 to perception of light in the better eye at presentation. After refraction seven of the 147 achieved 6/18 or better. Eighty two (58.6%) of the 140 students with low vision (corrected visual acuity in the better eye of less than 6/18 to light perception) had refractive errors of more than 2 dioptres in the better eye, and 38 (27.1%) had more than 2 dioptres of astigmatism. Forty six per cent of students with low vision (n = 64) could read N5-N8 print unaided or with spectacles, as could a further 33% (n = 46) with low vision devices. Low vision devices were indicated in a total of 50 students (35.7%). The locally manufactured devices could meet two thirds of the need. A corrected distance acuity of 1/60 or better had a sensitivity of 99.1% and a specificity of 56.7% in predicting the ability to discern N8 print or better. The ability to perform at least two of the three simple tests of functional vision had a sensitivity of 95.5% and a specificity of 63.3% in identifying the students able to discern N8 or better.
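The reported screening accuracy follows from a standard 2x2 confusion matrix. The counts below are hypothetical, chosen only to be consistent with the abstract's figures (110 students able to read N5-N8, 140 students with low vision, sensitivity 99.1%, specificity 56.7%); they are not taken from the paper's tables:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity of a screening criterion.

    Sensitivity = TP / (TP + FN): fraction of students who can read N8 or
    better that the screen (e.g. corrected distance acuity of 1/60 or
    better) correctly flags.
    Specificity = TN / (TN + FP): fraction of students who cannot read N8
    that the screen correctly excludes.
    """
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported 99.1% / 56.7%:
# 110 readers (109 flagged, 1 missed), 30 non-readers (17 excluded, 13 flagged).
sens, spec = sensitivity_specificity(tp=109, fn=1, tn=17, fp=13)
```

The asymmetry (high sensitivity, modest specificity) is exactly what one wants from a simple field screen: it rarely misses a child who would benefit from low vision assessment, at the cost of some over-referral.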
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises from real-time and spatial precision and, on the other hand, ensure natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services regarding rehabilitation activities. The algorithmic processes involved during gesture recognition activity, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients, during functional recovery. Pilot examples of designed applications and preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan
2017-02-01
To make environmental perception by artificial intelligence more flexible, supporting software modules are needed that can automate the creation of language-specific syntax and carry out further analysis for relevant decisions based on semantic functions. Our proposed approach makes it possible to create pairs of formal rules from given sentences (in the case of natural languages) or statements (in the case of special languages) with the help of computer vision, speech recognition, or an editable text conversion system, for further automatic improvement. In other words, we have developed an approach that can significantly improve the automation of artificial intelligence training, which as a result gives the system a higher level of self-development, independent of its users. Based on this approach we have developed a software demo version, which includes the algorithm and software code implementing all of the above-mentioned components (computer vision, speech recognition, and editable text conversion). The program can work in multi-stream mode and simultaneously create a syntax based on information received from several sources.
Promoting vision and hearing aids use in an intensive care unit.
Zhou, Qiaoling; Faure Walker, Nicholas
2015-01-01
Vision and hearing impairments have long been recognised as modifiable risk factors for delirium.[1,2,3] Delirium in critically ill patients is a frequent complication (reported in as many as 60% to 80% of intensive care patients) and is associated with a three-fold increase in mortality and prolonged hospital stay.[1] Guidelines by the UK Clinical Pharmacy Association recommend minimising risk factors to prevent delirium, rather than treating it with pharmacological agents which may themselves cause delirium.[4] Addressing risk factors is a multi-system measure, including sleep-wake cycle correction, orientation, and the use of vision and hearing aids.[5] We designed an audit to survey the prevalence and availability of vision and hearing aids in the intensive care unit (ICU) of one university hospital. The baseline data demonstrated a high prevalence of need and low availability of vision/hearing aid use. We implemented changes to the ICU Innovian assessment system, which serves to remind nursing staff performing daily checks on delirium reduction measures. This has improved practice in promoting vision and hearing aid use in the ICU, as shown by re-audit at six months. Further amendments to the Innovian risk assessments have increased the rate of assessment to 100% and vision aid use to near 100%.
Özen Tunay, Zuhal; Çalışkan, Deniz; İdil, Aysun; Öztuna, Derya
2016-01-01
Objectives: To determine the clinical features and the distribution of diagnosis in partially sighted school-age children, to report the chosen low vision rehabilitation methods and to emphasize the importance of low vision rehabilitation. Materials and Methods: The study included 150 partially sighted children between the ages of 6 and 18 years. The distribution of diagnosis, accompanying ocular findings, visual acuity of the children both for near and distance with and without low vision devices, and the methods of low vision rehabilitation (for distance and for near) were determined. The demographic characteristics of the children and the parental consanguinity were recorded. Results: The mean age of children was 10.6 years and the median age was 10 years; 88 (58.7%) of them were male and 62 (41.3%) of them were female. According to distribution of diagnoses among the children, the most frequent diagnosis was hereditary fundus dystrophies (36%) followed by cortical visual impairment (18%). The most frequently used rehabilitation methods were: telescopic lenses (91.3%) for distance vision; magnifiers (38.7%) and telemicroscopic systems (26.0%) for near vision. A significant improvement in visual acuity both for distance and near vision were determined with low vision aids. Conclusion: A significant improvement in visual acuity can be achieved both for distance and near vision with low vision rehabilitation in partially sighted school-age children. It is important for ophthalmologists and pediatricians to guide parents and children to low vision rehabilitation. PMID:27800263
AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.
Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott
2014-11-01
This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.
Effects of the Abnormal Acceleratory Environment of Flight
1974-12-01
…vision. Return of arteriolar pulsation and temporary venous distension. Visual failure is a continuum from loss of peripheral vision (grey-out) to…distance); intrathoracic pressure is increased by strong muscular expiratory efforts against a partially closed glottis; and the contraction of…vigorous skeletal muscular tensing (Valsalva maneuver) can reduce +Gz tolerance and lead to an episode of unconsciousness at extremely low G levels
21 CFR 886.5870 - Low-vision telescope.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Low-vision telescope. 886.5870 Section 886.5870...) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5870 Low-vision telescope. (a) Identification. A low-vision telescope is a device that consists of an arrangement of lenses or mirrors intended for...
21 CFR 886.5870 - Low-vision telescope.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Low-vision telescope. 886.5870 Section 886.5870...) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5870 Low-vision telescope. (a) Identification. A low-vision telescope is a device that consists of an arrangement of lenses or mirrors intended for...
21 CFR 886.5870 - Low-vision telescope.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Low-vision telescope. 886.5870 Section 886.5870...) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5870 Low-vision telescope. (a) Identification. A low-vision telescope is a device that consists of an arrangement of lenses or mirrors intended for...
21 CFR 886.5870 - Low-vision telescope.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Low-vision telescope. 886.5870 Section 886.5870...) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5870 Low-vision telescope. (a) Identification. A low-vision telescope is a device that consists of an arrangement of lenses or mirrors intended for...
21 CFR 886.5870 - Low-vision telescope.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Low-vision telescope. 886.5870 Section 886.5870...) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5870 Low-vision telescope. (a) Identification. A low-vision telescope is a device that consists of an arrangement of lenses or mirrors intended for...
NASA Astrophysics Data System (ADS)
Zhang, Liandong; Bai, Xiaofeng; Song, De; Fu, Shencheng; Li, Ye; Duanmu, Qingduo
2015-03-01
Low-light-level night vision technology amplifies a low-light signal, carried by photons and photoelectrons, until it is bright enough to be seen by the naked eye. It was the invention of the micro-channel plate that made high-performance, miniaturized low-light-level night vision devices possible. The device considered here is a double-proximity-focusing low-light-level image intensifier, in which a micro-channel plate is placed close to both the photocathode and the phosphor screen. The advantages of proximity focusing are small size, light weight, low power consumption, absence of distortion, fast response, wide dynamic range, and so on. The micro-channel plate (with metal electrodes on both faces), the photocathode, and the phosphor screen are mounted parallel to one another. When the image intensifier operates, a voltage is applied between the photocathode and the input face of the micro-channel plate. Electrons emitted from the photocathode by incident photons move toward the micro-channel plate under the electric field of the 1st proximity-focusing region and are then multiplied inside the micro-channels. Once the distribution of electrostatic equipotential lines in the 1st proximity-focusing region is known, the trajectories of the emitted electrons can be calculated and simulated, and from these the resolution of the image tube can be determined. However, the electrostatic field and equipotential distributions are complex because of the many micro-channels in the micro-channel plate. This paper simulates the electrostatic distribution of the 1st proximity region in a double-proximity-focusing low-light-level image intensifier with the finite element analysis software Ansoft Maxwell 3D. The electrostatic field distributions of the 1st proximity region are compared as the micro-channel plate's pore size, spacing, and inclination angle are varied. We believe that the electron beam trajectories in the 1st proximity region can be simulated more accurately once these electrostatic field distributions are determined.
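As a zeroth-order check on such simulations, the 1st proximity-focusing gap can be approximated by a uniform parallel-plate field; the electron transit time and the lateral blur caused by transverse emission energy then follow in closed form. The gap, voltage, and emission energy below are assumed example values, not figures from the paper:

```python
import math

E_CHARGE = 1.602e-19   # electron charge, C
M_E = 9.109e-31        # electron mass, kg

def proximity_focus(gap_m, volts, eps_t_eV):
    """Parallel-plate estimate for the 1st proximity-focusing region.

    An electron starting at rest axially crosses a uniform field of
    strength V/d in time t = d * sqrt(2 m / (e V)); an initial transverse
    emission energy eps_t (in eV) gives a lateral blur radius
    r = 2 d * sqrt(eps_t / V), the classic proximity-focus resolution limit.
    """
    t = gap_m * math.sqrt(2.0 * M_E / (E_CHARGE * volts))
    r = 2.0 * gap_m * math.sqrt(eps_t_eV / volts)
    return t, r

# Assumed example: 0.2 mm gap, 200 V across it, 0.2 eV transverse energy.
t, r = proximity_focus(200e-6, 200.0, 0.2)
# Transit time is a few tens of picoseconds; blur is on the order of 10 um.
```

The blur formula shows directly why a smaller gap and a higher gap voltage improve resolution, which is the same trade-off the full field simulation explores around the micro-channel pores.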
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new cameras and scanner technologies underscores. In the future, the movement from monochrome imaging to color will hasten as machine vision users demand more knowledge about their product stream. As color has come to machine vision, certain requirements are placed on the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. These features become even more important when the image is converted to another color space, because some information is always lost when converting integer data to another form. Traditionally, color image processing has been much slower than gray-level image processing because of the three times greater data volume per image, and the three times more memory required. Advances in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently. In some cases image analysis can in fact be easier and faster on a color image than on a comparable gray-level image, because there is more information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity, white light is required to acquire images suitable for further processing or analysis. New developments in lighting technology are gradually providing solutions for color imaging.
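The information loss from converting integer color data between spaces is easy to demonstrate: re-quantising an 8-bit RGB pixel after a round trip through HSV does not always return the original values. A small sketch using Python's standard colorsys module (the sample pixels are arbitrary):

```python
import colorsys

def rgb8_roundtrip(r, g, b):
    """Convert an 8-bit RGB pixel to HSV and back, re-quantising each HSV
    channel to 8 bits in between, as an integer pipeline would."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h8, s8, v8 = round(h * 255), round(s * 255), round(v * 255)  # 8-bit storage
    r2, g2, b2 = colorsys.hsv_to_rgb(h8 / 255.0, s8 / 255.0, v8 / 255.0)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

# Some pixels survive the round trip unchanged, but quantisation of hue
# and saturation shifts others, so the total error over a sample is nonzero.
errors = 0
for rgb in [(200, 30, 30), (31, 97, 203), (120, 200, 77), (10, 10, 10)]:
    back = rgb8_roundtrip(*rgb)
    errors += sum(abs(a - b) for a, b in zip(rgb, back))
```

This is why the text stresses dynamic range and linearity: errors introduced at digitization are compounded, not recovered, by later color-space conversions.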
Khanna, Anjani
2012-01-01
A large number of glaucoma patients suffer from vision impairments that qualify as low vision. Additional difficulties associated with low vision include problems with glare, lighting, and contrast, which can make daily activities extremely challenging. This article elaborates on how low vision aids can help with various tasks that visually impaired glaucoma patients need to do each day, to take care of themselves and to lead an independent life. PMID:27990068
Wearable optical-digital assistive device for low vision students.
Afinogenov, Boris I; Coles, James B; Parthasarathy, Sailashri; Press-Williams, Jessica; Tsykunova, Ralina; Vasilenko, Anastasia; Narain, Jaya; Hanumara, Nevan C; Winter, Amos; Satgunam, PremNandhini
2016-08-01
People with low vision have limited residual vision that can be greatly enhanced through high levels of magnification. Current assistive technologies are tailored for far field or near field magnification but not both. In collaboration with L.V. Prasad Eye Institute (LVPEI), a wearable, optical-digital assistive device was developed to meet the near and far field magnification needs of students. The critical requirements, system architecture and design decisions for each module were analyzed and quantified. A proof-of-concept prototype was fabricated that can achieve magnification up to 8x and a battery life of up to 8 hours. Potential user evaluation with a Snellen chart showed identification of characters not previously discernible. Further feedback suggested that the system could be used as a general accessibility aid.
McMullan, Keri S; Butler, Mary
2018-05-09
Older adults with low vision are a growing population with rehabilitation needs, including support with community mobility to enable community participation. Some older adults with low vision choose to use mobility scooters to get around their community, but there is limited research about their use by people with low vision. This paper describes a pilot study and asks the question: what are the experiences of persons with low vision who use mobility scooters? The study gathered the experiences of four participants with low vision, aged 51 and over, who regularly use mobility scooters. Diverse methods were used, including a go-along, a semi-structured interview and a new measure of functional vision for mobility called the vision-related outcomes in orientation and mobility (VROOM). Four themes were found to describe the experiences: autonomy and well-being, accessibility, community interactions and self-regulation. Discussion and implications: This study was a pilot for a larger study examining self-regulation in scooter users. However, as roles emerge for health professionals around scooter use, the findings also provide evidence to inform practice, because they demonstrate the complex meanings and influences on performance involved in low vision mobility scooter use. Implications for rehabilitation: Scooter use supports autonomy, well-being and community connections for individuals with both mobility and visual impairments. Low vision scooter users demonstrate self-regulation of their scooter use to manage both their visual and environmental limitations. Issues of accessibility experienced by this sample affect a wider community of footpath users, emphasizing the need for councils to address inadequate infrastructure. Rehabilitation professionals can support their low vision clients' scooter use by acknowledging issues of accessibility and promoting self-regulation strategies to manage risks and barriers.
Toward a Computational Neuropsychology of High-Level Vision.
1984-08-20
...known as visual agnosia (also called "mindblindness"); this patient failed to recognize her nurses, got lost frequently when travelling familiar routes... visual agnosia are not blind: these patients can compare two shapes reliably when both are visible, but they cannot... visually recognize what an object is (although many can recognize objects by touch). This sort of agnosia has been well documented in the literature (see...
Effects of contour enhancement on low-vision preference and visual search.
Satgunam, Premnandhini; Woods, Russell L; Luo, Gang; Bronstad, P Matthew; Reynolds, Zachary; Ramachandra, Chaithanya; Mel, Bartlett W; Peli, Eli
2012-09-01
To determine whether image enhancement improves visual search performance and whether enhanced images were also preferred by subjects with vision impairment. Subjects (n = 24) with vision impairment (vision: 20/52 to 20/240) completed visual search and preference tasks for 150 static images that were enhanced to increase object contours' visual saliency. Subjects were divided into two groups and were shown three enhancement levels. Original and medium enhancements were shown to both groups. High enhancement was shown to group 1, and low enhancement was shown to group 2. For search, subjects pointed to an object that matched a search target displayed at the top left of the screen. An "integrated search performance" measure (area under the curve of cumulative correct response rate over search time) quantified performance. For preference, subjects indicated the preferred side when viewing the same image with different enhancement levels on side-by-side high-definition televisions. Contour enhancement did not improve performance in the visual search task. Group 1 subjects significantly (p < 0.001) rejected the High enhancement, and showed no preference for medium enhancement over the original images. Group 2 subjects significantly preferred (p < 0.001) both the medium and the low enhancement levels over original. Contrast sensitivity was correlated with both preference and performance; subjects with worse contrast sensitivity performed worse in the search task (ρ = 0.77, p < 0.001) and preferred more enhancement (ρ = -0.47, p = 0.02). No correlation between visual search performance and enhancement preference was found. However, a small group of subjects (n = 6) in a narrow range of mid-contrast sensitivity performed better with the enhancement, and most (n = 5) also preferred the enhancement. Preferences for image enhancement can be dissociated from search performance in people with vision impairment. 
Further investigations are needed to study the relationships between preference and performance for a narrow range of mid-contrast sensitivity where a beneficial effect of enhancement may exist.
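The "integrated search performance" measure described above (area under the curve of cumulative correct-response rate over search time) can be sketched as follows. The trial data and the uniform time binning below are made up for illustration; the study's exact computation is not specified here:

```python
def integrated_search_performance(response_times, correct, t_max):
    """Area under the cumulative correct-response-rate curve over [0, t_max],
    normalized to [0, 1]. Higher values mean more correct responses, sooner."""
    n = len(response_times)

    # Cumulative fraction of trials answered correctly by time t
    def cum_rate(t):
        return sum(1 for rt, ok in zip(response_times, correct) if ok and rt <= t) / n

    # Trapezoidal integration over [0, t_max], normalized by t_max
    steps = 200
    dt = t_max / steps
    area = sum((cum_rate(i * dt) + cum_rate((i + 1) * dt)) * 0.5 * dt
               for i in range(steps))
    return area / t_max

# Hypothetical trials: response times (s) and whether each response was correct
times = [1.0, 2.0, 4.0, 8.0]
hits = [True, True, False, True]
print(integrated_search_performance(times, hits, 10.0))  # about 0.475
```

Because the measure rewards both accuracy and speed in one number, two subjects with the same final accuracy can still be separated by how quickly their correct responses accumulate.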
NASA Technical Reports Server (NTRS)
Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.
1981-01-01
The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
Chatterjee, Pranab Kr; Bairagi, Debasis; Roy, Sudipta; Majumder, Nilay Kr; Paul, Ratish Ch; Bagchi, Sunil Ch
2005-07-01
A comparative double-blind placebo-controlled clinical trial of a herbal eye drop (itone) was conducted to find out its efficacy and safety in 120 patients with computer vision syndrome. Patients using computers for more than 3 hours continuously per day having symptoms of watering, redness, asthenia, irritation, foreign body sensation and signs of conjunctival hyperaemia, corneal filaments and mucus were studied. One hundred and twenty patients were randomly given either placebo, tears substitute (tears plus) or itone in identical vials with specific code number and were instructed to put one drop four times daily for 6 weeks. Subjective and objective assessments were done at bi-weekly intervals. In computer vision syndrome both subjective and objective improvements were noticed with itone drops. Itone drop was found significantly better than placebo (p<0.01) and almost identical results were observed with tears plus (difference was not statistically significant). Itone is considered to be a useful drug in computer vision syndrome.
Real-time tracking using stereo and motion: Visual perception for space robotics
NASA Technical Reports Server (NTRS)
Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann
1994-01-01
The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.
21 CFR 886.5540 - Low-vision magnifier.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Low-vision magnifier. 886.5540 Section 886.5540...) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5540 Low-vision magnifier. (a) Identification. A low-vision magnifier is a device that consists of a magnifying lens intended for use by a patient...
Low Vision Rehabilitation in a Nursing Home Population: The SEEING Study
ERIC Educational Resources Information Center
Deremeik, James; Broman, Aimee T.; Friedman, David; West, Sheila K.; Massof, Robert; Park, William; Bandeen-Roche, Karen; Frick, Kevin; Munoz, Beatriz
2007-01-01
As part of a study of 198 residents with low vision in 28 nursing homes, 91 participated in a low vision rehabilitation intervention. Among the rehabilitation participants, 78% received simple environmental modifications, such as lighting; 75% received low vision instruction; 73% benefited from staff training; and 69% received simple nonoptical…
Integrating Mobile Robotics and Vision with Undergraduate Computer Science
ERIC Educational Resources Information Center
Cielniak, G.; Bellotto, N.; Duckett, T.
2013-01-01
This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhr, L.
1987-01-01
This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.
Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System
ERIC Educational Resources Information Center
Xu, Richard Y. D.; Jin, Jesse S.
2007-01-01
This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipments. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…
Computer Vision Assisted Virtual Reality Calibration
NASA Technical Reports Server (NTRS)
Kim, W.
1999-01-01
A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
Sensor Control of Robot Arc Welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1983-01-01
The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.
Tracking by Identification Using Computer Vision and Radio
Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez
2013-01-01
We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds, the excellent localization of computer vision and the strong identity information provided by the radio system, and is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluating systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion of both systems significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time successfully preventing the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
NASA Technical Reports Server (NTRS)
Takallu, M. A.; Wong, D. T.; Uenking, M. D.
2002-01-01
An experimental investigation was conducted to study the effectiveness of modern flight displays in general aviation cockpits for mitigating Low Visibility Loss of Control and Controlled Flight Into Terrain accidents. A total of 18 General Aviation (GA) pilots holding a private pilot, single-engine land rating, with no instrument training beyond private pilot license requirements, were recruited to evaluate three different display concepts in a fixed-base flight simulator at the NASA Langley Research Center's General Aviation Work Station. Evaluation pilots were asked to continue flight from Visual Meteorological Conditions (VMC) into Instrument Meteorological Conditions (IMC) while performing a series of four basic precision maneuvers. During the experiment, relevant pilot/vehicle performance variables, pilot control inputs and physiological data were recorded. Human factors questionnaires and interviews were administered after each scenario. Qualitative and quantitative data were analyzed and the results are presented here. Pilot performance deviations from the established target values (errors) were computed and compared with the FAA Practical Test Standards. Results of the quantitative data indicate that evaluation pilots committed substantially fewer errors when using the Synthetic Vision Systems (SVS) displays than when using conventional instruments. Results of the qualitative data indicate that evaluation pilots perceived themselves to have a much higher level of situation awareness while using the SVS display concept.
Investigating Architectural Issues in Neuromorphic Computing
2012-05-01
...term grasp. Some of these include learning, vision, audition and olfaction, ability to navigate an environment, and goal seeking. These abilities have... Figure 14: Word/sentence-level accuracy versus ambiguity: (a) word accuracy vs. letter ambiguity, (b) sentence accuracy vs. letter ambiguity, and (c) sentence accuracy vs. word ambiguity.
Clinical characteristics and causes of visual impairment in a low vision clinic in northern Jordan
Bakkar, May M; Alzghoul, Eman A; Haddad, Mera F
2018-01-01
Aim The aim of the study was to identify causes of visual impairment among patients attending a low vision clinic in the north of Jordan and to study the relevant demographic characteristics of these patients. Subjects and methods The retrospective study was conducted through a review of clinical records of 135 patients who attended a low vision clinic in Irbid. Clinical characteristics of the patients were collected, including age, gender, primary cause of low vision, best corrected visual acuity, and current prescribed low vision aids. Descriptive statistics analysis using numbers and percentages were calculated to summarize categorical and nominal data. Results A total of 135 patients (61 [45.2%] females and 74 [54.8%] males) were recruited in the study. Mean age ± standard deviation for the study population was 24.53 ± 16.245 years; age range was 5–90 years. Of the study population, 26 patients (19.3%) had mild visual impairment, 61 patients (45.2%) had moderate visual impairment, 27 patients (20.0%) had severe visual impairment, and 21 patients (15.6%) were blind. The leading causes of visual impairment across all age groups were albinism (31.9%) and retinitis pigmentosa (RP) (18.5%). Albinism also accounted for the leading cause of visual impairment among the pediatric age group (0–15 years) while albinism, RP, and keratoconus were the primary causes of visual impairment for older patients. A total of 59 patients (43.7%) were given low vision aids either for near or distance. The only prescribed low vision aids for distances were telescopes. For near, spectacle-type low vision aid was the most commonly prescribed low vision aids. Conclusion Low vision services in Jordan are still very limited. A national strategy programme to increase awareness of low vision services should be implemented, and health care policies should be enforced to cover low vision aids through the national medical insurance. PMID:29662299
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. First, we use the eigenface algorithm to convert a sketch image into a synthesized sketch face image. Next, considering the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a convolutional neural network (CNN) is employed to improve the visual quality. To be specific, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the Linear Discriminant Analysis (LDA) algorithm to perform face sketch recognition on the synthesized face images before and after super-resolution. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the Support Vector Machine (SVM) algorithm improves from 65% to 69% and the recognition rate of LDA improves from 69% to 75%. Moreover, the synthesized face images after super-resolution not only better describe image details such as hair, nose and mouth, but also improve recognition accuracy effectively.
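The residual-mapping idea above, predicting only a correction that is added to a cheap upsampling instead of the full high-resolution image, can be sketched without any deep-learning framework. Nearest-neighbor upsampling and a placeholder residual function stand in for the paper's CNN; this is an illustration of the structure, not the paper's network:

```python
def nearest_upsample(img, s):
    """Nearest-neighbor upsampling of a 2-D list-of-lists image by factor s."""
    return [[img[i // s][j // s] for j in range(len(img[0]) * s)]
            for i in range(len(img) * s)]

def super_resolve(low_res, residual_net, scale=2):
    """Residual-learning super-resolution: output = cheap upsampling plus a
    learned residual. `residual_net` is any callable on the upsampled image."""
    up = nearest_upsample(low_res, scale)
    res = residual_net(up)
    return [[up[i][j] + res[i][j] for j in range(len(up[0]))]
            for i in range(len(up))]

# With a zero residual net, the output equals the upsampled input
zero_net = lambda im: [[0.0] * len(im[0]) for _ in im]
print(super_resolve([[1.0, 2.0], [3.0, 4.0]], zero_net))
```

The design choice matters because the residual is mostly zero away from edges and texture, so the network only has to model a sparse, small-magnitude signal, which is what makes the structure easier to optimize.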
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III
2007-01-01
NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-base piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and their impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, in itself, improve runway incursion detection unless specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to transition abruptly from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.
A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors
Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres
2016-01-01
Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382
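A minimal sketch of single-scan, run-based blob labeling in the same spirit as the paper follows. Union-find over row runs stands in for the paper's linked-list node structure, which is not reproduced here; the point is that blobs are assembled in one image scan with memory proportional to the runs, not the pixels:

```python
def detect_blobs(image):
    """Label 8-connected blobs in a binary image (list of rows of 0/1)
    in a single scan, merging runs of consecutive foreground pixels."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    runs = []        # all runs seen: (row, start, end, label)
    prev_runs = []   # runs from the previous row: (start, end, label)
    next_label = 0
    for y, row in enumerate(image):
        cur_runs = []
        x, w = 0, len(row)
        while x < w:
            if row[x]:
                start = x
                while x < w and row[x]:
                    x += 1
                label = next_label
                next_label += 1
                parent[label] = label
                # Merge with 8-connected runs in the previous row
                for ps, pe, pl in prev_runs:
                    if ps <= x and pe >= start - 1:
                        union(pl, label)
                cur_runs.append((start, x - 1, label))
                runs.append((y, start, x - 1, label))
            else:
                x += 1
        prev_runs = cur_runs
    # Group pixels by root label to produce the final blobs
    blobs = {}
    for y, s, e, l in runs:
        blobs.setdefault(find(l), []).extend((y, x) for x in range(s, e + 1))
    return list(blobs.values())

img = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
]
print(len(detect_blobs(img)))  # 3
```

Only the previous row's runs are kept between iterations, which is the property that makes this style of algorithm attractive on memory-constrained FPGA platforms like the one in the paper.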
Computer interfaces for the visually impaired
NASA Technical Reports Server (NTRS)
Higgins, Gerry
1991-01-01
Information access via computer terminals extends to blind and low vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology to persons with a vision-related handicap are detailed. First, research was conducted into the most effective means of integrating existing adaptive technologies into information systems, with the goal of combining off-the-shelf products with adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile interfaces to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public-domain architecture of X Windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.
Are children with low vision adapted to the visual environment in classrooms of mainstream schools?
Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi
2018-02-01
The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. The medical records of 110 children (5-17 years) seen in the low vision clinic during a 1-year period (2015) at a tertiary care center in south India were extracted. The visual function levels of the children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty viewing the chalkboard; common strategies used for better visibility included copying from friends (47%) and moving closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of the visual task (height of lowercase letter writing on the chalkboard) recommended to be 3 cm. For the 3/60-6/60 range, with a visual task size of 4 cm, the maximum viewing distance is recommended to be 85 cm to 1.7 m. Simple modifications of visual task size and seating arrangements can give children with low vision better chalkboard visibility and reduced visual stress, helping them manage in mainstream schools.
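The geometry behind such task-size and viewing-distance recommendations is simple visual-angle arithmetic. The sketch below assumes the standard convention that a Snellen letter subtends 5 arcminutes at the acuity threshold; the `reserve` multiplier (an acuity-reserve margin often applied in low vision practice) is an assumption here, since the study's exact margin is not stated:

```python
import math

ARCMIN = math.pi / (180 * 60)  # radians per arcminute

def min_letter_height(distance_m, snellen_num, snellen_den, reserve=1.0):
    """Smallest letter height (m) resolvable at `distance_m` for an acuity of
    snellen_num/snellen_den (e.g. 6/60), assuming a Snellen letter subtends
    5 arcmin at threshold, scaled by an optional acuity-reserve multiplier."""
    mar_arcmin = snellen_den / snellen_num     # minimum angle of resolution
    angle = 5 * mar_arcmin * reserve * ARCMIN  # whole-letter visual angle
    return 2 * distance_m * math.tan(angle / 2)

# A 6/60 viewer at 1.7 m from the chalkboard resolves letters of about 2.5 cm,
# so a 4 cm task size at that distance leaves some acuity reserve
print(round(min_letter_height(1.7, 6, 60) * 100, 1), "cm")
```

Run at the study's distances, the threshold sizes come out somewhat below the recommended 3-4 cm, which is consistent with the recommendations building in a comfort margin above bare threshold.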
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using a UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and higher matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on the improved algorithms was implemented on a Spartan-6 FPGA. In comparative experiments, the system using the improved algorithms outperformed the system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
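The underlying range-finding geometry is the standard stereo triangulation relation Z = f·B/d for a rectified camera pair; the focal length, baseline, and disparity below are illustrative values, not the system's calibration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: depth Z = f * B / d, with the focal
    length f in pixels, baseline B in meters, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. 1000-px focal length, 20 cm baseline, 25-px disparity
print(depth_from_disparity(1000.0, 0.2, 25.0))  # 8.0 meters
```

Because depth error grows roughly as Z²/(f·B) per pixel of disparity error, matching accuracy matters most at long range, which is why the weighted matching improvements above translate directly into better range data for power line inspection.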
Color vision impairment with low-level methylmercury exposure of an Amazonian population - Brazil.
Feitosa-Santana, Claudia; Souza, Givago da Silva; Sirius, Esaú Ventura Pupo; Rodrigues, Anderson Raiol; Cortes, Maria Izabel Tentes; Silveira, Luiz Carlos de Lima; Ventura, Dora Fix
2018-05-01
Land exploitation that follows deforestation and mining can result in soil erosion and the release of mercury to the waters of rivers in the Amazon Basin. Inorganic mercury is methylated by bacteria that are present in the environment and it serves as a source of human contamination through fish consumption in the form of methylmercury. Long-term exposure to low-level methylmercury in the riverside populations can lead to nervous system alterations, some of which are visual impairments such as loss of luminance contrast sensitivity, restricted visual fields and color vision defects. The present study sought to examine color vision in a group of adults living in the central Brazilian Amazon who were exposed to low-levels of methylmercury. Total Hg concentrations were measured from hair collected at the time of the testing. The D15d and FM100 color vision arrangement tests were applied in a population of 36 (22 males) and 42 (25 males), respectively. Controls were healthy volunteers from the cities of São Paulo for the D15d and Belém for the FM100. There was a statistically significant difference in performance between those who were exposed and controls for both tests (p < 0.01 and p < 0.0001, respectively, Mann-Whitney U test), meaning that adults living in this region of the Amazon made more mistakes on both tests when compared to controls. A linear regression was performed using Hg concentrations and test scores. Hg concentrations accounted for 7% and 2% of color D15d and FM100 arrangement test errors, respectively. Although other studies have previously found color vision impairment in the Amazon, they tested inhabitants on the east side of the Amazon, while this study was conducted in the central Amazon region and it is the first study in a population with no direct contact with the Hg source of contamination. 
These results suggest that long-term exposure to low-level methylmercury in riverside populations is more widely spread in the Amazon Basin than previously reported. This information is needed to implement public health policies that will ensure a safer environment for the Amazonian population. Copyright © 2018 Elsevier B.V. All rights reserved.
Colour calibration of a laboratory computer vision system for quality evaluation of pre-sliced hams.
Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul
2009-01-01
Due to the high variability and complex colour distribution in meats and meat products, colour signal calibration of any computer vision system used for colour quality evaluation is an essential condition for objective and consistent analyses. This paper compares two methods for CIE colour characterization using a computer vision system (CVS) based on digital photography: the polynomial transform procedure and the transform proposed by the sRGB standard. It also presents a procedure for evaluating the colour appearance and the presence of pores and fat-connective tissue on pre-sliced hams made from pork, turkey and chicken. Our results showed high precision in colour matching for device characterization when the polynomial transform was used to match the CIE tristimulus values, in comparison with the sRGB standard approach, as indicated by their ΔE(ab)(∗) values. The [3×20] polynomial transfer matrix yielded a modelling accuracy averaging below 2.2 ΔE(ab)(∗) units. Using the sRGB transform, high variability was observed among the computed ΔE(ab)(∗) values (8.8±4.2). The calibrated laboratory CVS, implemented with a low-cost digital camera, exhibited reproducible colour signals over a wide range of colours, was capable of pinpointing regions-of-interest, and allowed the extraction of quantitative information from the overall ham slice surface with high accuracy. The extracted colour and morphological features showed potential for characterizing the appearance of ham slice surfaces. CVS is a tool that can objectively specify the colour and appearance properties of non-uniformly coloured commercial ham slices.
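Polynomial characterization works by regressing device RGB values, expanded into polynomial terms, onto measured CIE tristimulus values. The sketch below uses a hypothetical 10-term expansion for brevity (the paper's transfer matrix is [3×20]) and ordinary least squares:

```python
import numpy as np

def poly_features(rgb):
    # Hypothetical 10-term expansion: 1, R, G, B, RG, RB, GB, R^2, G^2, B^2
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b, r * r, g * g, b * b], axis=1)

def fit_polynomial_transform(rgb, xyz):
    """Least-squares fit of a polynomial RGB -> XYZ transform."""
    coeffs, *_ = np.linalg.lstsq(poly_features(rgb), xyz, rcond=None)
    return coeffs  # shape (10, 3)

def apply_transform(coeffs, rgb):
    return poly_features(rgb) @ coeffs

# Sanity check on synthetic data from a purely linear device model,
# which the expansion contains as a special case
rng = np.random.default_rng(0)
rgb = rng.random((100, 3))
true_linear = rng.random((3, 3))
xyz = rgb @ true_linear
coeffs = fit_polynomial_transform(rgb, xyz)
err = np.abs(apply_transform(coeffs, rgb) - xyz).max()
```

In practice the fit uses colorimeter measurements of a chart of patches, and the quality of the characterization is reported as ΔE*ab on held-out patches, as in the paper.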
Toothguide Trainer tests with color vision deficiency simulation monitor.
Borbély, Judit; Varsányi, Balázs; Fejérdy, Pál; Hermann, Péter; Jakstat, Holger A
2010-01-01
The aim of this study was to evaluate whether simulated severe red and green color vision deficiency (CVD) influenced color matching results and to investigate whether training with the Toothguide Trainer (TT) computer program enabled better color matching results. A total of 31 color-normal dental students participated in the study. Every participant had to pass the Ishihara test; participants with a red/green color vision deficiency were excluded. A lecture on tooth color matching was given, and individual training with TT was performed. To measure individual tooth color matching results in normal and color-deficient display modes, the TT final exam was displayed on a calibrated monitor that served as a hardware-based method of simulating protanopy and deuteranopy. Data from the TT final exams were collected in normal and in severe red and green CVD-simulating monitor display modes. Color difference values for each participant in each display mode were computed (∑ΔE(ab)(*)), and the respective means and standard deviations were calculated. Student's t-test was used for statistical evaluation. Participants made larger ΔE(ab)(*) errors in the severe color vision deficient display modes than in the normal monitor mode. TT tests showed a significant (p<0.05) difference in the tooth color matching results of the severe green color vision deficiency simulation mode compared to the normal vision mode. Students' shade matching results were significantly better after training (p=0.009). Computer-simulated severe color vision deficiency mode resulted in significantly worse color matching quality compared to normal color vision mode. The Toothguide Trainer computer program improved color matching results. Copyright © 2010 Elsevier Ltd. All rights reserved.
A computer simulation experiment of supervisory control of remote manipulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mccandlish, S. G.
1966-01-01
A computer simulation of a remote manipulation task and a rate-controlled manipulator is described. Some low-level automatic decision making ability which could be used at the operator's discretion to augment his direct continuous control was built into the manipulator. Experiments were made on the effect of transmission delay, dynamic lag, and intermittent vision on human manipulative ability. Delay does not make remote manipulation impossible. Intermittent visual feedback, and the absence of rate information in the display presented to the operator do not seem to impair the operator's performance. A small-capacity visual feedback channel may be sufficient for remote manipulation tasks, or one channel might be time-shared between several operators. In other experiments the operator called in sequence various on-site automatic control programs of the machine, and thereby acted as a supervisor. The supervisory mode of operation has some advantages when the task to be performed is difficult for a human controlling directly.
ERIC Educational Resources Information Center
Tucker, Laurel A.
2004-01-01
Adults with low vision who seek clinical low vision services need to be able to read (that is, to interpret or understand words, numbers, and symbols in print meaningfully). Reading difficulties that adults encounter during low vision therapy may be directly connected to a visual impairment or may be related to other reading problems, such as…
[Meibomian gland dysfunction in computer vision syndrome].
Pimenidi, M K; Polunin, G S; Safonova, T N
2010-01-01
This article reviews the etiology and pathogenesis of dry eye syndrome due to meibomian gland dysfunction (MGD). It is shown that blink rate influences meibomian gland function and the development of computer vision syndrome. Current options for the diagnosis and treatment of MGD are presented.
Project Magnify: Increasing Reading Skills in Students with Low Vision
ERIC Educational Resources Information Center
Farmer, Jeanie; Morse, Stephen E.
2007-01-01
Modeled after Project PAVE (Corn et al., 2003) in Tennessee, Project Magnify is designed to test the idea that students with low vision who use individually prescribed magnification devices for reading will perform as well as or better than students with low vision who use large-print reading materials. Sixteen students with low vision were…
Integrated navigation, flight guidance, and synthetic vision system for low-level flight
NASA Astrophysics Data System (ADS)
Mehler, Felix E.
2000-06-01
Future military transport aircraft will require a new approach to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop, and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance and synthetic vision system, based on digital terrain data, has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D flight guidance and the display components, which comprise a Head-up and a Head-down Display with synthetic vision. This paper presents the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS) and the results of the flight-test campaign.
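A standard way to fuse independent navigation sensors, shown here purely for illustration (the abstract does not describe Dasa-M's actual fusion scheme), is inverse-variance weighting, which yields an estimate with lower variance than any single sensor:

```python
def fuse_inverse_variance(estimates, variances):
    """Inverse-variance weighted fusion of independent estimates.
    Returns (fused_estimate, fused_variance); the fused variance is
    1 / sum(1/var_i), always below the smallest input variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    return fused, 1.0 / total

# e.g. altitude (m) from three sensors with different noise levels
est, var = fuse_inverse_variance([102.0, 100.5, 101.0], [4.0, 1.0, 2.0])
```

The fused estimate leans toward the most precise sensor, which is the basic reason a three-sensor fusion can feed both the 4D guidance and the synthetic-vision displays more reliably than any single source.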
Analog "neuronal" networks in early vision.
Koch, C; Marroquin, J; Yuille, A
1986-01-01
Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
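The network dynamics described above amount to gradient descent on an energy functional. As a minimal, purely illustrative sketch (a quadratic membrane without the paper's discontinuity-preserving terms), the following reconstructs a 1-D surface from sparse samples by relaxing E(u) = Σ m_i (u_i − d_i)² + λ Σ (u_{i+1} − u_i)²:

```python
import numpy as np

def reconstruct(data, mask, lam=0.1, step=0.3, iters=20000):
    """Gradient descent on a membrane energy: quadratic data terms
    where mask == 1, quadratic smoothness everywhere."""
    u = np.zeros_like(data)
    for _ in range(iters):
        grad = 2 * mask * (u - data)
        grad[:-1] += 2 * lam * (u[:-1] - u[1:])  # forward-difference terms
        grad[1:] += 2 * lam * (u[1:] - u[:-1])
        u -= step * grad
    return u

# Sparse samples of a linear ramp; the relaxed "network" interpolates it
truth = np.linspace(0.0, 1.0, 21)
mask = np.zeros(21)
mask[[0, 10, 20]] = 1.0
u = reconstruct(truth * mask, mask)
```

An analog network performs the same relaxation in continuous time, with currents playing the role of the gradient; preserving discontinuities requires replacing the quadratic smoothness term with a nonconvex one, which is exactly the problem the paper's nonlinear "neurons" address.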
Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera
NASA Astrophysics Data System (ADS)
Dziri, Aziz; Duranton, Marc; Chapuis, Roland
2016-07-01
Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
[Quality of life of visually impaired adults after low-vision intervention: a pilot study].
Fintz, A-C; Gottenkiene, S; Speeg-Schatz, C
2011-10-01
To demonstrate the benefits of a low-vision intervention on the quality of life of visually disabled adults. The survey was proposed to patients who sought a low-vision intervention at the Colmar and Strasbourg hospital centres over a period of 9 months. Patients who agreed to participate were asked to complete the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ-25) in interview format by telephone, once after they had attended the first meeting and again 2 months after the end of the low-vision intervention. The low-vision intervention led to overall improvement as judged by the 25 items of the questionnaire. Some items involving visual function and psychological issues showed significant benefits: the patients reported a more optimistic score concerning their general vision, reported performing near-vision activities better, and felt somewhat more autonomous. More than mainstream psychological counselling, low-vision services help patients cope with visual disabilities in their daily life. The low-vision intervention improves the physical and technical issues necessary for retaining autonomy in daily life. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Friedman, Robert J; Gutkowicz-Krusin, Dina; Farber, Michele J; Warycha, Melanie; Schneider-Kels, Lori; Papastathis, Nicole; Mihm, Martin C; Googe, Paul; King, Roy; Prieto, Victor G; Kopf, Alfred W; Polsky, David; Rabinovitz, Harold; Oliviero, Margaret; Cognetta, Armand; Rigel, Darrell S; Marghoob, Ashfaq; Rivers, Jason; Johr, Robert; Grant-Kels, Jane M; Tsao, Hensin
2008-04-01
To evaluate the performance of dermoscopists in diagnosing small pigmented skin lesions (diameter ≤ 6 mm) compared with an automatic multispectral computer-vision system. Blinded comparison study. Dermatologic hospital-based clinics and private practice offices. From a computerized skin imaging database of 990 small (≤ 6-mm) pigmented skin lesions, all 49 melanomas from 49 patients were included in this study. Fifty randomly selected nonmelanomas from 46 patients served as a control. Ten dermoscopists independently examined dermoscopic images of 99 pigmented skin lesions and decided whether they identified the lesions as melanoma and whether they would recommend biopsy to rule out melanoma. Diagnostic and biopsy sensitivity and specificity were computed and then compared with the results of the computer-vision system. Dermoscopists were able to correctly identify small melanomas with an average diagnostic sensitivity of 39% and a specificity of 82%, and recommended small melanomas for biopsy with a sensitivity of 71% and a specificity of 49%, with only fair interobserver agreement (kappa = 0.31 for diagnosis and 0.34 for biopsy). In comparison, in recommending biopsy to rule out melanoma, the computer-vision system achieved 98% sensitivity and 44% specificity. Differentiation of small melanomas from small benign pigmented lesions challenges even expert physicians. Computer-vision systems can facilitate early detection of small melanomas and may limit the number of biopsies performed on benign lesions to rule out melanoma.
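Sensitivity and specificity figures in studies like this come directly from the confusion counts of the biopsy decisions. A generic sketch, using hypothetical counts chosen only to be consistent with the computer-vision system's reported 98% sensitivity and 44% specificity (the study's raw tallies are not reproduced here):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 48 of 49 melanomas flagged for biopsy,
# 22 of 50 benign lesions correctly passed
sens, spec = sensitivity_specificity(tp=48, fn=1, tn=22, fp=28)
```

The trade-off visible here is the usual one for screening tools: near-perfect sensitivity is bought at the cost of specificity, i.e. many benign lesions still get flagged.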
Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder.
Kheradpisheh, Saeed R; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée
2016-01-01
View-invariant object recognition is a challenging problem that has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g., 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best models for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs on a view-invariant object recognition task using the same set of images and controlling the kinds of transformation (position, scale, rotation in plane, and rotation in depth) as well as their magnitude, which we call "variation level." We used four object categories: car, ship, motorcycle, and animal. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs (proposed respectively by Hinton's group and Zisserman's group) on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position (much easier). This suggests that DCNNs would be reasonable models of human feed-forward vision. In addition, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research.
NASA Astrophysics Data System (ADS)
Paar, G.
2009-04-01
At present, mainly the US has realized planetary space missions with an essential robotics background. Joining institutions, companies and universities from established groups in Europe with two relevant players from the US, the EC FP7 project PRoVisG started in autumn 2008 to demonstrate the European ability to realize high-level processing of robotic vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for Robotic Vision Ground Processing. State-of-the-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present and future robotic space missions to the Moon and the planets. This will lead to a significant enhancement of the scientific, technologic and educational outcome of such missions. We report on the main PRoVisG objectives and the development status: - Past, present and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing. - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG Consortium Partners will be completed by and combined with modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs). - A Web GIS is developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context. - Towards the end of the 39-month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain.
European taxpayers will be able to monitor the imaging and vision processing in a Mars-similar environment, thus gaining an insight into the complexity and methods of processing, the potential and decision making of scientific exploitation of such data and, not least, the elegance and beauty of the resulting image products and their visualization. - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to students who are the future providers of European science and technology, inside and outside the space domain.
Computer Vision Tool and Technician as First Reader of Lung Cancer Screening CT Scans.
Ritchie, Alexander J; Sanghera, Calvin; Jacobs, Colin; Zhang, Wei; Mayo, John; Schmidt, Heidi; Gingras, Michel; Pasian, Sergio; Stewart, Lori; Tsai, Scott; Manos, Daria; Seely, Jean M; Burrowes, Paul; Bhatia, Rick; Atkar-Khattra, Sukhinder; van Ginneken, Bram; Tammemagi, Martin; Tsao, Ming Sound; Lam, Stephen
2016-05-01
To implement a cost-effective low-dose computed tomography (LDCT) lung cancer screening program at the population level, accurate and efficient interpretation of a large volume of LDCT scans is needed. The objective of this study was to evaluate a workflow strategy to identify abnormal LDCT scans in which a technician assisted by computer vision (CV) software acts as a first reader, with the aim of improving the speed, consistency, and quality of scan interpretation. Without knowledge of the diagnosis, a technician reviewed 828 randomly batched scans (136 with lung cancers, 556 with benign nodules, and 136 without nodules) from the baseline Pan-Canadian Early Detection of Lung Cancer Study that had been annotated by the CV software CIRRUS Lung Screening (Diagnostic Image Analysis Group, Nijmegen, The Netherlands). The scans were classified as either normal (no nodules ≥1 mm or benign nodules) or abnormal (nodules or other abnormality). The results were compared with the diagnostic interpretation by Pan-Canadian Early Detection of Lung Cancer Study radiologists. The overall sensitivity and specificity of the technician in identifying an abnormal scan were 97.8% (95% confidence interval: 96.4-98.8) and 98.0% (95% confidence interval: 89.5-99.7), respectively. Of the 112 prevalent nodules that were found to be malignant in follow-up, 92.9% were correctly identified by the technician plus CV compared with 84.8% by the study radiologists. The average time taken by the technician to review a scan after CV processing was 208 ± 120 seconds. Prescreening CV software and a technician as first reader is a promising strategy for improving the consistency and quality of screening interpretation of LDCT scans. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Visual function at altitude under night vision assisted conditions.
Vecchi, Diego; Morgagni, Fabio; Guadagno, Anton G; Lucertini, Marco
2014-01-01
Hypoxia, even mild, is known to produce negative effects on visual function, including decreased visual acuity and contrast sensitivity, mostly in low light. This is of special concern when night vision devices (NVDs) are used during flight, because they also provide poor images in terms of resolution and contrast. While wearing NVDs in low light conditions, 16 healthy male aviators were exposed to a simulated altitude of 12,500 ft in a hypobaric chamber. Snellen visual acuity decreased in normal light from 28.5 +/- 4.2/20 (normoxia) to 37.2 +/- 7.4/20 (hypoxia) and, in low light, from 33.8 +/- 6.1/20 (normoxia) to 42.2 +/- 8.4/20 (hypoxia), both statistically significant changes. A nonsignificant association was found between blood oxygen saturation and visual acuity. No changes occurred in contrast sensitivity. Our data demonstrate that mild hypoxia is capable of affecting visual acuity in the photopic/high-mesopic range of NVD-aided vision. This may be due to several reasons, including the sensitivity to hypoxia of photoreceptors and other retinal cells. Contrast sensitivity is possibly preserved under NVD-aided vision due to its dependency on the goggles' gain.
2014-08-12
Nolan Warner, Mubarak Shah. Tracking in Dense Crowds Using Prominence and Neighborhood Motion Concurrence, IEEE Transactions on Pattern Analysis...of computer vision, computer graphics and evacuation dynamics by providing a common platform, and provides...areas that include Computer Vision, Computer Graphics, and Pedestrian Evacuation Dynamics. Despite the
Computer vision syndrome: a review of ocular causes and potential treatments.
Rosenfield, Mark
2011-09-01
Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also on occupational productivity, since between 64% and 90% of computer users experience visual symptoms, which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes of this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
An Enduring Dialogue between Computational and Empirical Vision.
Martinez-Conde, Susana; Macknik, Stephen L; Heeger, David J
2018-04-01
In the late 1970s, key discoveries in neurophysiology, psychophysics, computer vision, and image processing had reached a tipping point that would shape visual science for decades to come. David Marr and Ellen Hildreth's 'Theory of edge detection', published in 1980, set out to integrate the newly available wealth of data from behavioral, physiological, and computational approaches in a unifying theory. Although their work had wide and enduring ramifications, their most important contribution may have been to consolidate the foundations of the ongoing dialogue between theoretical and empirical vision science. Copyright © 2018 Elsevier Ltd. All rights reserved.
A Model for Integrating Low Vision Services into Educational Programs.
ERIC Educational Resources Information Center
Jose, Randall T.; And Others
1988-01-01
A project integrating low-vision services into children's educational programs comprised four components: teacher training, functional vision evaluations for each child, a clinical examination by an optometrist, and follow-up visits with the optometrist to evaluate the prescribed low-vision aids. Educational implications of the project and project…
Bi-sparsity pursuit for robust subspace recovery
Bian, Xiao; Krim, Hamid
2015-09-01
The success of sparse models in computer vision and machine learning in many real-world applications may be attributed, in large part, to the fact that many high-dimensional data are distributed in a union of low-dimensional subspaces. The underlying structure may, however, be adversely affected by sparse errors, thus inducing additional complexity in recovering it. In this paper, we propose a bi-sparse model as a framework to investigate and analyze this problem, and provide, as a result, a novel algorithm to recover the union of subspaces in the presence of sparse corruptions. We additionally demonstrate the effectiveness of our method by experiments on real-world vision data.
Wong, Tien Y; Sun, Jennifer; Kawasaki, Ryo; Ruamviboonsuk, Paisan; Gupta, Neeru; Lansingh, Van Charles; Maia, Mauricio; Mathenge, Wanjiku; Moreker, Sunil; Muqit, Mahi M K; Resnikoff, Serge; Verdaguer, Juan; Zhao, Peiquan; Ferris, Frederick; Aiello, Lloyd P; Taylor, Hugh R
2018-05-24
Diabetes mellitus (DM) is a global epidemic and affects populations in both developing and developed countries, with differing health care and resource levels. Diabetic retinopathy (DR) is a major complication of DM and a leading cause of vision loss in working middle-aged adults. Vision loss from DR can be prevented with broad-level public health strategies, but these need to be tailored to a country's and population's resource setting. Designing DR screening programs, with appropriate and timely referral to facilities with trained eye care professionals, and using cost-effective treatment for vision-threatening levels of DR can prevent vision loss. The International Council of Ophthalmology Guidelines for Diabetic Eye Care 2017 summarize and offer a comprehensive guide for DR screening, referral and follow-up schedules for DR, and appropriate management of vision-threatening DR, including diabetic macular edema (DME) and proliferative DR, for countries with high- and low- or intermediate-resource settings. The guidelines include updated evidence on screening and referral criteria, the minimum requirements for a screening vision and retinal examination, follow-up care, and management of DR and DME, including laser photocoagulation and appropriate use of intravitreal anti-vascular endothelial growth factor inhibitors and, in specific situations, intravitreal corticosteroids. Recommendations for management of DR in patients during pregnancy and with concomitant cataract also are included. The guidelines offer suggestions for monitoring outcomes and indicators of success at a population level. Copyright © 2018 American Academy of Ophthalmology. All rights reserved.
Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search
Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.
2012-01-01
Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766
Peripheral vision of youths with low vision: motion perception, crowding, and visual search.
Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S
2012-08-24
Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Although youths with low vision exhibited normal motion discrimination in the fovea, their motion sensitivity deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.
Freeman, William R.; Van Natta, Mark L.; Jabs, Douglas; Sample, Pamela A.; Sadun, Alfredo A.; Thorne, Jennifer; Shah, Kayur H.; Holland, Gary N.
2008-01-01
Purpose: To evaluate the prevalence and risk factors for vision loss in patients with clinical or immunologic AIDS without infectious retinitis. Design: A prospective multicentered cohort study of patients with AIDS. Methods: 1,351 patients (2,671 eyes) at 19 clinical trials centers diagnosed with AIDS but without major ocular complications of HIV. Standardized measurements of visual acuity, automated perimetry, and contrast sensitivity were analyzed and correlated with measurements of patients’ health and medical data relating to HIV infection. We evaluated correlations between vision function testing and HIV-related risk factors and medical testing. Results: There were significant (p<0.05) associations between measures of decreasing vision function and indices of increasing disease severity including Karnofsky score and hemoglobin. A significant relationship was seen between low contrast sensitivity and decreasing levels of CD4+ T-cell count. Three percent of eyes had a visual acuity worse than 20/40 Snellen equivalents, which was significantly associated with a history of opportunistic infections and low Karnofsky score. When compared to external groups with normal vision, 39% of eyes had abnormal mean deviation on automated perimetry, 33% had abnormal pattern standard deviation, and 12% of eyes had low contrast sensitivity. Conclusions: This study confirms that visual dysfunction is common in patients with AIDS but without retinitis. The most prevalent visual dysfunction is loss of visual field; nearly 40% of patients have some abnormal visual field. There is an association between general disease severity and less access to care and vision loss. The pathophysiology of this vision loss is unknown but is consistent with retinovascular disease or optic nerve disease. PMID:18191094
Computational models of human vision with applications
NASA Technical Reports Server (NTRS)
Wandell, B. A.
1985-01-01
Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision. The spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as a tool of image processing, and it is suggested that it could be used in signal processing applications and image processing.
Design of a reading test for low-vision image warping
NASA Astrophysics Data System (ADS)
Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. Shane
1993-08-01
NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision -- maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.
Design of a reading test for low vision image warping
NASA Technical Reports Server (NTRS)
Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. S.
1993-01-01
NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision - maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We will describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.
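The geometric remapping idea described in the two records above can be illustrated with a minimal sketch. Nothing is assumed about the Programmable Remapper's actual warp tables; this is a generic polar-coordinate resampler in NumPy in which a caller-supplied radius function displaces central content outward, the basic operation behind a scotoma-avoiding warp.

```python
import numpy as np

def radial_remap(image, warp):
    """Resample a grayscale `image` through a per-pixel coordinate map.

    `warp` maps each output radius (measured from the image centre) to a
    source radius, so content at the centre can be pushed outward --
    the core idea behind image remapping for a central field defect.
    Nearest-neighbour sampling is used for simplicity.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dy, dx)                 # output-pixel radius
    theta = np.arctan2(dy, dx)           # output-pixel angle
    r_src = warp(r)                      # warped source radius
    src_y = np.clip(np.round(cy + r_src * np.sin(theta)), 0, h - 1).astype(int)
    src_x = np.clip(np.round(cx + r_src * np.cos(theta)), 0, w - 1).astype(int)
    return image[src_y, src_x]

# Toy warp: fill the central region from a ring 10 pixels farther out.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
warped = radial_remap(img, lambda r: r + 10)
```

A real prosthesis would interpolate rather than round to the nearest source pixel, but the coordinate-map structure is the same.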
Deep learning-based artificial vision for grasp classification in myoelectric hands.
Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush
2017-06-01
Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects' performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback.
The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
Deep learning-based artificial vision for grasp classification in myoelectric hands
NASA Astrophysics Data System (ADS)
Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush
2017-06-01
Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects’ performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance.
The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
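The trained network and its weights are not given in these records, but the forward pass they describe (convolutional features reduced to probabilities over four grasp classes) can be sketched in miniature with NumPy. The filter count, image size, and random weights below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
GRASPS = ["pinch", "tripod", "palmar wrist neutral", "palmar wrist pronated"]

def conv2d_valid(img, kernels):
    """Naive valid-mode 2-D filtering: (H, W) x (K, 3, 3) -> (K, H-2, W-2)."""
    k, (h, w) = kernels.shape[0], img.shape
    out = np.zeros((k, h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[:, i, j] = (kernels * img[i:i + 3, j:j + 3]).sum(axis=(1, 2))
    return out

def grasp_forward(img, kernels, weights, bias):
    """Toy CNN forward pass: conv -> ReLU -> global average pool -> linear -> softmax."""
    feat = np.maximum(conv2d_valid(img, kernels), 0.0)   # ReLU feature maps
    pooled = feat.mean(axis=(1, 2))                      # one scalar per kernel
    logits = weights @ pooled + bias                     # four grasp-class scores
    e = np.exp(logits - logits.max())
    return e / e.sum()                                   # class probabilities

# Untrained (random) parameters: 8 filters, 4 output classes.
kernels = rng.standard_normal((8, 3, 3))
weights = rng.standard_normal((4, 8))
bias = np.zeros(4)
probs = grasp_forward(rng.random((16, 16)), kernels, weights, bias)
```

With random weights the predicted class is meaningless; the sketch only shows the shape of the pipeline that training would tune.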
Effectiveness of Assistive Technologies for Low Vision Rehabilitation: A Systematic Review
ERIC Educational Resources Information Center
Jutai, Jeffrey W.; Strong, J. Graham; Russell-Minda, Elizabeth
2009-01-01
"Low vision" describes any condition of diminished vision that is uncorrectable by standard eyeglasses, contact lenses, medication, or surgery and that disrupts a person's ability to perform common age-appropriate visual tasks. Examples of assistive technologies for vision rehabilitation include handheld magnifiers; electronic vision-enhancement…
Real-time field programmable gate array architecture for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2001-01-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on dedicated very-large-scale integration (VLSI) devices to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
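The memory-access strategy behind such FPGA convolvers can be mimicked in software: keep the three most recent image rows in line buffers so each row is fetched from memory exactly once and the 3x3 window never re-reads pixels. This is a behavioural sketch of the general line-buffer idea, not the paper's actual architecture.

```python
import numpy as np

def stream_convolve(rows, mask):
    """3x3 mask applied over a stream of image rows.

    Mimics the line-buffer scheme used in FPGA convolvers: each image
    row is read from "memory" exactly once and held in one of three row
    buffers, so the sliding 3x3 window needs no repeated pixel fetches.
    (The mask is applied without flipping, as is common in image processing.)
    """
    mask = np.asarray(mask, dtype=float)
    buffers = []                      # the three most recent rows
    out = []
    for row in rows:                  # one memory access per row
        buffers.append(np.asarray(row, dtype=float))
        if len(buffers) > 3:
            buffers.pop(0)
        if len(buffers) == 3:
            window = np.stack(buffers)            # 3 x W strip
            w = window.shape[1]
            out.append([float((window[:, j:j + 3] * mask).sum())
                        for j in range(w - 2)])
    return np.array(out)
```

In hardware the inner products would run in parallel pipelined multiply-accumulate units; here they are sequential, but the buffering and data flow are the same.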
Schuster, Alexander K; Tesarz, Jonas; Rezapour, Jasmin; Beutel, Manfred E; Bertram, Bernd; Pfeiffer, Norbert
2018-01-01
Visual impairment (VI) is associated with a variety of comorbidities, including physical and mental health conditions, in industrialized countries. Our aim was to examine associations between self-reported VI and depressive symptoms in the German population. The point prevalence of self-reported VI in Germany was computed using data from the German Health Interview and Examination Survey for adults from 2008 to 2011 (N = 7,783; 50.5% female; age range 18-79 years). VI was surveyed by two questions, one on seeing faces at a distance of 4 m and one on reading newspapers. Depressive symptoms were evaluated with the Patient Health Questionnaire-9, and 2-week prevalence was computed with weighted data. Depressive symptoms were defined by a score of ≥10. Logistic regression analysis was performed to analyze the association between self-reported VI and depressive symptoms. Multivariable analyses, adjusted for age, gender, socioeconomic status, and chronic diseases, were carried out with weighted data. The 2-week prevalence of depressive symptoms was 20.8% (95% CI: 16.6-25.7%) for some difficulties and 14.4% (95% CI: 7.5-25.9%) for severe difficulties in distance vision, and 17.0% (95% CI: 13.3-21.4%) and 16.7% (95% CI: 10.7-25.1%), respectively, for near vision. Analysis revealed that depressive symptoms were associated with self-reported VI for reading and with low VI for distance vision. Multivariable regression analysis including potential confounders confirmed these findings. Depressive symptoms are a frequent finding in subjects with difficulties in distance and near vision, with a prevalence of up to 24%. Depressive comorbidity should therefore be evaluated in subjects reporting VI.
Parametric dense stereovision implementation on a system-on chip (SoC).
Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L
2012-01-01
This paper proposes a novel hardware implementation of dense recovery of stereovision 3D measurements. Traditional 3D stereo systems have imposed a maximum number of stereo correspondences, introducing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides great performance and efficiency, with a scalable architecture adaptable to many different situations, addressing real-time processing of the stereo image flow. Using double-buffering techniques properly combined with pipelined processing, the use of reconfigurable hardware achieves a parametrisable SoC that gives the designer the opportunity to decide its right dimension and features. The proposed architecture does not need any external memory because the processing is done as the image flow arrives. Our SoC provides 3D data directly, without storing whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of the 3D data using minimum resources. Configurable parameters may be controlled by later or parallel stages of the vision algorithm executed on an embedded processor. With an FPGA clock of 100 MHz, dense stereo maps of more than 30,000 depth points can be obtained from 2 Mpix images at up to 50 frames per second (fps), with a minimum initial latency. The implementation of computer vision algorithms on reconfigurable hardware, especially low-level processing, opens up the prospect of their use in autonomous systems, where they can act as a coprocessor to reconstruct 3D images with high-density information in real time.
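The abstract does not publish the SoC's matching stage, but the classic computation such stereo cores accelerate is block matching by sum of absolute differences (SAD). As a point of reference, here is a NumPy sketch; the window size and disparity range are arbitrary choices, not the paper's parameters.

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=3):
    """Dense disparity map by SAD block matching on rectified images.

    For each left-image pixel, shifts of 0..max_disp are tried against
    the right image and the shift with the lowest sum of absolute
    differences over a win x win window wins -- a software stand-in for
    the pipelined matching stages of a hardware stereo core.
    """
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

A hardware version evaluates all candidate disparities in parallel as pixels stream in; this loop makes the same decision sequentially.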
Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.
Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma
2017-07-01
The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of computer vision syndrome. Information was collected from Medline, Embase & the National Library of Medicine over the last 30 years, up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with computer vision syndrome present to a variety of different specialists, including general practitioners, neurologists, stroke physicians and ophthalmologists. While the condition is common, awareness of it is poor among both the public and health professionals. Recognising this condition in the clinic or in emergency settings such as the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of computer vision syndrome and education of health professionals are vital. Preventive strategies should routinely form part of workplace ergonomics. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.
Biswas, N R; Nainiwal, S K; Das, G K; Langan, U; Dadeya, S C; Mongre, P K; Ravi, A K; Baidya, P
2003-03-01
A comparative randomised double-masked multicentric clinical trial was conducted to evaluate the efficacy and safety of a herbal eye drop preparation (itone eye drops) against artificial tears and placebo in 120 patients with computer vision syndrome. Patients using a computer for at least 2 hours continuously per day, having symptoms of irritation, foreign body sensation, watering, redness, headache or eyeache, and signs of conjunctival congestion, mucous/debris, corneal filaments, corneal staining or lacrimal lake, were included in this study. Every patient was instructed to put two drops of either the herbal drug, placebo, or artificial tears in the eyes regularly four times a day for 6 weeks. Objective and subjective findings were recorded at bi-weekly intervals up to six weeks. Side-effects, if any, were also noted. In computer vision syndrome, the herbal eye drop preparation was found significantly better than artificial tears (p < 0.01). No side-effects were noted with any of the drugs. Both subjective and objective improvements were observed in itone-treated cases. Itone can therefore be considered a useful drug in computer vision syndrome.
Mozart to Michelangelo: Software to Hone Your Students' Fine Arts Skills.
ERIC Educational Resources Information Center
Smith, Russell
2000-01-01
Describes 15 art and music computer software products for classroom use. "Best bets" (mostly secondary level) include Clearvue Inc.'s Art of Seeing, Sunburst Technology's Curious George Paint & Print Studio, Inspiration Software's Inspiration 6.0, Harmonic Vision's Music Ace 2, and Coda Music Technology's PrintMusic! 2000 and SmartMusic Studio.…
ERIC Educational Resources Information Center
Ardiel, Evan L.; Giles, Andrew C.; Yu, Alex J.; Lindsay, Theodore H.; Lockery, Shawn R.; Rankin, Catharine H.
2016-01-01
Habituation is a highly conserved phenomenon that remains poorly understood at the molecular level. Invertebrate model systems, like "Caenorhabditis elegans," can be a powerful tool for investigating this fundamental process. Here we established a high-throughput learning assay that used real-time computer vision software for behavioral…
Computer vision syndrome (CVS) - Thermographic Analysis
NASA Astrophysics Data System (ADS)
Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.
2017-01-01
The use of computers has seen exponential growth in recent decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their wide acceptance by users. The consequences and impact of uninterrupted work in front of computer screens or displays on visual health have attracted researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great effort, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of these are blurred vision, visual fatigue and Dry Eye Syndrome (DES), the latter due to inadequate lubrication of the ocular surface as blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the differences in temperature variation of healthy ocular surfaces.
Evaluation of an organic light-emitting diode display for precise visual stimulation.
Ito, Hiroyuki; Ogawa, Masaki; Sunaga, Shoji
2013-06-11
A new type of visual display for presentation of a visual stimulus with high quality was assessed. The characteristics of an organic light-emitting diode (OLED) display (Sony PVM-2541, 24.5 in.; Sony Corporation, Tokyo, Japan) were measured in detail from the viewpoint of its applicability to visual psychophysics. We found the new display to be superior to other display types in terms of spatial uniformity, color gamut, and contrast ratio. Changes in the intensity of luminance were sharper on the OLED display than those on a liquid crystal display. Therefore, such OLED displays could replace conventional cathode ray tube displays in vision research for high quality stimulus presentation. Benefits of using OLED displays in vision research were especially apparent in the fields of low-level vision, where precise control and description of the stimulus are needed, e.g., in mesopic or scotopic vision, color vision, and motion perception.
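The display figures of merit mentioned above (contrast ratio and related measures) follow from simple luminance arithmetic. A small helper, with hypothetical luminance values rather than measurements from the cited study:

```python
def contrast_metrics(l_max, l_min):
    """Two common display-contrast figures from a pair of luminances
    (cd/m^2): the simple contrast ratio and Michelson contrast."""
    ratio = l_max / l_min if l_min > 0 else float("inf")
    michelson = (l_max - l_min) / (l_max + l_min)
    return ratio, michelson

# Hypothetical white/black luminances for an emissive display whose
# black level is very low (a key OLED advantage over LCDs):
ratio, michelson = contrast_metrics(100.0, 0.05)
```

Because OLED pixels emit no light when off, `l_min` approaches zero and the contrast ratio grows accordingly, which is why such displays suit mesopic and scotopic stimulus presentation.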
On-Chip Imaging of Schistosoma haematobium Eggs in Urine for Diagnosis by Computer Vision
Linder, Ewert; Grote, Anne; Varjo, Sami; Linder, Nina; Lebbad, Marianne; Lundin, Mikael; Diwan, Vinod; Hannuksela, Jari; Lundin, Johan
2013-01-01
Background: Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique possible to exploit commercially for the development of inexpensive “mini-microscopes”. Images can be transferred for analysis both visually and by computer vision, both at the point of care and at remote locations. Methods/Principal Findings: Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped of its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. Conclusions/Significance: As proof of concept we show that an inexpensive imaging device, such as a webcam, may be easily modified into a microscope for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases. PMID:24340107
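The abstract does not specify its "highly specific image pattern recognition algorithm". A common baseline for detecting a known shape such as an egg in an on-chip image is normalized cross-correlation (NCC) template matching, sketched here in NumPy as an illustration only:

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return the normalized
    cross-correlation score map plus the best-match (row, col).

    NCC compares mean-subtracted, magnitude-normalized patches, so the
    score is 1.0 for a perfect match and is robust to uniform
    brightness changes -- useful for uncalibrated on-chip imaging.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    h, w = image.shape
    scores = np.full((h - th + 1, w - tw + 1), -1.0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            p = image[y:y + th, x:x + tw] - image[y:y + th, x:x + tw].mean()
            pn = np.sqrt((p * p).sum())
            if pn > 0 and tn > 0:
                scores[y, x] = (p * t).sum() / (pn * tn)
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return scores, best
```

In practice a detector would threshold the score map and merge nearby peaks rather than take a single maximum.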
Characteristic of low vision patients attending an eye hospital in eastern region of Nepal.
Labh, R K; Adhikari, P R; Karki, P; Singh, S K; Sitoula, R P
2015-01-01
Low vision is an important public health problem. The aim was to study the profile of low vision patients in a hospital in Nepal. Information related to the patients' profile, visual status, ocular disease, and low vision devices prescribed was obtained retrospectively from the records of 1,860 visually impaired patients, regardless of the cause, presenting to the low vision department of the Biratnagar Eye Hospital, Biratnagar, Nepal, over a period of four years. These patients, after a comprehensive ocular examination, underwent low vision examination by an ophthalmologist and a low vision specialist. Of the 1,860 patients, 1,298 (70%) were male and 562 (30%) were female. Six hundred and one (32.3%) patients were less than 20 years of age, while 398 (21.4%) were more than 60 years of age. Agriculture (500, 27%), household work (341, 18%) and students (308, 17%) were the common occupations. Retinal diseases were the commonest cause of low vision: macular disorders 408 (22%), retinitis pigmentosa 226 (12.1%) and other retinal causes 361 (19.4%) (diabetic retinopathy, choroidal coloboma, post-laser status for retinal vasculitis and central retinal/branch retinal vein occlusion, and healed macular chorioretinal scars secondary to retinochoroiditis and choroiditis). Refractive error 215 (11.5%), amblyopia 49 (2.6%), optic atrophy 144 (7.8%) and microphthalmos 105 (5.6%) were the other causes. Uncorrected distance visual acuity was between 6/24 and 6/60 in 509 (27.4%) patients and between 5/60 and PL in 1,327 (71.3%) patients. Similarly, near visual acuity better than 2.50 M (N20) and worse than 2.50 M (N20) was present in 643 (34.5%) and 1,217 (65.5%) patients, respectively. About 67% and 54.5% of the patients had some improvement in their distance and near visual acuity, respectively, with glasses and low vision aids. Distance spectacles 909 (49%), near spectacles 106 (5.7%), hand-held magnifiers 78 (4%) and telescopes 18 (1%) were the optical devices prescribed.
Low vision is common among people in both younger and older age groups. Retinal diseases are common causes of low vision. Adequate prescription and availability of low vision devices can improve visual acuity. © NEPjOPH.
Milestones on the road to independence for the blind
NASA Astrophysics Data System (ADS)
Reed, Kenneth
1997-02-01
Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the fields of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. One example is portable bar code scanners, which let a blind person know what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into MIDI files to drive a synthesizer? Can computer vision help the blind cross a road, including predicting where oncoming traffic will be? Can computer vision technology provide spoken descriptions of scenes, so a blind person can figure out where doors and entrances are located and what the signage on the building says? He asks, 'Can computer vision help me flip a pancake?' His challenge to those in the computer vision field is, 'Where can we go from here?'
A large-scale solar dynamics observatory image dataset for computer vision applications.
Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A
2017-01-01
The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high-resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent on data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from the computer vision and solar physics communities.
Semiautonomous teleoperation system with vision guidance
NASA Astrophysics Data System (ADS)
Yu, Wai; Pretlove, John R. G.
1998-12-01
This paper describes ongoing research work on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots often suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. This system has been developed for that purpose. It also serves as an experimental platform to test the idea of combining human and computer intelligence in teleoperation and finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system and a graphical user interface which connects the operator to the remote robot. A system description is given in this paper, as well as preliminary experimental results of the system evaluation.
A tangible programming tool for children to cultivate computational thinking.
Wang, Danli; Wang, Tingting; Liu, Zhen
2014-01-01
Games and creation activities have good potential for cultivating computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5-9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking skills such as abstraction, problem decomposition, and creativity.
Are children with low vision adapted to the visual environment in classrooms of mainstream schools?
Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi
2018-01-01
Purpose: The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. Methods: The medical records of 110 children (5–17 years) seen in the low vision clinic during a 1-year period (2015) at a tertiary care center in south India were extracted. The visual function levels of the children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). Results: The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty in viewing the chalkboard, and common strategies used for better visibility included copying from friends (47%) and going closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of the visual task (height of lowercase letter writing on the chalkboard) recommended to be 3 cm. For the 3/60–6/60 range, with a visual task size of 4 cm, the maximum viewing distance is recommended to be 85 cm to 1.7 m. Conclusion: Simple modifications of the visual task size and seating arrangements can give children with low vision better visibility of the chalkboard and reduced visual stress, helping them manage in mainstream schools. PMID:29380777
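Recommendations of this kind trade letter size against viewing distance through the visual angle the letters subtend. The study's derivation is not reproduced here, but the underlying geometry is standard: for example, its mild-VI figures (3 cm lettering at up to 4.3 m) correspond to roughly 24 minutes of arc.

```python
import math

def visual_angle_arcmin(size_m, distance_m):
    """Visual angle subtended by an object of height `size_m` viewed
    from `distance_m`, in minutes of arc: 2 * atan(size / (2 * d))."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m))) * 60

# Study figures for mild VI: 3 cm chalkboard lettering at 4.3 m.
angle = visual_angle_arcmin(0.03, 4.3)   # roughly 24 arcmin
```

Holding the angle fixed, halving the distance permits letters half as tall, which is why seating position and task size can be traded against each other.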
[Computer eyeglasses--aspects of a confusing topic].
Huber-Spitzy, V; Janeba, E
1997-01-01
With the coming into force of the new Austrian Employee Protection Act, the issue of the so-called "computer glasses" will also gain added importance in our country. Such glasses have been defined as vision aids to be used exclusively for work on computer monitors and include single-vision glasses solely intended for reading the computer screen, glasses with bifocal lenses for reading the computer screen and hard-copy documents, as well as those with varifocal lenses featuring a thickened central section. There is still considerable controversy among those concerned as to who will bear the costs for such glasses--most likely it will be the employer. Prescription of such vision aids will be restricted exclusively to ophthalmologists, based on a thorough ophthalmological examination under adequate consideration of the specific working environment and the workplace requirements of the individual employee concerned.
Non-surgical interventions for convergence insufficiency
Scheiman, Mitchell; Gwiazda, Jane; Li, Tianjing
2014-01-01
Background Convergence insufficiency is a common eye muscle co-ordination problem in which the eyes have a strong tendency to drift outward (exophoria) when reading or doing close work. Symptoms may include eye strain, headaches, double vision, print moving on the page, frequent loss of place when reading, inability to concentrate, and short attention span. Objectives To systematically assess and synthesize evidence from randomized controlled trials (RCTs) on the effectiveness of non-surgical interventions for convergence insufficiency. Search strategy We searched The Cochrane Library, MEDLINE, EMBASE, Science Citation Index, the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com) and ClinicalTrials.gov (www.clinicaltrials.gov) on 7 October 2010. We manually searched reference lists and optometric journals. Selection criteria We included RCTs examining any form of non-surgical intervention against placebo, no treatment, sham treatment, or each other. Data collection and analysis Two authors independently assessed eligibility, risk of bias, and extracted data. We performed meta-analyses when appropriate. Main results We included six trials (three in children, three in adults) with a total of 475 participants. We graded four trials at low risk of bias. Evidence from one trial (graded at low risk of bias) suggests that base-in prism reading glasses were no more effective than placebo reading glasses in improving clinical signs or symptoms in children. Evidence from one trial (graded at high risk of bias) suggests that base-in prism glasses using a progressive addition lens design were more effective than progressive addition lenses alone in decreasing symptoms in adults. At three weeks of therapy, the mean difference in Convergence Insufficiency Symptoms Survey (CISS) score was −10.24 points (95% confidence interval (CI) −15.45 to −5.03).
Evidence from two trials (graded at low risk of bias) suggests that outpatient (or office-based as used in the US) vision therapy/orthoptics was more effective than home-based convergence exercises (or pencil push-ups as used in the US) in children. At 12 weeks of therapy, the mean difference in change in near point of convergence, positive fusional vergence, and CISS score from baseline was 3.99 cm (95% CI 2.11 to 5.86), 13.13 diopters (95% CI 9.91 to 16.35), and 9.86 points (95% CI 6.70 to 13.02), respectively. In a young adult population, evidence from one trial (graded at low risk of bias) suggests outpatient vision therapy/orthoptics was more effective than home-based convergence exercises in improving positive fusional vergence at near (7.7 diopters, 95% CI 0.82 to 14.58), but not the other outcomes. Evidence from one trial (graded at low risk of bias) comparing four interventions also suggests that outpatient vision therapy/orthoptics was more effective than home-based computer vision therapy/orthoptics in children. At 12 weeks, the mean difference in change in near point of convergence, positive fusional vergence, and CISS score from baseline was 2.90 cm (95% CI 0.96 to 4.84), 7.70 diopters (95% CI 3.94 to 11.46), and 8.80 points (95% CI 5.26 to 12.34), respectively. Evidence was less consistent for other pair-wise comparisons. Authors’ conclusions Current research suggests that outpatient vision therapy/orthoptics is more effective than home-based convergence exercises or home-based computer vision therapy/orthoptics for children. In the adult population, evidence of the effectiveness of various non-surgical interventions is less consistent. PMID:21412896
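The review's confidence intervals can be converted back into standard errors and test statistics with the usual normal-approximation arithmetic. A minimal sketch, purely illustrative, using the CISS mean difference of −10.24 (95% CI −15.45 to −5.03) reported above:

```python
def se_from_ci(lower, upper, z=1.96):
    """Recover the standard error of a mean difference from its 95% CI."""
    return (upper - lower) / (2 * z)

# CISS mean difference -10.24, 95% CI -15.45 to -5.03 (from the review)
se = se_from_ci(-15.45, -5.03)
z_stat = -10.24 / se   # magnitude well beyond the 1.96 significance cutoff
```

The same back-calculation is routinely used when pooling trial results that report only point estimates and confidence intervals.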
Divilov, Konstantin; Wiesner-Hanks, Tyr; Barba, Paola; Cadle-Davidson, Lance; Reisch, Bruce I
2017-12-01
Quantitative phenotyping of downy mildew sporulation is frequently used in plant breeding and genetic studies, as well as in studies focused on pathogen biology such as chemical efficacy trials. In these scenarios, phenotyping a large number of genotypes or treatments can be advantageous but is often limited by time and cost. We present a novel computational pipeline dedicated to estimating the percent area of downy mildew sporulation from images of inoculated grapevine leaf discs in a manner that is time and cost efficient. The pipeline was tested on images from leaf disc assay experiments involving two F1 grapevine families, one that had glabrous leaves (Vitis rupestris B38 × 'Horizon' [RH]) and another that had leaf trichomes (Horizon × V. cinerea B9 [HC]). Correlations between computer vision and manual visual ratings reached 0.89 in the RH family and 0.43 in the HC family. Additionally, we were able to use the computer vision system prior to sporulation to measure the percent leaf trichome area. We estimate that an experienced rater scoring sporulation would spend at least 90% less time using the computer vision system compared with the manual visual method. This will allow more treatments to be phenotyped in order to better understand the genetic architecture of downy mildew resistance and of leaf trichome density. We anticipate that this computer vision system will find applications in other pathosystems or traits where responses can be imaged with sufficient contrast from the background.
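The core percent-area measurement can be illustrated, in greatly simplified form, by thresholding pixels inside a leaf-disc mask. This sketch is not the authors' pipeline; the threshold value and synthetic image are arbitrary assumptions:

```python
import numpy as np

def percent_area(gray_img, mask, threshold=180):
    """Percent of leaf-disc pixels (given by a boolean mask) whose brightness
    exceeds the threshold -- white sporulation against darker leaf tissue."""
    disc = gray_img[mask]
    return 100.0 * np.count_nonzero(disc > threshold) / disc.size

# synthetic 10x10 "disc" with one bright quadrant standing in for sporulation
img = np.full((10, 10), 100, dtype=np.uint8)
img[:5, :5] = 220
mask = np.ones((10, 10), dtype=bool)
pct = percent_area(img, mask)   # → 25.0
```

A real pipeline must additionally segment the disc from the background and, as the HC-family result suggests, contend with trichomes that resemble sporulation.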
How to Make Low Vision "Sexy": A Starting Point for Interdisciplinary Student Recruitment
ERIC Educational Resources Information Center
Wittich, Walter; Strong, Graham; Renaud, Judith; Southall, Kenneth
2007-01-01
Professionals in the field of low vision are increasingly concerned about the paucity of optometry students who are expressing any interest in low vision as a clinical subspecialty. Concurrent with this apparent disinterest is an increased demand for these services as the baby boomer population becomes more predisposed to age-related vision loss.…
Family Functioning and Low Vision: A Systematic Review
ERIC Educational Resources Information Center
Bambara, Jennifer K.; Wadley, Virginia; Owsley, Cynthia; Martin, Roy C.; Porter, Chebon; Dreer, Laura E.
2009-01-01
This review highlights the literature on the function and adjustment process of family members of persons with adult-onset vision loss. The majority of the literature has focused on the unique role that the family plays in providing both instrumental and emotional support to adults with low vision. In contrast, the impact of low vision on the…
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.
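The first two stages described (compensating camera motion, then detecting movers) can be sketched with integer shift compensation followed by frame differencing. This is a toy illustration, not the authors' algorithms, and it assumes the inter-frame shift has already been estimated by some other means (e.g., image registration):

```python
import numpy as np

def translate(img, dy, dx):
    """Shift an image by integer (dy, dx), zero-filling uncovered pixels."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):min(h, h + dy), max(dx, 0):min(w, w + dx)] = \
        img[max(-dy, 0):min(h, h - dy), max(-dx, 0):min(w, w - dx)]
    return out

def moving_mask(prev, cur, dy, dx, thresh=30):
    """Compensate the estimated camera shift, then difference the frames;
    pixels that still differ belong to independently moving objects."""
    comp = translate(prev, dy, dx)
    return np.abs(cur.astype(int) - comp.astype(int)) > thresh

# toy frames: the camera pans one pixel right; a separate blob also moves
prev = np.zeros((5, 5), dtype=np.uint8); prev[2, 2] = 200
cur = translate(prev, 0, 1); cur[4, 4] = 200
mask = moving_mask(prev, cur, 0, 1)   # True only at the moving blob
```

Real systems estimate a full geometric transformation (not just a shift) and feed the detections to a tracker with position prediction, as the abstract outlines.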
Cappagli, Giulia; Finocchietti, Sara; Cocchi, Elena; Gori, Monica
2017-01-01
The specific role of early visual deprivation on spatial hearing is still unclear, mainly due to the difficulty of comparing similar spatial skills at different ages and to the difficulty in recruiting young blind children from birth. In this study, the effects of early visual deprivation on the development of auditory spatial localization have been assessed in a group of seven 3–5-year-old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR), with the main aim to understand if visual experience is fundamental to the development of specific spatial skills. Our study led to three main findings: firstly, totally blind children performed overall more poorly than sighted and low vision children in all the spatial tasks performed; secondly, low vision children performed equally to or better than sighted children in the same auditory spatial tasks; thirdly, higher residual levels of visual acuity are positively correlated with better spatial performance in the dynamic condition of the auditory localization task, indicating that more residual vision is associated with better spatial performance. These results suggest that early visual experience has an important role in the development of spatial cognition, even when the visual input during the critical period of visual calibration is partially degraded, as in the case of low vision children. Overall, these results shed light on the importance of early assessment of spatial impairments in visually impaired children and early intervention to prevent the risk of isolation and social exclusion. PMID:28443040
Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.
2014-01-01
Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they were able to see them OTW with natural vision.
ERIC Educational Resources Information Center
Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L.; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri
2015-01-01
Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental…
Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David
1989-01-01
Interaction with tumbling objects will become more common as human activities in space expand. Attempting to interact with a large complex object translating and rotating in space, a human operator using only his visual and mental capacities may not be able to estimate the object motion, plan actions or control those actions. A robot system (RAMBO) equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using dynamic interpolations between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
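One of the low-level operations named above, symmetric nearest neighbor filtering, admits a compact serial sketch: for each symmetric pair of neighbors around a pixel, keep the member closer in value to the center, then average the kept values. This illustration is not RAMBO's parallel Connection Machine implementation:

```python
import numpy as np

def snn_filter(img, radius=1):
    """Symmetric nearest-neighbour smoothing. Smooths noise while preserving
    edges, because across an edge the filter always picks the same-side
    neighbour. Pixels within `radius` of the border are left untouched."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    # one representative offset from each symmetric neighbour pair
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)
               if (dy, dx) > (0, 0)]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            c = img[y, x]
            picks = []
            for dy, dx in offsets:
                a, b = img[y + dy, x + dx], img[y - dy, x - dx]
                picks.append(a if abs(a - c) <= abs(b - c) else b)
            out[y, x] = sum(picks) / len(picks)
    return out

# a vertical step edge survives the smoothing unblurred
step = np.zeros((5, 5)); step[:, 3:] = 100.0
smoothed = snn_filter(step)
```

On the CM-2, the same per-pixel computation would run for all pixels simultaneously, which is what makes SNN attractive for data-parallel hardware.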
A design approach for small vision-based autonomous vehicles
NASA Astrophysics Data System (ADS)
Edwards, Barrett B.; Fife, Wade S.; Archibald, James K.; Lee, Dah-Jye; Wilde, Doran K.
2006-10-01
This paper describes the design of a small autonomous vehicle based on the Helios computing platform, a custom FPGA-based board capable of supporting on-board vision. Target applications for the Helios computing platform are those that require lightweight equipment and low power consumption. To demonstrate the capabilities of FPGAs in real-time control of autonomous vehicles, a 16 inch long R/C monster truck was outfitted with a Helios board. The platform provided by such a small vehicle is ideal for testing and development. The proof of concept application for this autonomous vehicle was a timed race through an environment with obstacles. Given the size restrictions of the vehicle and its operating environment, the only feasible on-board sensor is a small CMOS camera. The single video feed is therefore the only source of information from the surrounding environment. The image is then segmented and processed by custom logic in the FPGA that also controls direction and speed of the vehicle based on visual input.
On computer vision in wireless sensor networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Ko, Teresa H.
Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
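The paradigm of processing locally and sharing only compact cues can be illustrated with a deliberately cheap change detector. The threshold and frame sizes here are illustrative assumptions, not the paper's detectors:

```python
import numpy as np

def cue_detected(prev, cur, thresh=10.0):
    """Cheap node-side visual cue: mean absolute difference between two
    successive low-resolution frames. Only this boolean, not the image,
    would be shared with neighbouring nodes over the radio."""
    diff = np.abs(cur.astype(int) - prev.astype(int))
    return float(diff.mean()) > thresh

quiet = np.zeros((8, 8), dtype=np.uint8)
active = quiet.copy(); active[2:6, 2:6] = 200   # something entered the scene
```

Transmitting one bit per frame instead of a full image is the power saving the paper targets; richer detectors (edges, motion direction) fit the same pattern.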
Computer Vision Malaria Diagnostic Systems-Progress and Prospects.
Pollak, Joseph Joel; Houri-Yafin, Arnon; Salpeter, Seth J
2017-01-01
Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.
Cloud computing in medical imaging.
Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R
2013-07-01
Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing.
Optical reading aids for children and young people with low vision.
Barker, Lucy; Thomas, Rachel; Rubin, Gary; Dahlmann-Noor, Annegret
2015-03-04
Low vision in childhood is a significant barrier to learning and development, particularly for reading and education. Optical low vision aids may be used to maximise the child's functional vision. The World Health Organization (WHO) has previously highlighted the importance of the use of low vision aids in managing children with visual impairment across the world. To assess the effect of optical low vision aids on reading in children and young people with low vision. We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 12), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to January 2015), EMBASE (January 1980 to January 2015), the Health Technology Assessment Programme (HTA) (www.hta.ac.uk/), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 8 January 2015. We also used manual searching to check the references listed in retrieved articles. Manufacturers of low vision aids were contacted to request any information about studies or research regarding their products. We planned to include randomised controlled trials (RCTs) and quasi-RCTs where any optical low vision aid was compared to standard refractive correction in children and young people aged between 5 and 16 years of age with low vision as defined by the WHO. We planned to include within-person design studies where the order of presentation of devices was randomised. Two authors independently reviewed the search results for eligibility. No studies met the inclusion criteria for this review.
There is a lack of good quality evidence regarding the use of optical low vision aids in children and young people. As such, no implications for practice can be drawn. We believe future research should include functional outcome measures such as reading speed, accuracy and comprehension, as well as the effect of low vision aids on quality of life, in order to truly assess and compare the effect of these devices on a child's life and development.
Night myopia is reduced in binocular vision.
Chirre, Emmanuel; Prieto, Pedro M; Schwarz, Christina; Artal, Pablo
2016-06-01
Night myopia, which is a shift in refraction with light level, has been widely studied but still lacks a complete understanding. We used a new infrared open-view binocular Hartmann-Shack wave front sensor to quantify night myopia under monocular and natural binocular viewing conditions. Both eyes' accommodative response, aberrations, pupil diameter, and convergence were simultaneously measured at light levels ranging from photopic to scotopic conditions to total darkness. For monocular vision, reducing the stimulus luminance resulted in a progression of the accommodative state that tends toward the subject's dark focus or tonic accommodation and a change in convergence following the induced accommodative error. Most subjects presented a myopic shift of accommodation that was mitigated in binocular vision. The impact of spherical aberration on the focus shift was relatively small. Our results in monocular conditions support the hypothesis that night myopia has an accommodative origin as the eye progressively changes its accommodation state with decreasing luminance toward its resting state in total darkness. On the other hand, binocularity restrains night myopia, possibly by using fusional convergence as an additional accommodative cue, thus reducing the potential impact of night myopia on vision at low light levels.
Computational Vision: A Critical Review
1989-10-01
Marking parts to aid robot vision
NASA Technical Reports Server (NTRS)
Bales, J. W.; Barker, L. K.
1981-01-01
The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color-coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.
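Once two color-coded marks have been located in the image, position and in-plane orientation reduce to midpoint and angle computations. A hedged sketch; the red/green two-marker layout is a hypothetical example, not the paper's coding scheme:

```python
import math

def pose_from_markers(red_xy, green_xy):
    """Object position (midpoint of the two marks) and in-plane orientation
    (angle of the red-to-green axis) from two colour-coded marker centroids."""
    (rx, ry), (gx, gy) = red_xy, green_xy
    centre = ((rx + gx) / 2, (ry + gy) / 2)
    theta = math.degrees(math.atan2(gy - ry, gx - rx))
    return centre, theta

pos, ang = pose_from_markers((0, 0), (10, 10))   # midpoint (5.0, 5.0), 45 deg
```

Because the marks also encode identity by color, the expensive shape-analysis step is bypassed, which is the source of the real-time claim in the abstract.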
Responsiveness of the EQ-5D to the effects of low vision rehabilitation.
Malkin, Alexis G; Goldstein, Judith E; Perlmutter, Monica S; Massof, Robert W
2013-08-01
This study is an evaluation of the responsiveness of preference-based outcome measures to the effects of low vision rehabilitation (LVR). It assesses LVR-related changes in EQ-5D utilities in patients who exhibit changes in Activity Inventory (AI) measures of visual ability. Telephone interviews were conducted with 77 low-vision patients out of a total of 764 patients in the parent study of "usual care" in LVR. Activity Inventory results were filtered for each patient to include only goals and tasks that would be targeted by LVR. The EQ-5D utilities have weak correlations with all AI measures but correlate best with AI goal scores at baseline (r = 0.48). Baseline goal scores are approximately normally distributed for the AI, but EQ-5D utilities at baseline are skewed toward the ceiling (median, 0.77). The effect size for EQ-5D utility change scores from pre- to post-LVR was not significantly different from zero. The AI visual function ability change scores corresponded to a moderate effect size for all functional domains and a large effect size for visual ability measures estimated from AI goal ratings. This study found that the EQ-5D is unresponsive as an outcome measure for LVR and has poor sensitivity for discriminating low vision patients with different levels of ability.
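The effect sizes discussed above are standardized change scores. A minimal sketch of the computation on made-up change data (illustrative values, not the study's data):

```python
import math

def effect_size(change_scores):
    """Standardised effect size of pre-to-post change: mean change divided by
    the standard deviation of the change scores (Cohen's d for paired data)."""
    n = len(change_scores)
    mean = sum(change_scores) / n
    var = sum((c - mean) ** 2 for c in change_scores) / (n - 1)
    return mean / math.sqrt(var)

d = effect_size([0.5, 0.7, 0.4, 0.6, 0.8])   # > 0.8, i.e. a large effect
```

An instrument whose change scores yield an effect size near zero, as the EQ-5D did here, is by this metric unresponsive to the intervention.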
Work-related health disorders among Saudi computer users.
Jomoah, Ibrahim M
2014-01-01
The present study was conducted to investigate the prevalence of musculoskeletal disorders and eye and vision complaints among the computer users of King Abdulaziz University (KAU), Saudi Arabian Airlines (SAUDIA), and Saudi Telecom Company (STC). Stratified random samples of the work stations and operators at each of the studied institutions were selected, the ergonomics of the work stations were assessed, and the operators' health complaints were investigated. The average ergonomic score of the studied work stations at STC, KAU, and SAUDIA was 81.5%, 73.3%, and 70.3%, respectively. Most of the examined operators use computers daily for ≤ 7 hours, yet they had moderate incidences of general complaints (e.g., headache, body fatigue, and lack of concentration) and relatively high incidences of eye and vision complaints and musculoskeletal complaints. The incidences of the complaints have been found to increase with the (a) decrease in work station ergonomic score, (b) progress of age and duration of employment, (c) smoking, (d) use of computers, (e) lack of work satisfaction, and (f) history of operators' previous ailments. It has been recommended to improve the ergonomics of the work stations, set up training programs, and conduct preplacement and periodical examinations for operators.
Kontopantelis, Evangelos; Stevens, Richard John; Helms, Peter J; Edwards, Duncan; Doran, Tim; Ashcroft, Darren M
2018-02-28
UK primary care databases (PCDs) are used by researchers worldwide to inform clinical practice. These databases have been primarily tied to single clinical computer systems, but little is known about the adoption of these systems by primary care practices or their geographical representativeness. We explore the spatial distribution of clinical computing systems and discuss the implications for the longevity and regional representativeness of these resources. Cross-sectional study. English primary care clinical computer systems. 7526 general practices in August 2016. Spatial mapping of family practices in England in 2016 by clinical computer system at two geographical levels, the lower Clinical Commissioning Group (CCG, 209 units) and the higher National Health Service regions (14 units). Data for practices included numbers of doctors, nurses and patients, and area deprivation. Of 7526 practices, Egton Medical Information Systems (EMIS) was used in 4199 (56%), SystmOne in 2552 (34%) and Vision in 636 (9%). Great regional variability was observed for all systems, with EMIS having a stronger presence in the West of England, London and the South; SystmOne in the East and some regions in the South; and Vision in London, the South, Greater Manchester and Birmingham. PCDs based on single clinical computer systems are geographically clustered in England. For example, Clinical Practice Research Datalink and The Health Improvement Network, the most popular primary care databases in terms of research outputs, are based on the Vision clinical computer system, used by <10% of practices and heavily concentrated in three major conurbations and the South. Researchers need to be aware of the analytical challenges posed by clustering, and barriers to accessing alternative PCDs need to be removed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A wirelessly programmable actuation and sensing system for structural health monitoring
NASA Astrophysics Data System (ADS)
Long, James; Büyüköztürk, Oral
2016-04-01
Wireless sensor networks promise to deliver low cost, low power and massively distributed systems for structural health monitoring. A key component of these systems, particularly when sampling rates are high, is the capability to process data within the network. Although progress has been made towards this vision, it remains a difficult task to develop and program 'smart' wireless sensing applications. In this paper we present a system which allows data acquisition and computational tasks to be specified in Python, a high-level programming language, and executed within the sensor network. Key features of this system include the ability to execute custom application code without firmware updates, to run multiple users' requests concurrently and to conserve power through adjustable sleep settings. Specific examples of sensor node tasks are given to demonstrate the features of this system in the context of structural health monitoring. The system comprises individual firmware for nodes in the wireless sensor network, and a gateway server and web application through which users can remotely submit their requests.
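The in-network processing idea can be sketched as a node task that reduces raw acceleration samples to a few summary features before transmission; the function name and feature set below are illustrative assumptions, not the system's actual task API.

```python
import numpy as np

def sensing_task(samples, fs):
    """Example in-network task: reduce raw acceleration samples to
    summary features (RMS level and dominant vibration frequency)
    so only a few bytes, not the raw stream, go over the radio."""
    samples = np.asarray(samples, dtype=float)
    rms = float(np.sqrt(np.mean(samples ** 2)))
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    dominant_hz = float(freqs[np.argmax(spectrum)])
    return {"rms": rms, "dominant_hz": dominant_hz}

# Simulated acquisition: a 5 Hz structural vibration sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
signal = np.sin(2 * np.pi * 5.0 * t)
features = sensing_task(signal, fs)
```

In the real system a task like this would be submitted through the gateway's web application and executed on the node between sleep intervals.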
Chiang, Peggy P C; Zheng, Yingfeng; Wong, Tien Y; Lamoureux, Ecosse L
2013-02-01
To quantify the eye disease-specific impact of unilateral and bilateral vision impairment (VI) on vision-specific functioning (VF). The Singapore Indian Eye population-based study. Ethnic Indians older than 40 years of age living in Singapore. Participants underwent standardized ophthalmic assessments for VI and blindness, defined using presenting visual acuity (United States definition). Sociodemographic data were recorded using a standardized questionnaire. Rasch analysis was used to validate the Visual Function Index 11 and to determine its psychometric properties. The major causes of VI (i.e., cataract, refractive error, age-related macular degeneration, diabetic retinopathy [DR], and glaucoma) were determined by ophthalmologists on examination. Multivariate linear regression analysis was performed to assess the impact of VI on the overall VF Rasch score. Vision-specific functioning. Three thousand three hundred ninety-six persons were analyzed. Participants with VI had a systematic reduction in VF score compared with those with normal vision in both eyes, ranging from -11.2% for normal vision in one eye and low vision in the other eye (95% confidence interval [CI], -12.2% to -10.3%; P<0.001), to -12.7% for blindness in one eye and normal vision in the other eye (CI, -15.1% to -10.4%; P<0.001), to -19.4% for low vision in both eyes (CI, -20.8% to -18.1%; P<0.001), to -52.9% for blindness in one eye and low vision in the other eye (CI, -55.3% to -50.4%; P<0.001), to -77.2% for blindness in both eyes (CI, -82.4% to -72.0%; P<0.001). The impact of VI on VF score varied across different major causes of vision loss, regardless of socioeconomic factors. Vision impairment attributed to cataract in one or both eyes was associated with a significant decrease in VF score of 17.7% and 22.3%, respectively, compared with those with normal vision in both eyes (P<0.001).
The impact of unilateral and bilateral VI on VF score was greater in participants with glaucoma (32.2% in unilateral cases and 35.9% in bilateral cases; P<0.001) and DR (29.4% in unilateral cases and 33.3% in bilateral cases; P<0.001). Vision impairment and major age-related eye diseases such as cataract, DR, and glaucoma are associated significantly with worse deterioration in VF, regardless of education level, literacy adequacy, or immigration pattern. Glaucoma and DR seemed to have a greater negative impact on VF score compared with cataract. This study highlights the importance of disease-specific interventions in reducing the adverse impact of VI on daily activities. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Unger, Jakob; Merhof, Dorit; Renner, Susanne
2016-11-16
Global Plants, a collaboration between JSTOR and some 300 herbaria, now contains about 2.48 million high-resolution images of plant specimens, a number that continues to grow, and collections that are digitizing their specimens at high resolution are allocating considerable resources to the maintenance of computer hardware (e.g., servers) and to acquiring digital storage space. We here apply machine learning, specifically the training of a Support-Vector-Machine, to classify specimen images into categories, ideally at the species level, using the 26 most common tree species in Germany as a test case. We designed an analysis pipeline and classification system consisting of segmentation, normalization, feature extraction, and classification steps and evaluated the system in two test sets, one with 26 species, the other with 17, in each case using 10 images per species of plants collected between 1820 and 1995, which simulates the empirical situation that most named species are represented in herbaria and databases, such as JSTOR, by few specimens. We achieved 73.21% accuracy of species assignments in the larger test set, and 84.88% in the smaller test set. The results of this first application of a computer vision algorithm trained on images of herbarium specimens show that despite the problem of overlapping leaves, leaf-architectural features can be used to categorize specimens to species with good accuracy. Computer vision is poised to play a significant role in future rapid identification, at least for frequently collected genera or species in the European flora.
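The normalization and classification stages of such a pipeline can be sketched in a few lines. As a minimal stand-in, this uses a nearest-centroid classifier instead of the paper's Support-Vector-Machine, and synthetic clusters in place of the leaf-architectural feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for leaf-architectural feature vectors: three
# "species", 10 specimens each, clustered in a 2-D feature space.
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + rng.normal(0, 0.5, size=(10, 2)) for c in centers])
y = np.repeat([0, 1, 2], 10)

# Normalization step: zero mean, unit variance per feature.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma

# Classification step: nearest centroid, a minimal stand-in for the SVM.
centroids = np.array([Xn[y == k].mean(axis=0) for k in range(3)])

def classify(x):
    xn = (x - mu) / sigma
    return int(np.argmin(np.linalg.norm(centroids - xn, axis=1)))

preds = np.array([classify(x) for x in X])
accuracy = float((preds == y).mean())
```

With only 10 specimens per class, as in the paper's test sets, such simple classifiers are attractive because they have few parameters to fit.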
AstroCV: Astronomy computer vision library
NASA Astrophysics Data System (ADS)
González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.
2018-04-01
AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis on the automatic detection and classification of galaxies.
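The detection step such a library provides can be illustrated generically with k-sigma thresholding and connected-component grouping; this is plain NumPy written for illustration, not AstroCV's actual API.

```python
import numpy as np
from collections import deque

def detect_sources(image, k=6.0):
    """Generic astronomical source detection: flag pixels more than
    k robust sigmas above the background, then group flagged pixels
    into 4-connected regions, one region per detected source."""
    med = np.median(image)
    mad_sigma = np.median(np.abs(image - med)) * 1.4826  # robust sigma
    mask = image > med + k * mad_sigma
    seen = np.zeros_like(mask, dtype=bool)
    sources = []
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        queue, pixels = deque([(i, j)]), []
        seen[i, j] = True
        while queue:  # BFS over the connected bright region
            a, b = queue.popleft()
            pixels.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                        and mask[na, nb] and not seen[na, nb]):
                    seen[na, nb] = True
                    queue.append((na, nb))
        sources.append(pixels)
    return sources

rng = np.random.default_rng(1)
sky = rng.normal(100.0, 2.0, size=(64, 64))  # flat noisy background
sky[10:13, 10:13] += 50.0   # two bright "galaxies"
sky[40:43, 50:53] += 50.0
found = detect_sources(sky)
```

Real pipelines add PSF matching and deblending on top of this basic scheme.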
The Efficacy of Using Synthetic Vision Terrain-Textured Images to Improve Pilot Situation Awareness
NASA Technical Reports Server (NTRS)
Uenking, Michael D.; Hughes, Monica F.
2002-01-01
The General Aviation Element of the Aviation Safety Program's Synthetic Vision Systems (SVS) Project is developing technology to eliminate low visibility induced General Aviation (GA) accidents. SVS displays present computer generated 3-dimensional imagery of the surrounding terrain on the Primary Flight Display (PFD) to greatly enhance the pilot's situation awareness (SA), reducing or eliminating Controlled Flight into Terrain, as well as Low-Visibility Loss of Control accidents. SVS-conducted research is facilitating development of display concepts that provide the pilot with an unobstructed view of the outside terrain, regardless of weather conditions and time of day. A critical component of SVS displays is the appropriate presentation of terrain to the pilot. An experimental study is being conducted at NASA Langley Research Center (LaRC) to explore and quantify the relationship between the realism of the terrain presentation and resulting enhancements of pilot SA and performance. Composed of complementary simulation and flight test efforts, Terrain Portrayal for Head-Down Displays (TP-HDD) experiments will help researchers evaluate critical terrain portrayal concepts. The experimental effort is intended to provide data to enable design trades that optimize SVS applications, as well as develop requirements and recommendations to facilitate the certification process. In this part of the experiment, a fixed-base flight simulator was equipped with various types of head-down flight displays, ranging from conventional round dials (typical of most GA aircraft) to glass cockpit style PFDs. The variations of the PFD included an assortment of texturing and Digital Elevation Model (DEM) resolution combinations. A test matrix of 10 terrain display configurations (in addition to the baseline displays) was evaluated by 27 pilots of various backgrounds and experience levels.
Qualitative (questionnaires) and quantitative (pilot performance and physiological) data were collected during the experimental runs. This paper focuses on the experimental set-up and final physiological results of the TP-HDD simulation experiment. The physiological measures of skin temperature, heart rate, and muscle response show decreased engagement of the sympathetic and somatic nervous systems while using the synthetic vision displays compared with the baseline conventional display, which in turn indicates a reduced level of mental workload. This decreased level of workload is expected to enable improvement in the pilot's situation and terrain awareness.
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with durations of 100 or 166 ms and amplitudes within the range of 0.5 to 2.6° of visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
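The computer-driven jitter condition can be approximated by shifting an image by a small random offset on each frame; the amplitudes here are in pixels, standing in for the study's 0.5 to 2.6 degrees of visual angle, and the frame count is arbitrary.

```python
import numpy as np

def jitter_frames(image, n_frames, max_shift_px, rng):
    """Produce a sequence of jittered copies of an image by shifting
    it by a random integer offset within +/- max_shift_px per frame,
    a simple computer-driven analogue of retinal-image jitter."""
    frames = []
    for _ in range(n_frames):
        dy, dx = rng.integers(-max_shift_px, max_shift_px + 1, size=2)
        frames.append(np.roll(image, (int(dy), int(dx)), axis=(0, 1)))
    return frames

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[16, 16] = 1.0  # a single bright "feature" to track
frames = jitter_frames(img, n_frames=6, max_shift_px=3, rng=rng)
```

Displaying such a sequence at the appropriate interjitter interval (over 100 ms, per the findings above) reproduces the temporal modulation thought to drive the benefit.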
Survey of blindness and low vision in Egbedore, South-Western Nigeria.
Kolawole, O U; Ashaye, A O; Adeoti, C O; Mahmoud, A O
2010-01-01
Developing efficient and cost-effective eye care programmes for communities in Nigeria has been hampered by inadequate and inaccurate data on blindness and low vision. To determine the prevalence and causes of blindness and low vision among adults 50 years and older in South-Western Nigeria in order to develop a viable eye care programme for the community. Twenty clusters of 60 subjects aged 50 years and older were selected by systematic random cluster sampling. Information was collected and ocular examinations were conducted on each consenting subject. Data were recorded in a specially designed questionnaire and analysed using descriptive statistical methods. Out of the 1200 subjects enrolled for the study, 1183 (98.6%) were interviewed and examined. Seventy-five (6.3%) of the 1183 subjects were bilaterally blind and 223 (18.9%) had bilateral low vision according to the WHO definition of blindness and low vision. Blindness was about 1.6 times more common in men than in women. Cataract, glaucoma and posterior segment disorders were major causes of bilateral blindness. Bilateral low vision was mainly due to cataract, refractive errors and posterior segment disorders. The prevalence of blindness and low vision in this study population was high. The main causes are avoidable. Elimination of avoidable blindness and low vision calls for attention and commitment from government and eye care workers in South-Western Nigeria.
Modelling Subjectivity in Visual Perception of Orientation for Image Retrieval.
ERIC Educational Resources Information Center
Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.
2003-01-01
Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…
New design environment for defect detection in web inspection systems
NASA Astrophysics Data System (ADS)
Hajimowlana, S. Hossain; Muscedere, Roberto; Jullien, Graham A.; Roberts, James W.
1997-09-01
One of the aims of industrial machine vision is to develop computer and electronic systems intended to replace human vision in the process of quality control of industrial production. In this paper we discuss a new design environment for real-time defect detection using a reconfigurable FPGA and a DSP processor mounted inside a DALSA programmable CCD camera. The FPGA is directly connected to the video data-stream and outputs data to a low bandwidth output bus. The system is targeted for web inspection but has the potential for broader application areas. We describe and show test results of the prototype system board, mounted inside a DALSA camera, and discuss some of the algorithms currently simulated and implemented for web inspection applications.
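A software sketch of the per-line defect test such a camera might run: each incoming scan line is compared against a running background estimate, and deviating pixels are flagged. The running-background scheme and threshold are illustrative assumptions, not the paper's algorithm (the real system does this per pixel clock in the FPGA).

```python
import numpy as np

def scan_for_defects(lines, threshold=30.0):
    """Streaming defect detection over line-scan data: flag pixels
    that deviate from a slowly tracked background by more than
    `threshold` grey levels, and record (line, pixel) positions."""
    background = None
    defects = []  # (line_index, pixel_index)
    for i, line in enumerate(lines):
        line = np.asarray(line, dtype=float)
        if background is None:
            background = line.copy()  # first line seeds the background
            continue
        deviating = np.nonzero(np.abs(line - background) > threshold)[0]
        defects.extend((i, int(j)) for j in deviating)
        # slowly track illumination drift across the web
        background = 0.95 * background + 0.05 * line
    return defects

# A uniform web with one bright defect on line 5, pixel 20.
web = [np.full(64, 128.0) for _ in range(10)]
web[5][20] = 200.0
found = scan_for_defects(web)
```

The same structure maps naturally onto FPGA hardware: a line buffer for the background, one subtract-compare per pixel clock.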
Artificial intelligence, expert systems, computer vision, and natural language processing
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1984-01-01
An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.
Kim, Hyung Nam
2017-10-16
Twenty-five years after the Americans with Disabilities Act, there has still been a lack of advancement of accessibility in healthcare for people with visual impairments, particularly older adults with low vision. This study aims to advance understanding of how older adults with low vision obtain, process, and use health information and services, and to seek opportunities for information technology to support them. A convenience sample of 10 older adults with low vision participated in semi-structured phone interviews, which were audio-recorded and transcribed verbatim for analysis. Participants shared various concerns in accessing, understanding, and using health information, care services, and multimedia technologies. Two main themes and nine subthemes emerged from the analysis. Due to these concerns, older adults with low vision tended to fail to obtain the full range of health information and services needed to meet their specific needs. Those with low vision still rely on residual vision, so multimedia-based information can be useful, but it should still be designed to ensure its accessibility, usability, and understandability.
Software for Real-Time Analysis of Subsonic Test Shot Accuracy
2014-03-01
…used the C++ programming language, the Open Source Computer Vision (OpenCV) software library, and Microsoft Windows Application Programming… video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to… DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains…
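The frame-comparison step described above can be illustrated in plain NumPy: difference two frames and take the centroid of the changed pixels as the shot location. This is a sketch of the technique, not the thesis software, and the aim point and noise floor are illustrative.

```python
import numpy as np

def shot_offset(before, after, aim, noise_floor=10.0):
    """Locate a new bullet hole by differencing consecutive frames
    and taking the centroid of the changed pixels; returns (dy, dx)
    in pixels relative to the aim point, or None if nothing changed."""
    diff = np.abs(after.astype(float) - before.astype(float))
    ys, xs = np.nonzero(diff > noise_floor)
    if len(ys) == 0:
        return None
    cy, cx = ys.mean(), xs.mean()
    return cy - aim[0], cx - aim[1]

# A uniform target; a new dark 3x3 hole appears centred at (31, 71).
before = np.full((100, 100), 50.0)
after = before.copy()
after[30:33, 70:73] = 0.0
dy, dx = shot_offset(before, after, aim=(50, 50))
```

In a real-time system the same differencing would run per incoming frame, with the noise floor tuned to camera noise and lighting.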
Anxiety and Charles Bonnet Syndrome
ERIC Educational Resources Information Center
Geueke, Anna; Morley, Michael G.; Morley, Katharine; Lorch, Alice; Jackson, MaryLou; Lambrou, Angeliki; Wenberg, June; Oteng-Amoako, Afua
2012-01-01
Introduction: Some persons with Charles Bonnet syndrome (CBS) suffer significant anxiety because of their visual hallucinations, while others do not. The aim of the study presented here was to compare levels of anxiety in persons with low vision with and without CBS. Methods: This retrospective study compared the level of anxiety in 31 persons…
A lightweight, inexpensive robotic system for insect vision.
Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex
2017-09-01
Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
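The classic model for the motion detection mentioned above is the Hassenstein-Reichardt elementary motion detector, which correlates a delayed copy of one photoreceptor's signal with its neighbour's. This is a minimal sketch of that standard model, not the paper's implementation; signal shapes and the delay are illustrative.

```python
import numpy as np

def reichardt_response(left, right, delay):
    """Hassenstein-Reichardt elementary motion detector: correlate a
    delayed copy of each photoreceptor signal with its neighbour and
    subtract the mirror term. Positive output indicates left-to-right
    motion; negative indicates right-to-left."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    forward = np.sum(left[:-delay] * right[delay:])
    backward = np.sum(right[:-delay] * left[delay:])
    return forward - backward

# A Gaussian luminance pulse passes the left receptor, then the right
# one 5 samples later (rightward motion).
t = np.arange(200)
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 3.0) ** 2)
left, right = pulse(80), pulse(85)
r_rightward = reichardt_response(left, right, delay=5)
r_leftward = reichardt_response(right, left, delay=5)
```

Arrays of such detectors, one per neighbouring photoreceptor pair, produce the optic-flow fields used for insect-inspired navigation.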
Self-Taught Low-Rank Coding for Visual Learning.
Li, Sheng; Li, Kang; Fu, Yun
2018-03-01
The lack of labeled data presents a common challenge in many computer vision and machine learning tasks. Semisupervised learning and transfer learning methods have been developed to tackle this challenge by utilizing auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, which is a special type of transfer learning, has fewer restrictions on the choice of auxiliary data. It has shown promising performance in visual learning. However, existing self-taught learning methods usually ignore the structure information in data. In this paper, we focus on building a self-taught coding framework, which can effectively utilize the rich low-level pattern information abstracted from the auxiliary domain, in order to characterize the high-level structural information in the target domain. By leveraging a high quality dictionary learned across auxiliary and target domains, the proposed approach learns expressive codings for the samples in the target domain. Since many types of visual data have been proven to contain subspace structures, a low-rank constraint is introduced into the coding objective to better characterize the structure of the given target set. The proposed representation learning framework is called self-taught low-rank (S-Low) coding, which can be formulated as a nonconvex rank-minimization and dictionary learning problem. We devise an efficient majorization-minimization augmented Lagrange multiplier algorithm to solve it. Based on the proposed S-Low coding mechanism, both unsupervised and supervised visual learning algorithms are derived. Extensive experiments on five benchmark data sets demonstrate the effectiveness of our approach.
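The core step inside such rank-minimization solvers is singular value thresholding, the proximal operator of the nuclear norm that the augmented Lagrange multiplier algorithm applies repeatedly. A minimal sketch of that one step, with an illustrative shrinkage parameter rather than one tuned for S-Low coding:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value of M
    by tau (clipping at zero), the proximal operator of the nuclear
    norm used inside low-rank recovery solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
# A rank-2 matrix corrupted by small dense noise.
L = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 50))
X = L + 0.01 * rng.normal(size=(50, 50))
X_low = svt(X, tau=1.0)
rank_before = np.linalg.matrix_rank(X)
rank_after = np.linalg.matrix_rank(X_low, tol=1e-6)
```

Because the noise singular values sit far below the two structural ones, a single thresholding pass already recovers the underlying subspace structure that the low-rank constraint in the coding objective is designed to exploit.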
Metal surface corrosion grade estimation from single image
NASA Astrophysics Data System (ADS)
Chen, Yijun; Qi, Lin; Sun, Huyuan; Fan, Hao; Dong, Junyu
2018-04-01
Metal corrosion can cause many problems; how to quickly and effectively assess the grade of metal corrosion and remediate it in time is a very important issue. Typically, this is done by trained surveyors at great cost. Assisting them in the inspection process with computer vision and artificial intelligence would decrease the inspection cost. In this paper, we propose a dataset of metal surface corrosion for computer vision detection and present a comparison between standard computer vision techniques using OpenCV and a deep learning method for automatic metal surface corrosion grade estimation from a single image on this dataset. The test was performed by classifying images and calculating the accuracy of the two different approaches.
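A toy version of the classical branch might grade corrosion from the fraction of rust-coloured pixels in the image; the colour rules and grade thresholds below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def corrosion_grade(rgb, thresholds=(0.05, 0.20, 0.50)):
    """Toy classical baseline: count reddish-brown pixels (red channel
    dominant over green and blue) and map the rust fraction to a
    grade 0-3 via the given thresholds. Colour rules and thresholds
    are illustrative, not calibrated to any corrosion standard."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    rusty = (r > 100) & (r > 1.3 * g) & (r > 1.3 * b)
    fraction = float(rusty.mean())
    grade = int(sum(fraction > t for t in thresholds))
    return grade, fraction

# Synthetic 10x10 patch: three rust-coloured rows on grey metal.
img = np.full((10, 10, 3), 150, dtype=np.uint8)
img[:3, :, 0] = 160  # rust rows: strong red...
img[:3, :, 1] = 70   # ...weak green...
img[:3, :, 2] = 40   # ...weak blue
grade, fraction = corrosion_grade(img)
```

A deep learning approach replaces the hand-written colour rule and thresholds with features and a decision boundary learned from labelled examples, which is exactly the trade-off the paper's comparison measures.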
Causes of visual impairment in children with low vision.
Shah, Mufarriq; Khan, Mirzaman; Khan, Muhammad Tariq; Khan, Mohammad Younas; Saeed, Nasir
2011-02-01
To determine the main causes of visual impairment in children with low vision. To assess the need for spectacles and low vision devices (LVDs) in children and to evaluate visual outcome after using their LVDs for far and near distance. Observational study. Khyber Institute of Ophthalmic Medical Sciences, Peshawar, Pakistan, from June 2006 to December 2007. The clinical records of 270 children with low vision, aged 4-16 years, attending the Low Vision Clinic were included. All those children, aged 4-16 years, who had corrected visual acuity (VA) less than 6/18 in the better eye after medical or surgical treatment, were included in the study. WHO low vision criteria were used to classify children as visually impaired, severely visually impaired, or blind. Results were described as percentage frequencies. One hundred and eighty-nine (70%) were males and 81 (30%) were females. The male to female ratio was 2.3:1. The main causes of visual impairment included nystagmus (15%), Stargardt's disease (14%), maculopathies (13%), myopic macular degeneration (11%) and oculocutaneous albinism (7%). The percentages of visually impaired, severely visually impaired and blind were 33.8%, 27.2% and 39.0% respectively. Spectacles were prescribed to 146 patients and telescopes were prescribed to 75 patients. Both spectacles and telescopes were prescribed to 179 patients, while the Ocutech telescope was prescribed to 4 patients. Retinal diseases, nystagmus, and macular conditions were mainly responsible for low vision in children. Visually impaired children, especially those with hereditary/congenital ocular anomalies, benefit from refraction and low vision services, which facilitate vision enhancement and inclusive education.
Nguyen, Nhung X; Besch, Dorothea; Bartz-Schmidt, Karl; Gelisken, Faik; Trauzettel-Klosinski, Susanne
2007-12-01
The aim of the present study was to evaluate the power of magnification required, reading performance with low-vision aids and vision-related quality of life with reference to reading ability and ability to carry out day-to-day activities in patients after macular translocation. This study included 15 patients who had undergone macular translocation with 360-degree peripheral retinectomy. The mean length of follow-up was 19.2 +/- 10.8 months (median 11 months). At the final examination, the impact of visual impairment on reading ability and quality of life was assessed according to a modified 9-item questionnaire in conjunction with a comprehensive clinical examination, which included assessment of best corrected visual acuity (BCVA), the magnification power required for reading, use of low-vision aids and reading speed. Patients rated the extent to which low vision restricted their ability to read and participate in other activities that affect quality of life. Responses were scored on a scale of 1.0 (optimum self-evaluation) to 5.0 (very poor). In the operated eye, overall mean postoperative BCVA (distance) was not significantly better than mean preoperative BCVA (0.11 +/- 0.06 and 0.15 +/- 0.08, respectively; p = 0.53). However, 53% of patients reported a subjective increase in visual function after treatment. At the final visit, the mean magnification required was x 7.7 +/- 6.7. A total of 60% of patients needed optical magnifiers for reading and in 40% of patients closed-circuit TV systems were necessary. All patients were able to read newspaper print using adapted low-vision aids at a mean reading speed of 71 +/- 40 words per minute. Mean self-reported scores were 3.2 +/- 1.1 for reading, 2.5 +/- 0.7 for day-to-day activities and 2.7 +/- 3.0 for outdoor walking and using steps or stairs. 
Patients' levels of dependency were significantly correlated with scores for reading (p = 0.01), day-to-day activities (p < 0.001) and outdoor walking and using steps (p = 0.001). The evaluation of self-reported visual function and vision-related quality of life in patients after macular translocation is necessary to obtain detailed information on treatment effects. Our results indicated improvement in patients' subjective evaluations of visual function, without significant improvement in visual acuity. The postoperative clinical benefits of treatment coincide with subjective benefits in terms of reading ability, quality of life and patient satisfaction. Our study confirms the importance and efficiency of visual rehabilitation with aids for low vision after surgery.
Udeh, N N; Eze, B I; Onwubiko, S N; Arinze, O C; Onwasigwe, E N; Umeh, R E
2014-06-01
To assess eye care service utilization, and identify access barriers in a south-eastern Nigerian albino population. The study was a population-based, cross-sectional survey conducted in Enugu State between August 2011 and January 2012. Using the database of the state's Albino Foundation and tailored awareness creation, persons living with albinism were identified and recruited at two study centres. Data on participants' socio-demographics, perception of vision, visual needs, previous eye examination and/or low vision assessment, and use of glasses or low vision devices were collected. Reasons for non-utilisation of available vision care services were also obtained. Descriptive and comparative statistics were performed. A p < 0.05 was considered statistically significant. The participants (n = 153; males 70; females 83; sex ratio 1:1.1) were aged 23.46 +/- 10.44 SD years (range 6-60 years). Most (95.4%) of the participants had no previous low vision assessment and none (0.0%) had used a low vision device. Of the participants, 82.4% reported a previous eye examination, and 33.3% had not used spectacles previously, despite the existing need. Ignorance (88.9%) and poor access (8.5%) were the main barriers to uptake of vision care services. In Enugu, Nigeria, there is poor awareness and low utilization of vision care services among people with albinism. The identified barriers to vision care access are amenable to awareness creation and logistic change in the provision of appropriate vision care services.
Prevalence and causes of blindness and low vision among adults in Fiji.
Ramke, Jacqueline; Brian, Garry; Maher, Louise; Qalo Qoqonokana, Mundi; Szetu, John
2012-07-01
To estimate the prevalence and causes of blindness and low vision among adults aged ≥40 years in Fiji. Population-based cross-sectional study. Adults aged ≥40 years in Viti Levu, Fiji. A population-based cross-sectional survey used multistage cluster random sampling to identify 34 clusters of 40 people. A cause of vision loss was determined for each eye with presenting vision worse than 6/18. Blindness (better eye presenting vision worse than 6/60), low vision (better eye presenting vision worse than 6/18, but 6/60 or better). Of 1892 people enumerated, 1381 participated (73.0%). Adjusting sample data for ethnicity, gender, age and domicile, the prevalence of blindness was 2.6% (95% confidence interval 1.7, 3.4) and low vision was 7.2% (95% confidence interval 5.9, 8.6) among adults aged ≥40 years. On multivariate analysis, being ≥70 years was a risk factor for blindness, but ethnicity, gender and urban/rural domicile were not. Being Indo-Fijian, female and older were risk factors for vision impairment (better eye presenting vision worse than 6/18). Cataract was the most common cause of bilateral blindness (71.1%). Among participants with low vision, uncorrected refractive error caused 63.3% and cataract was responsible for 25.0%. Strategies that provide accessible cataract and refractive error services producing good quality outcomes will likely have the greatest impact on reducing vision impairment. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
Aki, Esra; Atasavun, Songül; Kayihan, Holya
2008-06-01
Kinesthetic sense plays an important role in writing. Children with low vision lack sensory input from the environment given their loss of vision. This study assessed the effect of upper extremity kinesthetic sense on writing function in two groups, one of students with low vision (9 girls and 11 boys, 9.4 +/- 1.9 yr. of age) and one of sighted students (10 girls and 10 boys, 10.1 +/- 1.3 yr. of age). All participants were given the Kinesthesia Test and Jebsen Hand Function Test-Writing subtest. Students with low vision scored lower on kinesthetic perception and writing performance than sighted peers. The correlation between scores for writing performance and upper extremity kinesthetic sense in the two groups was significant (r = -.34). The probability of deficiencies in kinesthetic information in students with low vision must be remembered.
14 CFR 135.178 - Additional emergency equipment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... location if it is more practical because of low headroom; (ii) Next to each floor level passenger emergency...; and (iii) On each bulkhead or divider that prevents fore and aft vision along the passenger cabin, to...
Development of a battery of functional tests for low vision.
Dougherty, Bradley E; Martin, Scott R; Kelly, Corey B; Jones, Lisa A; Raasch, Thomas W; Bullimore, Mark A
2009-08-01
We describe the development and evaluation of a battery of tests of functional visual performance of everyday tasks intended to be suitable for assessment of low vision patients. The functional test battery comprises: Reading rate: reading aloud 20 unrelated words for each of four print sizes (8, 4, 2, & 1 M); Telephone book: finding a name and reading the telephone number; Medicine bottle label: reading the name and dosing; Utility bill: reading the due date and amount due; Cooking instructions: reading cooking time on a food package; Coin sorting: making a specified amount from coins placed on a table; Playing card recognition: identifying denomination and suit; and Face recognition: identifying expressions of printed, life-size faces at 1 and 3 m. All tests were timed except face and playing card recognition. Fourteen normally sighted and 24 low vision subjects were assessed with the functional test battery. Visual acuity, contrast sensitivity, and quality of life (National Eye Institute Visual Function Questionnaire 25 [NEI-VFQ 25]) were measured and the functional tests repeated. Subsequently, 23 low vision patients participated in a pilot randomized clinical trial with half receiving low vision rehabilitation and half a delayed intervention. The functional tests were administered at enrollment and 3 months later. Normally sighted subjects could perform all tasks, but the proportion of trials performed correctly by the low vision subjects ranged from 35% for face recognition at 3 m to 95% for playing card identification. On average, low vision subjects performed three times slower than the normally sighted subjects. Timed tasks with a visual search component showed poorer repeatability. In the pilot clinical trial, low vision rehabilitation produced the greatest improvement for the medicine bottle and cooking instruction tasks. Performance of patients on these functional tests has been assessed. Some appear responsive to low vision rehabilitation.
ERIC Educational Resources Information Center
Reardon, A. W.; And Others
1993-01-01
Instructional materials on hypoglycemia, foot care, and exercise were developed and field tested with 98 diabetes patients who had low vision and/or low literacy. A pretest and posttest revealed an 81% reduction in wrong answers overall and a 72% reduction in wrong answers by a subset with low vision. (Author/DB)
Image enhancement filters significantly improve reading performance for low vision observers
NASA Technical Reports Server (NTRS)
Lawton, T. B.
1992-01-01
As people age, so do their photoreceptors; many photoreceptors in central vision stop functioning when a person reaches their late sixties or early seventies. We studied low vision observers with losses in central vision, those with age-related maculopathies. Low vision observers no longer see high spatial frequencies and are unable to resolve fine edge detail. We developed image enhancement filters to compensate for the low vision observer's losses in contrast sensitivity to intermediate and high spatial frequencies. The filters work by boosting the amplitude of the less visible intermediate spatial frequencies relative to the lower spatial frequencies. These image enhancement filters not only reduce the magnification needed for reading by up to 70 percent, but they also increase the observer's reading speed by 2-4 times. A summary of this research is presented.
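As a rough illustration of this kind of enhancement, the sketch below boosts a band of intermediate spatial frequencies in the Fourier domain. The band limits and gain are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def boost_intermediate_frequencies(image, low_cut=0.05, high_cut=0.3, gain=3.0):
    """Amplify a band of intermediate spatial frequencies of a grayscale
    image in the Fourier domain. Frequencies are in cycles per pixel
    (0.5 = Nyquist); band limits and gain are illustrative, not the
    values used in the study."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    mask = np.where((radius >= low_cut) & (radius <= high_cut), gain, 1.0)
    return np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

# A grating at 0.125 cycles/pixel falls inside the boosted band,
# so its contrast should triple.
x = np.arange(128) * 2 * np.pi * 16 / 128   # exactly 16 cycles across 128 px
grating = np.tile(np.sin(x), (128, 1))
enhanced = boost_intermediate_frequencies(grating)
```

A real filter for low vision would be shaped by the individual observer's contrast sensitivity function rather than a flat gain over a fixed band.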
Lumber Grading With A Computer Vision System
Richard W. Conners; Tai-Hoon Cho; Philip A. Araman
1989-01-01
Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...
Range Image Flow using High-Order Polynomial Expansion
2013-09-01
included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more... Journal of Computer Vision, vol. 92, no. 1, pp. 1‒31. 2. G. Bradski and A. Kaehler. 2008. Learning OpenCV: Computer Vision with the OpenCV Library
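For context, gradient-based motion estimation between two frames rests on the brightness-constancy equation; the minimal least-squares sketch below recovers a single global translation. It is a toy stand-in, not the report's high-order polynomial expansion or OpenCV's Farneback implementation, and it ignores everything a real range-flow estimator must handle.

```python
import numpy as np

def estimate_global_flow(prev_frame, curr_frame, border=2):
    """Least-squares estimate of a single global translation (u, v)
    between two frames from the brightness-constancy equation
    Ix*u + Iy*v + It = 0. A toy stand-in for dense flow methods such
    as polynomial expansion; it cannot represent local motion."""
    prev_frame = prev_frame.astype(float)
    curr_frame = curr_frame.astype(float)
    Iy, Ix = np.gradient(prev_frame)
    It = curr_frame - prev_frame
    # Drop a small border where gradients (and any wrap-around) are unreliable.
    s = (slice(border, -border), slice(border, -border))
    A = np.stack([Ix[s].ravel(), Iy[s].ravel()], axis=1)
    b = -It[s].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Smooth synthetic frame and a copy shifted right by one pixel.
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
frame0 = np.sin(xx / 6.0) + np.cos(yy / 9.0)
frame1 = np.roll(frame0, 1, axis=1)   # true flow: u = +1, v = 0
u, v = estimate_global_flow(frame0, frame1)
```

Dense methods like Farneback's fit such models locally, per neighborhood, rather than globally over the whole frame.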
Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.
1983-08-15
obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey
A Tangible Programming Tool for Children to Cultivate Computational Thinking
Wang, Danli; Liu, Zhen
2014-01-01
Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5–9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity. PMID:24719575
Relationship between writing skills and visual-motor control in low-vision students.
Atasavun Uysal, Songül; Aki, Esra
2012-08-01
The purpose of this study was to investigate the relationship between handwriting skills and visual motor control among students with low vision and to compare this with the performance of their normally sighted peers. 42 students with low vision and 26 normally sighted peers participated. The Bruininks-Oseretsky Motor Proficiency Test-Short Form (BOTMP-SF), the Jebsen Taylor Hand Function Test's writing subtest, and a legibility assessment were administered. Significant differences were found between groups for students' writing speed, legibility, and visual motor control. Visual motor control was correlated with both writing speed and legibility. Students with low vision had poorer handwriting performance, with lower legibility and slower writing speed. Writing performance time was related to visual motor control in students with low vision.
Quality Parameters of Six Cultivars of Blueberry Using Computer Vision
Celis Cofré, Daniela; Silva, Patricia; Enrione, Javier; Osorio, Fernando
2013-01-01
Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: “Duke,” “Brigitta”, “Elliott”, “Centurion”, “Star,” and “Jewel”, measuring quality parameters such as °Brix, pH, and moisture content using standard techniques, and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0–21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P < 0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision; these changes were used to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative to currently used subjective measurements. PMID:26904598
Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An
2016-01-01
With consumer concerns increasing over food quality and safety, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems in recent years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments of food quality and safety during food production and processing. Computer vision, a nondestructive assessment approach, can estimate the characteristics of food products with the advantages of high speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review presents the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013 and also discusses its future trends in combination with spectroscopy.
Scene-aware joint global and local homographic video coding
NASA Astrophysics Data System (ADS)
Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.
2016-09-01
Perspective motion is common in video content that is captured and compressed for various applications, including cloud gaming and vehicle and aerial monitoring. Existing approaches based on an eight-parameter homography motion model cannot deal with it efficiently, due either to low prediction accuracy or to excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed bit rate savings ranging from 3.7 to 9.1%.
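The eight-parameter homography model referred to above can be estimated from four point correspondences with the textbook direct linear transform (DLT). The sketch below is illustrative of the model, not of the paper's coding scheme, and omits the coordinate normalization and robust fitting a production pipeline would use.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform (DLT) estimate of the 3x3 homography H
    mapping src to dst points (each an (N, 2) array, N >= 4). A
    textbook sketch of the eight-parameter model; production code
    would normalize coordinates and add robust (RANSAC) fitting."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)       # null-space vector of the constraint matrix
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

# Recover a known homography from four exact correspondences.
H_true = np.array([[1.1, 0.02, 5.0],
                   [0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```

The model has eight free parameters because H is defined only up to scale, which is why it is fixed here by dividing through by H[2, 2].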
Reading aids for adults with low vision.
Virgili, Gianni; Acosta, Ruthy; Bentley, Sharon A; Giacomelli, Giovanni; Allcock, Claire; Evans, Jennifer R
2018-04-17
The purpose of low-vision rehabilitation is to allow people to resume or to continue to perform daily living tasks, with reading being one of the most important. This is achieved by providing appropriate optical devices and special training in the use of residual vision and low-vision aids, which range from simple optical magnifiers to high-magnification video magnifiers. To assess the effects of different visual reading aids for adults with low vision. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 12); MEDLINE Ovid; Embase Ovid; BIREME LILACS; OpenGrey; the ISRCTN registry; ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). The date of the search was 17 January 2018. This review includes randomised and quasi-randomised trials that compared any device or aid used for reading to another device or aid in people aged 16 or over with low vision as defined by the study investigators. We did not compare low-vision aids with no low-vision aid since it is obviously not possible to measure reading speed, our primary outcome, in people who cannot read ordinary print. We considered reading aids that maximise the person's visual reading capacity, for example by increasing image magnification (optical and electronic magnifiers), augmenting text contrast (coloured filters) or trying to optimise the viewing angle or gaze position (such as prisms). We have not included studies investigating reading aids that allow reading through hearing, such as talking books or screen readers, or through touch, such as Braille-based devices, and we did not consider rehabilitation strategies or complex low-vision interventions. We used standard methods expected by Cochrane. At least two authors independently assessed trial quality and extracted data. The primary outcome of the review was reading speed in words per minute.
Secondary outcomes included reading duration and acuity, ease and frequency of use, quality of life and adverse outcomes. We graded the certainty of the evidence using GRADE. We included 11 small studies with a cross-over design (435 people overall), one study with two parallel arms (37 participants) and one study with three parallel arms (243 participants). These studies took place in the USA (7 studies), the UK (5 studies) and Canada (1 study). Age-related macular degeneration (AMD) was the most frequent cause of low vision, with 10 studies reporting 50% or more participants with the condition. Participants were aged 9 to 97 years in these studies, but most were older (the median average age across studies was 71 years). None of the studies were masked; otherwise we largely judged the studies to be at low risk of bias. All studies reported the primary outcome: results for reading speed. None of the studies measured or reported adverse outcomes. Reading speed may be higher with stand-mounted closed circuit television (CCTV) than with optical devices (stand or hand magnifiers) (low-certainty evidence, 2 studies, 92 participants). There was moderate-certainty evidence that reading duration was longer with the electronic devices and that they were easier to use. Similar results were seen for electronic devices with the camera mounted in a 'mouse'. Mixed results were seen for head-mounted devices, with one study of 70 participants finding a mouse-based head-mounted device to be better than an optical device and another study of 20 participants finding optical devices better (low-certainty evidence). Low-certainty evidence from three studies (93 participants) suggested no important differences in reading speed, acuity or ease of use between stand-mounted and head-mounted electronic devices.
Similarly, low-certainty evidence from one study of 100 participants suggested no important differences between a 9.7'' tablet computer and stand-mounted CCTV in reading speed, with imprecise estimates (other outcomes not reported). Low-certainty evidence showed little difference in reading speed in one study with 100 participants that added electronic portable devices to preferred optical devices. One parallel-arm study in 37 participants found low-certainty evidence of higher reading speed at one month if participants received a CCTV at the initial rehabilitation consultation instead of a standard low-vision aids prescription alone. A parallel-arm study including 243 participants with AMD found no important differences in reading speed, reading acuity and quality of life between prism spectacles and conventional spectacles. One study in 10 people with AMD found that reading speed with several overlay coloured filters was no better and possibly worse than with a clear filter (low-certainty evidence, other outcomes not reported). There is insufficient evidence supporting the use of a specific type of electronic or optical device for the most common profiles of low-vision aid users. However, there is some evidence that stand-mounted electronic devices may improve reading speeds compared with optical devices. There is less evidence to support the use of head-mounted or portable electronic devices; however, the technology of electronic devices may have improved since the studies included in this review took place, and modern portable electronic devices have desirable properties such as flexible use of magnification. There is no good evidence to support the use of filters or prism spectacles.
Future research should focus on assessing sustained long-term use of each device and the effect of different training programmes on its use, combined with investigation of which patient characteristics predict performance with different devices, including some of the more costly electronic devices.
Development of embedded real-time and high-speed vision platform
NASA Astrophysics Data System (ADS)
Ouyang, Zhenxing; Dong, Yimin; Yang, Hua
2015-12-01
Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, a personal computer (PC), whose over-large size makes it unsuitable for compact systems, is an indispensable component for human-computer interaction in traditional high-speed vision platforms. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which works entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP-and-FPGA board is developed to implement image parallel algorithms in the FPGA and image sequential algorithms in the DSP. Hence, ER-HVP Vision, with a size of 320 mm x 250 mm x 87 mm, offers a more compact solution. Experimental results are also given to show that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible on this newly developed vision platform.
Bali, Jatinder; Navin, Neeraj; Thakur, Bali Renu
2007-01-01
To study the knowledge, attitude and practices (KAP) towards computer vision syndrome prevalent in Indian ophthalmologists and to assess whether 'computer use by practitioners' had any bearing on the knowledge and practices in computer vision syndrome (CVS). A random KAP survey was carried out on 300 Indian ophthalmologists using a 34-point spot-questionnaire in January 2005. All the doctors who responded were aware of CVS. The chief presenting symptoms were eyestrain (97.8%), headache (82.1%), tiredness and burning sensation (79.1%), watering (66.4%) and redness (61.2%). Ophthalmologists using computers reported that focusing from distance to near and vice versa (P = 0.006, chi-squared test), blurred vision at a distance (P = 0.016, chi-squared test) and blepharospasm (P = 0.026, chi-squared test) formed part of the syndrome. The main mode of treatment used was tear substitutes. Half of the ophthalmologists (50.7%) were not prescribing any spectacles. They did not have any preference for any special type of glasses (68.7%) or spectral filters. Computer users were more likely to prescribe sedatives/anxiolytics (P = 0.04, chi-squared test), spectacles (P = 0.02, chi-squared test) and conscious frequent blinking (P = 0.003, chi-squared test) than non-computer users. All respondents were aware of CVS. Confusion regarding treatment guidelines was observed in both groups. Computer-using ophthalmologists were better informed about symptoms and diagnostic signs but were misinformed about treatment modalities.
Home Lighting Assessment for Clients With Low Vision
Bhorade, Anjali; Gordon, Mae; Hollingsworth, Holly; Engsberg, Jack E.; Baum, M. Carolyn
2013-01-01
OBJECTIVE. The goal was to develop an objective, comprehensive, near-task home lighting assessment for older adults with low vision. METHOD. A home lighting assessment was developed and tested with older adults with low vision. Interrater and test–retest reliability studies were conducted. Clinical utility was assessed by occupational therapists with expertise in low vision rehabilitation. RESULTS. Interrater reliability was high (intraclass correlation coefficient [ICC] = .83–1.0). Test–retest reliability was moderate (ICC = .67). Responses to a Clinical Utility Feedback Form developed for this study indicated that the Home Environment Lighting Assessment (HELA) has strong clinical utility. CONCLUSION. The HELA provides a structured tool to describe the quantitative and qualitative aspects of home lighting environments where near tasks are performed and can be used to plan lighting interventions. The HELA has the potential to affect assessment and intervention practices of rehabilitation professionals in the area of low vision and improve near-task performance of people with low vision. PMID:24195901
Gao, Guohong; Yu, Manrong; Dai, Jinhui; Xue, Feng; Wang, Xiaoying; Zou, Leilei; Chen, Minjie; Ma, Fei
2016-05-01
The aim was to describe the characteristics of the paediatric population attending the low vision clinic of the Eye and ENT Hospital, located in Shanghai, China. The clinical records of all the children attending the low vision clinic of Eye and ENT Hospital affiliated to Fudan University between January 1, 2009 and May 31, 2014 were retrospectively reviewed. The main data analysed were age, gender, education, visual demand, diagnosis, visual acuity and prescription of low vision aids. Of the 162 patients, 104 (64.20 per cent) were male. The age range of the study population was three to 20 years, with a mean of 10.73 ± 5.08 years. There were 43.21 per cent with moderate visual impairment, 26.54 per cent had severe visual impairment and 19.75 per cent were blind. The leading causes of visual impairment were congenital cataract (21.61 per cent), optic atrophy (14.20 per cent), macular dystrophy (11.73 per cent), nystagmus (9.88 per cent) and congenital glaucoma (9.26 per cent). The most frequently prescribed low vision devices for distant and near vision were binocular telescopes (23.57 per cent) and stand magnifiers (22.93 per cent), respectively. Young age (up to six years, 37.93 per cent), high cost (24.14 per cent), cosmetic reasons (17.24 per cent) and inconvenience (13.79 per cent) were the main reasons that children or parents refused to accept useful low vision aids. Congenital and hereditary diseases constituted the major causes of low vision in the study population. Strategies that make good-quality rehabilitation services available, affordable and accessible, especially in developing countries, will have the greatest impact on visual impairment. In China, both urban and rural, the coverage of low vision services should be strengthened. © 2016 The Authors. Clinical and Experimental Optometry © 2016 Optometry Australia.
Fusion of Multiple Sensing Modalities for Machine Vision
1994-05-31
Modeling of Non-Homogeneous 3-D Objects for Thermal and Visual Image Synthesis," Pattern Recognition, in press. [11] Nair, Dinesh, and J. K. Aggarwal...20th AIPR Workshop: Computer Vision--Meeting the Challenges, McLean, Virginia, October 1991. Nair, Dinesh, and J. K. Aggarwal, "An Object Recognition...Computer Engineering August 1992 Sunil Gupta Ph.D. Student Mohan Kumar M.S. Student Sandeep Kumar M.S. Student Xavier Lebegue Ph.D., Computer
The Implications of Pervasive Computing on Network Design
NASA Astrophysics Data System (ADS)
Briscoe, R.
Mark Weiser's late-1980s vision of an age of calm technology with pervasive computing disappearing into the fabric of the world [1] has been tempered by an industry-driven vision with more of a feel of conspicuous consumption. In the modified version, everyone carries around consumer electronics to provide natural, seamless interactions both with other people and with the information world, particularly for eCommerce, but still through a pervasive computing fabric.
Use of 3D vision for fine robot motion
NASA Technical Reports Server (NTRS)
Lokshin, Anatole; Litwin, Todd
1989-01-01
An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and encountered problems.
Compensation for Blur Requires Increase in Field of View and Viewing Time
Kwon, MiYoung; Liu, Rong; Chien, Lillian
2016-01-01
Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image, such as a letter or face, was manipulated with a low-pass filter. In experiment 1, studying the spatial requirement, observers viewed a fixed-size object through a window of varying sizes, which was repositioned until object identification (moving window paradigm). The field of view requirement, quantified as the number of “views” (window repositions) for correct recognition, was obtained for three blur levels, including no blur. In experiment 2, studying the temporal requirement, we determined threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For letter and face recognition, we found blur significantly increased the number of views, suggesting a larger field of view is required to recognize blurry objects. We also found blur significantly increased threshold viewing time, suggesting longer temporal integration is necessary to recognize blurry objects. The temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest compensating for blur requires increased field of view and viewing time. The need for larger spatial and longer temporal integration for recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered when developing low vision rehabilitation or assistive aids. PMID:27622710
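The low-pass filtering used to manipulate spatial resolution can be sketched as a separable Gaussian blur; the sigma below is an illustrative choice, not a parameter from the study.

```python
import numpy as np

def gaussian_lowpass(image, sigma):
    """Blur a grayscale image with a separable Gaussian kernel,
    mimicking the low-pass filtering used to manipulate spatial
    resolution. Sigma (in pixels) is an illustrative parameter,
    not a value from the study."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    image = image.astype(float)
    # Convolve rows, then columns (the 2-D Gaussian is separable).
    blurred = np.apply_along_axis(np.convolve, 1, image, kernel, mode='same')
    blurred = np.apply_along_axis(np.convolve, 0, blurred, kernel, mode='same')
    return blurred

# Blurring white noise strongly attenuates its high-frequency content.
rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
blurred = gaussian_lowpass(noise, sigma=2.0)
```

Separability means the 2-D blur costs two 1-D convolutions per pixel instead of one full 2-D convolution, which is why it is the standard implementation.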
Riemann tensor of motion vision revisited.
Brill, M
2001-07-02
This note shows that the Riemann-space interpretation of motion vision developed by Barth and Watson is neither necessary for their results, nor sufficient to handle an intrinsic coordinate problem. Recasting the Barth-Watson framework as a classical velocity-solver (as in computer vision) solves these problems.
Evaluation of the Waggoner Computerized Color Vision Test.
Ng, Jason S; Self, Eriko; Vanston, John E; Nguyen, Andrew L; Crognale, Michael A
2015-04-01
Clinical color vision evaluation has been based primarily on the same set of tests for the past several decades. Recently, computer-based color vision tests have been devised, and these have several advantages but are still not widely used. In this study, we evaluated the Waggoner Computerized Color Vision Test (CCVT), which was developed for widespread use with common computer systems. A sample of subjects with (n = 59) and without (n = 361) color vision deficiency (CVD) were tested on the CCVT, the anomaloscope, the Richmond HRR (Hardy-Rand-Rittler) (4th edition), and the Ishihara test. The CCVT was administered in two ways: (1) on a computer monitor using its default settings and (2) on one standardized to a correlated color temperature (CCT) of 6500 K. Twenty-four subjects with CVD performed the CCVT both ways. Sensitivity, specificity, and correct classification rates were determined. The screening performance of the CCVT was good (95% sensitivity, 100% specificity). The CCVT classified subjects as deutan or protan in agreement with anomaloscopy 89% of the time. It generally classified subjects as having a more severe defect compared with other tests. Results from 18 of the 24 subjects with CVD tested under both default and calibrated CCT conditions were the same, whereas the results from 6 subjects had better agreement with other test results when the CCT was set. The Waggoner CCVT is an adequate color vision screening test with several advantages and appears to provide a fairly accurate diagnosis of deficiency type. Used in conjunction with other color vision tests, it may be a useful addition to a color vision test battery.
Ethical, environmental and social issues for machine vision in manufacturing industry
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Whelan, Paul F.
1995-10-01
Some of the ethical, environmental and social issues relating to the design and use of machine vision systems in manufacturing industry are highlighted. The authors' aim is to emphasize some of the more important issues, and raise general awareness of the need to consider the potential advantages and hazards of machine vision technology. However, in a short article like this, it is impossible to cover the subject comprehensively. This paper should therefore be seen as a discussion document, which it is hoped will provoke more detailed consideration of these very important issues. It follows from an article presented at last year's workshop. Five major topics are discussed: (1) The impact of machine vision systems on the environment; (2) The implications of machine vision for product and factory safety, and the health and well-being of employees; (3) The importance of intellectual integrity in a field requiring a careful balance of advanced ideas and technologies; (4) Commercial and managerial integrity; and (5) The impact of machine vision technology on employment prospects, particularly for people with low skill levels.
Automated Ecological Assessment of Physical Activity: Advancing Direct Observation.
Carlson, Jordan A; Liu, Bo; Sallis, James F; Kerr, Jacqueline; Hipp, J Aaron; Staggs, Vincent S; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M
2017-12-01
Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82-0.98). Total MET-minutes were slightly underestimated by 9.3-17.1% and the ICCs were good (0.68-0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings.
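The agreement statistic reported above can be computed as follows; the sketch assumes the two-way random-effects, absolute-agreement form ICC(2,1) of Shrout and Fleiss, which is one common choice and not necessarily the exact variant used in the study.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement ICC(2,1) per Shrout
    and Fleiss, for an (n_subjects, k_raters) array. One common ICC
    form; the study's exact variant is an assumption here."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two nearly agreeing measurement methods (e.g. camera vs. accelerometer).
method_a = np.array([10.0, 12.0, 8.0, 15.0, 11.0, 9.0])
method_b = method_a + np.array([0.2, -0.3, 0.1, 0.4, -0.2, 0.0])
icc = icc_2_1(np.column_stack([method_a, method_b]))
```

Small disagreements relative to the between-subject spread yield an ICC close to 1, matching the "excellent (0.82-0.98)" range reported above.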
Efficacy of a Low Vision Patient Consultation
ERIC Educational Resources Information Center
Siemsen, Dennis W.; Bergstrom, A. Renée; Hathaway, Julie C.
2005-01-01
A variety of obstacles can prevent individuals with low vision from deriving the greatest possible benefit from the rehabilitation process, including inadequate understanding of their visual impairment, lack of knowledge about available services, and misconceptions about low vision devices. This study explores the use of a…
The History and Future of Low Vision Services in the United States
ERIC Educational Resources Information Center
Mogk, Lylas; Goodrich, Gregory
2004-01-01
This article discusses the history of low vision services in the United States. The field began to gain momentum as the term "low vision" was conceptualized and coined, and this momentum is rapidly increasing with changes in the demographics of visual impairment.
NASA Astrophysics Data System (ADS)
Sedlar, F.; Turpin, E.; Kerkez, B.
2014-12-01
As megacities around the world continue to develop at breakneck speed, future development, investment, and social wellbeing are threatened by a number of environmental and social factors. Chief among these is frequent, persistent, and unpredictable urban flooding. Jakarta, Indonesia, with a population of 28 million, is a prime example of a city plagued by such flooding. Although Jakarta has ample hydraulic infrastructure already in place, with more being constructed, the increasing severity of the flooding it experiences stems not from a lack of hydraulic infrastructure but rather from the failure of existing infrastructure. As was demonstrated during the most recent floods in Jakarta, this failure is often the result of excessive amounts of trash in the flood canals. The trash clogs pumps and reduces overall system capacity. Despite this critical weakness of flood control in Jakarta, no data exist on the overall amount of trash in the flood canals, much less on how it varies temporally and spatially. The recent availability of low-cost photography provides a means to obtain such data. Time-lapse photography postprocessed with computer vision algorithms yields a low-cost, remote, and automatic solution to measuring trash fluxes. When combined with the measurement of key hydrological parameters, a thorough understanding of the relationship between trash fluxes and the hydrology of massive urban areas becomes possible. This work examines algorithm development, quantification of trash parameters, and hydrological measurements, followed by data assimilation into existing hydraulic and hydrological models of Jakarta. The insights afforded by such an approach allow for more efficient operation of hydraulic infrastructure, knowledge of when and where critical levels of trash originate, and opportunities for community outreach, which is ultimately needed to reduce the trash in the flood canals of Jakarta and megacities around the world.
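A minimal sketch of how time-lapse frames might be postprocessed into a trash-coverage signal, assuming a static camera and median-background subtraction; the abstract does not publish its algorithm, so the threshold and names here are hypothetical:

```python
import numpy as np

def debris_coverage(frames, thresh=30):
    """Per-frame fraction of pixels deviating from the median background.

    frames: (t, h, w) grayscale time-lapse stack of a canal surface.
    A crude proxy for floating-trash coverage; a fielded system would add
    blob tracking and surface flow speed to turn coverage into a flux.
    """
    background = np.median(frames.astype(float), axis=0)
    foreground = np.abs(frames.astype(float) - background) > thresh
    return foreground.mean(axis=(1, 2))
```

Median backgrounding works here because transient floating debris occupies any given pixel in only a minority of frames, so the per-pixel median recovers the water surface.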
NASA Technical Reports Server (NTRS)
Lewandowski, Leon; Struckman, Keith
1994-01-01
Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.
Pixel level optical-transfer-function design based on the surface-wave-interferometry aperture
Zheng, Guoan; Wang, Yingmin; Yang, Changhuei
2010-01-01
The design of the optical transfer function (OTF) is of significant importance for optical information processing in various imaging and vision systems. Typically, OTF design relies on a sophisticated bulk optical arrangement in the light path of the optical system. In this letter, we demonstrate a surface-wave-interferometry aperture (SWIA) that can be directly incorporated onto optical sensors to accomplish OTF design at the pixel level. The aperture design is based on a bull's-eye structure: a central hole (300 nm diameter) surrounded by a periodic groove (560 nm period) in a 340-nm-thick gold layer. We show, with both simulation and experiment, that different types of optical transfer functions (notch, high-pass, and low-pass filters) can be achieved by manipulating the interference between the direct transmission of the central hole and the surface-wave (SW) component induced by the periodic groove. Pixel-level OTF design provides a low-cost, ultra-robust, highly compact method for numerous applications such as optofluidic microscopy, wavefront detection, darkfield imaging, and computational photography. PMID:20721038
Invariant visual object recognition and shape processing in rats
Zoccolan, Davide
2015-01-01
Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that has been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies aimed at assessing how advanced object recognition and shape processing are in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide a historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421
Expedient range enhanced 3-D robot colour vision
NASA Astrophysics Data System (ADS)
Jarvis, R. A.
1983-01-01
Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.
Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement
Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.
2017-01-01
Purpose To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design Expert perspective. Methods An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975
Gauchard, G C; Jeandel, C; Perrin, P P
2001-01-01
Ageing is associated with a reduction in balance, in particular through dysfunction at each level of postural control, which results in an increased risk of falling. Conversely, the practice of physical activities has been shown to modulate postural control in elderly people. This study examined the potential positive effects of two types of regular physical and sporting activities on vestibular information and their relation to posture. Gaze and postural stabilisation were evaluated by caloric and rotational vestibular tests in 18 healthy subjects over the age of 60 who regularly practised low-energy or bioenergetic physical activities and in 18 controls of a similar age who only walked on a regular basis. These subjects also underwent static and dynamic posturographic tests. The control group displayed less balance control, with a lower vestibular sensitivity and a relatively high dependency on vision, compared to the group practising low-energy physical activities, which had better postural control with good vestibular sensitivity and less dependency on vision. The postural control and vestibular sensitivity of subjects practising bioenergetic activities were average and required a higher visual afferent contribution. Low-energy exercises, already shown to have the most positive impact on balance control by relying more on proprioception, also appear to develop or maintain a high level of vestibular sensitivity, allowing elderly people practising such exercises to reduce the weight of vision. Copyright 2001 S. Karger AG, Basel
The Role of Education and Rehabilitation Specialists in the Comprehensive Low Vision Care Process.
ERIC Educational Resources Information Center
Lueck, A. H.
1997-01-01
Outlines the contributions of education and rehabilitation specialists in maximizing specific skills, self-esteem, and quality of life of individuals with low vision. The role of these specialists in evaluating functional vision, teaching methods to compensate for impaired vision, and addressing psychosocial concerns is discussed. (Author/CR)
Paediatric Low-Vision Assessment and Management in a Specialist Clinic in the UK
ERIC Educational Resources Information Center
Lennon, Julie; Harper, Robert; Biswas, Sus; Lloyd, Chris
2007-01-01
This article presents a survey of the demographical, educational and visual functional characteristics of children attending a specialist paediatric low-vision assessment clinic at Manchester Royal Eye Hospital. Comprehensive data were collected retrospectively from children attending the paediatric low-vision clinic between January 2003 and…
Economics of cutting hardwood dimension parts with an automated system
Henry A. Huber; Steve Ruddell; Kalinath Mukherjee; Charles W. McMillin
1989-01-01
A financial analysis using discounted cash-flow decision methods was completed to determine the economic feasibility of replacing a conventional roughmill crosscut and rip operation with a proposed automated computer vision and laser cutting system. Red oak and soft maple lumber were cut at production levels of 30 thousand board feet (MBF)/day and 5 MBF/day to produce...
Helicopter flights with night-vision goggles: Human factors aspects
NASA Technical Reports Server (NTRS)
Brickner, Michael S.
1989-01-01
Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviator's Night Vision Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. They consist of light intensifier tubes, which amplify low-intensity ambient illumination (star and moon light), and an optical system, which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward-looking infrared, FLIR) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.
Vijaya, Lingam; George, Ronnie; Asokan, Rashima; Velumuri, Lokapavani; Ramesh, Sathyamangalam Ve
2014-04-01
To evaluate the prevalence and causes of low vision and blindness in an urban south Indian population. Population-based cross-sectional study. A total of 3850 subjects aged 40 years and above from Chennai city were examined at a dedicated facility in the base hospital. All subjects had a complete ophthalmic examination that included best-corrected visual acuity. Low vision and blindness were defined using World Health Organization (WHO) criteria. The influence of age, gender, literacy, and occupation was assessed using multiple logistic regression. Chi-square test, t-test, and multivariate analysis were used. Of the 4800 enumerated subjects, 3850 (1710 males, 2140 females) were examined (response rate, 80.2%). The prevalence of blindness was 0.85% (95% CI 0.6-1.1%) and was positively associated with age and illiteracy. Cataract was the leading cause of blindness (57.6%) and glaucoma the second (16.7%). The prevalence of low vision was 2.9% (95% CI 2.4-3.4%) and of visual impairment (blindness + low vision) 3.8% (95% CI 3.2-4.4%). The primary causes of low vision were refractive errors (68%) and cataract (22%). In this urban population-based study, cataract was the leading cause of blindness and refractive error the main cause of low vision.
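The reported prevalences and confidence intervals are consistent with a simple normal-approximation (Wald) interval. A sketch, assuming a blindness count of 33, the count implied by 0.85% of 3850 (the abstract gives only the percentage, so the case count is inferred):

```python
import math

def prevalence_ci(cases, n, z=1.96):
    # Normal-approximation (Wald) 95% confidence interval for a prevalence.
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), p + z * se

# 33 blind subjects out of 3850 examined, as implied by the reported 0.85%
p, lo, hi = prevalence_ci(33, 3850)
```

This reproduces an interval of roughly 0.6-1.1%, matching the abstract; for rare outcomes an exact or Wilson interval would be preferred in practice.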
Real time AI expert system for robotic applications
NASA Technical Reports Server (NTRS)
Follin, John F.
1987-01-01
A computer-controlled multi-robot process cell to demonstrate advanced technologies for the demilitarization of obsolete chemical munitions was developed. The methods through which the vision system and other sensory inputs were used by the artificial intelligence to provide the information required to direct the robots to complete the desired task are discussed. The mechanisms that the expert system uses to solve problems (goals), the different rule databases, and the methods for adapting this control system to any device that can be controlled or programmed through a high-level computer interface are also discussed.
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.
1996-01-01
The goal of this task was to create a design and prototype implementation of a database environment particularly suited for handling the image, vision, and scientific data associated with NASA's EOC Amazon project. The focus was on a data model and query facilities designed to execute efficiently on parallel computers. A key feature of the environment is an interface which allows a scientist to specify high-level directives about how query execution should occur.
The NASA Computational Fluid Dynamics (CFD) program - Building technology to solve future challenges
NASA Technical Reports Server (NTRS)
Richardson, Pamela F.; Dwoyer, Douglas L.; Kutler, Paul; Povinelli, Louis A.
1993-01-01
This paper presents the NASA Computational Fluid Dynamics program in terms of a strategic vision and goals as well as NASA's financial commitment and personnel levels. The paper also identifies the CFD program customers and the support to those customers. In addition, the paper discusses technical emphasis and direction of the program and some recent achievements. NASA's Ames, Langley, and Lewis Research Centers are the research hubs of the CFD program while the NASA Headquarters Office of Aeronautics represents and advocates the program.
AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data
NASA Astrophysics Data System (ADS)
Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin
2018-01-01
In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
Landmark navigation and autonomous landing approach with obstacle detection for aircraft
NASA Astrophysics Data System (ADS)
Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.
1997-06-01
A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors, such as gyros, accelerometers, an artificial horizon, aerodynamic measuring devices, and GPS, with vision data taken by conventional CCD cameras mounted on a pan-tilt platform, the position of the craft can be determined, as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark the vision system should focus on, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g., due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors were fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
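Two ideas at the core of such a system, fusing GPS and vision position estimates and gating them against each other to detect mistracking, can be sketched in one dimension. The actual system uses a full multi-sensor state estimator; these function names, the gate width, and the scalar setting are illustrative only:

```python
import math

def fuse(gps_pos, gps_var, vis_pos, vis_var):
    # Inverse-variance (static Kalman) fusion of two position estimates:
    # the less certain source gets the smaller weight.
    w = vis_var / (gps_var + vis_var)          # weight on the GPS estimate
    pos = w * gps_pos + (1 - w) * vis_pos
    var = gps_var * vis_var / (gps_var + vis_var)
    return pos, var

def mistracking(gps_pos, vis_pos, gps_var, vis_var, gate=3.0):
    # Supervisor-style consistency check: flag the vision solution when it
    # disagrees with GPS by more than `gate` combined standard deviations,
    # e.g. when visibility is poor and the tracker has locked onto clutter.
    return abs(gps_pos - vis_pos) > gate * math.sqrt(gps_var + vis_var)
```

On a flagged inconsistency the supervisor would, as the abstract describes, reinitialize the vision system or switch to another landmark rather than fuse the bad measurement.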
Robust algebraic image enhancement for intelligent control systems
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morrelli, Michael
1993-01-01
Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image capturing and the object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.
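The general idea of treating gray-level imagery as polynomials can be illustrated by a least-squares fit of a 2-D polynomial to a patch; this is a sketch of the generic technique, not the paper's specific representation, and the function name is hypothetical:

```python
import numpy as np

def fit_gray_poly(img, degree=2):
    """Least-squares fit of a 2-D polynomial to a gray-level patch.

    Fits sum over i+j <= degree of c_ij * x**i * y**j to the pixel values
    and returns the coefficients plus the polynomial reconstruction.
    Once a patch is in coefficient form, smoothing, differentiation, and
    other operators become simple algebra on the coefficients.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs, ys = xs.ravel(), ys.ravel()
    terms = [xs ** i * ys ** j
             for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(terms, axis=1)                      # design matrix
    coef, *_ = np.linalg.lstsq(A, img.ravel().astype(float), rcond=None)
    return coef, (A @ coef).reshape(h, w)
```

A patch that is itself polynomial (e.g. a linear ramp) is reconstructed exactly; real imagery is approximated, with the residual acting as a noise estimate.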
A dental vision system for accurate 3D tooth modeling.
Zhang, Li; Alemzadeh, K
2006-01-01
This paper describes a reverse engineering approach, based on an active vision system, to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems, to improve the accuracy of 3D teeth models and, at the same time, the quality of the construction units, helping patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted-objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented for the digitization of different teeth models.
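As a sketch of the edge-detection stage such a rig might use (the abstract does not specify its operator), here is a Sobel gradient-magnitude detector in plain numpy; the threshold and names are illustrative:

```python
import numpy as np

def sobel_edges(img, thresh=10.0):
    """Binary edge map from Sobel gradient magnitude.

    A stand-in for the (unspecified) edge-detection stage that precedes
    boundary tracing of tooth silhouettes.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient

    def conv3(a, k):
        # 3x3 correlation with edge-replicated padding
        pad = np.pad(a.astype(float), 1, mode='edge')
        out = np.zeros(a.shape, float)
        for dy in range(3):
            for dx in range(3):
                out += k[dy, dx] * pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    return np.hypot(conv3(img, kx), conv3(img, ky)) > thresh
```

The resulting binary map would then feed a boundary tracer (e.g. Moore-neighbour tracing) to produce the closed silhouette contours used for slicing.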
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
Aguilar, Mario; Peot, Mark A; Zhou, Jiangying; Simons, Stephen; Liao, Yuwei; Metwalli, Nader; Anderson, Mark B
2012-03-01
The mammalian visual system is still the gold standard for recognition accuracy, flexibility, efficiency, and speed. Ongoing advances in our understanding of function and mechanisms in the visual system can now be leveraged to pursue the design of computer vision architectures that will revolutionize the state of the art in computer vision.
Automated Grading of Rough Hardwood Lumber
Richard W. Conners; Tai-Hoon Cho; Philip A. Araman
1989-01-01
Any automatic hardwood grading system must have two components. The first of these is a computer vision system for locating and identifying defects on rough lumber. The second is a system for automatically grading boards based on the output of the computer vision system. This paper presents research results aimed at developing the first of these components. The...
Computer Vision Systems for Hardwood Logs and Lumber
Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners
1991-01-01
Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...
Quantification of color vision using a tablet display.
Chacon, Alicia; Rabin, Jeff; Yu, Dennis; Johnston, Shawn; Bradshaw, Timothy
2015-01-01
Accurate color vision is essential for optimal performance in aviation and space environments using nonredundant color coding to convey critical information. Most color tests detect color vision deficiency (CVD) but fail to diagnose type or severity of CVD, which are important to link performance to occupational demands. The computer-based Cone Contrast Test (CCT) diagnoses type and severity of CVD. It is displayed on a netbook computer for clinical application, but a more portable version may prove useful for deployments, space and aviation cockpits, as well as accident and sports medicine settings. Our purpose was to determine if the CCT can be conducted on a tablet display (Windows 8, Microsoft, Seattle, WA) using touch-screen response input. The CCT presents colored letters visible only to red (R), green (G), and blue (B) sensitive retinal cones to determine the lowest R, G, and B cone contrast visible to the observer. The CCT was measured in 16 color vision normals (CVN) and 16 CVDs using the standard netbook computer and a Windows 8 tablet display calibrated to produce equal color contrasts. Both displays showed 100% specificity for confirming CVN and 100% sensitivity for detecting CVD. In CVNs there was no difference between scores on netbook vs. tablet displays. G cone CVDs showed slightly lower G cone CCT scores on the tablet. CVD can be diagnosed with a tablet display. Ease-of-use, portability, and complete computer capabilities make tablets ideal for multiple settings, including aviation, space, military deployments, accidents and rescue missions, and sports vision. Chacon A, Rabin J, Yu D, Johnston S, Bradshaw T. Quantification of color vision using a tablet display.
Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf
2018-06-18
In forensic odontology, the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate identification of unknown people by comparison between antemortem and postmortem PRs using computer vision. The study includes 43 467 PRs from 24 545 patients (46% female/54% male). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) out of the whole database. Of 40 randomly selected persons, 34 (85%) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person and a maximum of 12 corresponding matching points for other, non-identical persons in the database. Hence, 12 matching points serve as the threshold for reliable assignment. Operating an automatic PR system with computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by virtue of its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past. The system seems to be robust for large amounts of data. · Computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification. · The present method is able to find identical matching partners among huge datasets (big data) in a short computing time. · The identification method is suitable even if dental characteristics were removed or added. · Heinrich A, Güttler F, Wendt S et al. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-4744. © Georg Thieme Verlag KG Stuttgart · New York.
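The matching logic described, counting corresponding feature points between two radiographs and applying the 12-point identity threshold, can be sketched generically. The study extracts SURF descriptors in Matlab; this numpy nearest-neighbour ratio-test matcher assumes descriptor extraction has already been done, and the ratio value is a conventional choice, not the study's:

```python
import numpy as np

IDENTITY_THRESHOLD = 12  # more matches than this => same person (study's value)

def count_ratio_matches(desc_a, desc_b, ratio=0.7):
    # Lowe-style ratio test: keep a match only when the best neighbour is
    # clearly closer than the second best. desc_*: (n, d) descriptor arrays
    # (e.g. 64-D SURF vectors from two panoramic radiographs); desc_b must
    # contain at least two descriptors for the second-best lookup.
    d = np.sqrt(((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1))
    order = np.argsort(d, axis=1)
    best = d[np.arange(len(desc_a)), order[:, 0]]
    second = d[np.arange(len(desc_a)), order[:, 1]]
    return int((best < ratio * second).sum())

def same_person(desc_a, desc_b):
    return count_ratio_matches(desc_a, desc_b) > IDENTITY_THRESHOLD
```

The ratio test suppresses ambiguous correspondences, which is why genuine same-person pairs (up to 259 points in the study) separate so cleanly from impostor pairs (at most 12).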
Evolution of attention mechanisms for early visual processing
NASA Astrophysics Data System (ADS)
Müller, Thomas; Knoll, Alois
2011-03-01
Early visual processing as a method to speed up computations on visual input data has long been discussed in the computer vision community. The general aim of such approaches is to filter nonrelevant information out before the costly higher-level visual processing algorithms run. By inserting this additional filter layer, the overall approach can be sped up without actually changing the visual processing methodology. Inspired by the layered architecture of the human visual processing apparatus, several approaches for early visual processing have recently been proposed. Most promising in this field is the extraction of a saliency map to determine regions of current attention in the visual field. Such saliency can be computed in a bottom-up manner: the theory claims that static regions of attention emerge from a certain color footprint, and dynamic regions of attention emerge from connected blobs of texture moving in a uniform way in the visual field. Top-down saliency effects are either unconscious, through inherent mechanisms like inhibition of return (i.e., within a period of time the attention level paid to a certain region automatically decreases if the properties of that region do not change), or volitional, through cognitive feedback (e.g., if an object moves consistently in the visual field). These bottom-up and top-down saliency effects were implemented and evaluated in a previous computer vision system for the project JAST. In this paper an extension applying evolutionary processes is proposed. The prior vision system utilized multiple threads to analyze the regions of attention delivered by the early processing mechanism. Here, in addition, multiple saliency units, each with a different parameter set, are used to produce these regions of attention. The idea is to let the population of saliency units create regions of attention, then evaluate the results with cognitive feedback, and finally apply the genetic mechanisms: mutation and cloning of the best performers and extinction of the worst performers with respect to the computation of regions of attention. A fitness function can be derived by evaluating whether relevant objects are found in the regions created. Various experiments show that the approach significantly speeds up visual processing, especially for robust real-time object recognition, compared to an approach not using saliency-based preprocessing. Furthermore, the evolutionary algorithm improves the overall quality of the preprocessing system, as it automatically and autonomously tunes the saliency parameters. The computational overhead produced by periodic clone/delete/mutate operations can be handled well within the real-time constraints of the experimental computer vision system. Nevertheless, limitations apply whenever the visual field does not contain significant saliency information for some time while the population still tries to tune the parameters; overfitting then prevents generalization, and the evolutionary process may be reset by manual intervention.
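The clone/mutate/extinction scheme described for tuning saliency parameters is a standard truncation-selection evolutionary loop. A minimal sketch on a toy scalar "parameter"; the function names, the population size, and the fitness function (in practice derived from whether relevant objects land inside the computed attention regions) are all hypothetical:

```python
import random

def evolve(population, fitness, mutate, generations=30, keep=0.5):
    """Truncation-selection loop of the kind the paper describes:
    clone and mutate the best performers, discard the worst, each generation."""
    size = len(population)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)        # rank by fitness
        survivors = population[:max(1, int(size * keep))]  # extinction of worst
        population = (survivors + [mutate(p) for p in survivors])[:size]
    return max(population, key=fitness)

# Toy usage: a scalar "saliency parameter" whose (hypothetical) optimum is 5.0.
random.seed(1)
best = evolve([0.0] * 8,
              fitness=lambda p: -abs(p - 5.0),
              mutate=lambda p: p + random.gauss(0.0, 0.5))
```

Because the survivors are carried over unmodified (elitism), the best fitness in the population never decreases, which also explains the paper's observation that the loop can only degrade when the scene offers no saliency signal to evaluate against.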
Development of a vision-targeted health-related quality of life item measure
Slotkin, Jerry; McKean-Cowdin, Roberta; Lee, Paul; Owsley, Cynthia; Vitale, Susan; Varma, Rohit; Gershon, Richard; Hays, Ron D.
2013-01-01
Purpose To develop a vision-targeted health-related quality of life (HRQOL) measure for the NIH Toolbox for the Assessment of Neurological and Behavioral Function. Methods We conducted a review of existing vision-targeted HRQOL surveys and identified color vision, low luminance vision, distance vision, general vision, near vision, ocular symptoms, psychosocial well-being, and role performance domains. Items in existing survey instruments were sorted into these domains. We selected non-redundant items and revised them to improve clarity and to limit the number of different response options. We conducted 10 cognitive interviews to evaluate the items. Finally, we revised the items and administered them to 819 individuals to calibrate the items and estimate the measure’s reliability and validity. Results The field test provided support for the 53-item vision-targeted HRQOL measure encompassing 6 domains: color vision, distance vision, near vision, ocular symptoms, psychosocial well-being, and role performance. The domain scores had high levels of reliability (coefficient alphas ranged from 0.848 to 0.940). Validity was supported by high correlations between National Eye Institute Visual Function Questionnaire scales and the new vision-targeted scales (highest values were 0.771 between psychosocial well-being and mental health, and 0.729 between role performance and role difficulties), and by lower mean scores in those groups self-reporting eye disease (F statistic with p < 0.01 for all comparisons except cataract with ocular symptoms, psychosocial well-being, and role performance scales). Conclusions This vision-targeted HRQOL measure provides a basis for comprehensive assessment of the impact of eye diseases and treatments on daily functioning and well-being in adults. PMID:23475688
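The coefficient alphas reported above (0.848 to 0.940) follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch, assuming per-item score lists of equal length (the item data here are illustrative, not from the field test):

```python
def cronbach_alpha(items):
    """Coefficient alpha for internal-consistency reliability.
    items: list of k per-item score lists, one score per respondent.
    Population variance is used throughout; since numerator and
    denominator use the same n, the sample-variance convention
    would give the identical ratio."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Illustrative: two items that track each other closely score high alpha.
example = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
```

Domain scores in the 0.85-0.94 range, as reported, indicate that the items within each domain measure a common underlying construct.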
Basic quantitative assessment of visual performance in patients with very low vision.
Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert
2010-02-01
A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed in four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.
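In a forced-choice paradigm like the one above, a patient's score must be compared against the guessing rate (1/2 for the two-alternative modules, 1/4 for the four-alternative ones). A minimal sketch of such an above-chance check via a one-sided binomial tail; this is an illustration of the general statistical idea, not the scoring procedure actually used in BaLM:

```python
from math import comb

def above_chance_p(correct, trials, alternatives):
    """One-sided binomial p-value: the probability of scoring at
    least `correct` of `trials` purely by guessing among
    `alternatives` equally likely options."""
    p = 1.0 / alternatives
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# E.g., 7 correct out of 10 four-alternative trials is unlikely by
# chance alone, so the module would register genuine discrimination.
p_value = above_chance_p(7, 10, 4)
```

A small p-value means the responses cannot plausibly be explained by guessing, which is how a module can discriminate, say, LP from NLP even when every response is forced.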