Sample records for artificial vision system

  1. Dynamical Systems and Motion Vision.

    DTIC Science & Technology

    1988-04-01

    MASSACHUSETTS INSTITUTE OF TECHNOLOGY, ARTIFICIAL INTELLIGENCE LABORATORY, 545 Technology Square, Cambridge, MA 02139. A.I. Memo No. 1037, April 1988. Dynamical Systems and Motion Vision, Joachim Heel. Abstract: In this ... Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research is ...

  2. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  3. Distant touch hydrodynamic imaging with an artificial lateral line.

    PubMed

    Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang

    2006-12-12

    Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.
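
    The dipole source localization reported above can be illustrated with a toy model. The sketch below is not the authors' method: it simulates a linear array of flow sensors, assumes dipole pressure amplitude decays with the cube of distance (a common near-field approximation), and recovers the source position by matching normalized signatures over a grid. All positions, spacings, and noise levels are illustrative.

```python
import numpy as np

# Hypothetical forward model: a dipole at (src_x, src_depth) produces a pressure
# amplitude at sensor position x that decays with the cube of the distance.
def dipole_signature(sensor_x, src_x, src_depth):
    r = np.sqrt((sensor_x - src_x) ** 2 + src_depth ** 2)
    return 1.0 / r ** 3

# A linear array of 16 flow sensors spanning 15 cm (illustrative values).
sensors = np.linspace(0.0, 0.15, 16)

# Simulated measurement from a dipole at x = 0.06 m, depth 0.02 m, plus noise.
rng = np.random.default_rng(0)
true_x, true_depth = 0.06, 0.02
measured = dipole_signature(sensors, true_x, true_depth)
measured += 1e-3 * rng.standard_normal(sensors.size)

# Grid search: pick the (x, depth) whose normalized signature best matches the
# normalized measurement (the amplitude scale is unknown, so compare shapes).
def localize(measured, sensors, xs, depths):
    m = measured / np.linalg.norm(measured)
    best, best_err = None, np.inf
    for x in xs:
        for d in depths:
            sig = dipole_signature(sensors, x, d)
            sig /= np.linalg.norm(sig)
            err = np.sum((m - sig) ** 2)
            if err < best_err:
                best, best_err = (x, d), err
    return best

est_x, est_depth = localize(measured, sensors,
                            np.linspace(0.0, 0.15, 61),
                            np.linspace(0.005, 0.05, 46))
```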

  4. Computing Visible-Surface Representations.

    DTIC Science & Technology

    1985-03-01

    Terzopoulos. Contract N00014-75-C-0643. PERFORMING ORGANIZATION NAME AND ADDRESS: Artificial Intelligence Laboratory ... Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research is provided in part by the Advanced Research Projects ... dynamically maintaining visible surface representations. Whether the intention is to model human vision or to design competent artificial vision systems ...

  5. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
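
    The pixelation step described above amounts to block-averaging the camera frame down to the electrode-array geometry. The sketch below is a minimal illustration that assumes a hypothetical 6 × 10 electrode array; a real AVS(2) pipeline additionally enhances contrast and edges before this stage.

```python
import numpy as np

# Block-average a high-resolution grayscale frame down to the (hypothetical)
# rows x cols geometry of an epi-retinal electrode array: one value per electrode.
def pixelate(frame, rows, cols):
    h, w = frame.shape
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = frame[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            out[i, j] = block.mean()
    return out

frame = np.zeros((480, 640))
frame[:, 320:] = 255.0            # bright right half: a high-contrast edge
stim = pixelate(frame, 6, 10)     # stimulation pattern, one value per electrode
```

Even at this crude resolution, the high-contrast edge survives the resampling, which is why edge and contrast preservation matter more than texture detail.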

  6. An early underwater artificial vision model in ocean investigations via independent component analysis.

    PubMed

    Nian, Rui; Liu, Fang; He, Bo

    2013-07-16

    Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework has been established to explore and understand the functional roles of the higher order statistical structures of the visual stimulus in the underwater artificial vision system. The model is inspired by characteristics of the early human vision system such as modality, redundancy reduction, sparseness and independence, which seem respectively to capture the Gabor-like basis functions, the shape contours or the complicated textures in the multiple layer implementations. The simulation results have shown good performance in the effectiveness and the consistency of the proposed approach for the underwater images collected by autonomous underwater vehicles (AUVs).
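
    The core ICA computation can be sketched independently of the paper's hierarchical architecture. The example below is a minimal single-layer FastICA (centering, whitening, then tanh fixed-point iterations with deflation) that recovers two independent sources from two synthetic mixtures; the signals and mixing matrix are illustrative, not taken from the paper.

```python
import numpy as np

# Two independent synthetic sources: a square wave and a sinusoid.
rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 2000)
s1 = np.sign(np.sin(1.1 * t))
s2 = np.sin(3.7 * t)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # "unknown" mixing matrix
X = A @ S                               # observed mixtures

# Center and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / X.shape[1]
d, E = np.linalg.eigh(cov)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# FastICA fixed-point iterations with the tanh nonlinearity, using deflation
# so the second unmixing vector is decorrelated from the first.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(w @ Z)
        gp = 1.0 - g ** 2
        w_new = (Z * g).mean(axis=1) - gp.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from earlier rows
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-9
        w = w_new
        if converged:
            break
    W[i] = w

recovered = W @ Z   # rows approximate the sources up to sign and order
```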

  7. An Early Underwater Artificial Vision Model in Ocean Investigations via Independent Component Analysis

    PubMed Central

    Nian, Rui; Liu, Fang; He, Bo

    2013-01-01

    Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework has been established to explore and understand the functional roles of the higher order statistical structures of the visual stimulus in the underwater artificial vision system. The model is inspired by characteristics of the early human vision system such as modality, redundancy reduction, sparseness and independence, which seem respectively to capture the Gabor-like basis functions, the shape contours or the complicated textures in the multiple layer implementations. The simulation results have shown good performance in the effectiveness and the consistency of the proposed approach for the underwater images collected by autonomous underwater vehicles (AUVs). PMID:23863855

  8. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  9. Argus II retinal prosthesis system: a review of patient selection criteria, surgical considerations, and post-operative outcomes.

    PubMed

    Finn, Avni P; Grewal, Dilraj S; Vajzovic, Lejla

    2018-01-01

    Retinitis pigmentosa (RP) is a group of heterogeneous inherited retinal degenerative disorders characterized by progressive rod and cone dysfunction and ensuing photoreceptor loss. Many patients suffer from legal blindness by their 40s or 50s. Artificial vision is considered once patients have lost all vision to the point of bare light perception or no light perception. The Argus II retinal prosthesis system is one such artificial vision device approved for patients with RP. This review focuses on the factors important for patient selection. Careful pre-operative screening, counseling, and management of patient expectations are critical for the successful implantation and visual rehabilitation of patients with the Argus II device.

  10. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  11. [The application and development of artificial intelligence in medical diagnosis systems].

    PubMed

    Chen, Zhencheng; Jiang, Yong; Xu, Mingyu; Wang, Hongyan; Jiang, Dazong

    2002-09-01

    This paper reviews the development of artificial intelligence in medical practice and in medical diagnostic expert systems, and summarizes applications of artificial neural networks. A source of difficulty in medical diagnostic systems is the co-existence of multiple, potentially inter-related diseases. A further difficulty of image-based expert systems is inherent to high-level vision, which increases the complexity of expert systems for medical imaging. Finally, prospects for the development of artificial intelligence in medical-image expert systems are discussed.

  12. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In conventional tool positioning technique, sensors embedded in the motion stages provide the accurate tool position information. In this paper, a machine vision based system and image processing technique for motion measurement of lathe tool from two-dimensional sequential images captured using charge coupled device camera having a resolution of 250 microns has been described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely, artificial immune system (AIS) and particle swarm optimization (PSO). The results show better capability of AIS over PSO.
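
    The PSO half of the comparison can be sketched compactly. The code below is a generic particle swarm optimizer applied to a stand-in quadratic "calibration error" surface; the error function, bounds, and swarm parameters are illustrative assumptions, not the paper's actual vision-system error model.

```python
import random

# Stand-in error surface: find the (scale, offset) correction that minimizes
# the residual between measured and true tool displacement. The optimum
# (1.02, 0.35) is arbitrary and chosen only for this demo.
def calibration_error(p):
    scale, offset = p
    return (scale - 1.02) ** 2 + (offset - 0.35) ** 2

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    rnd = random.Random(42)
    dim = len(bounds)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # per-particle best positions
    pbest_f = [f(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < f(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(calibration_error, [(0.5, 1.5), (0.0, 1.0)])
```

An AIS variant would replace the velocity update with clonal selection and hypermutation of the best antibodies, which is the mechanism the paper credits for its better results.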

  13. Active vision in satellite scene analysis

    NASA Technical Reports Server (NTRS)

    Naillon, Martine

    1994-01-01

    In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic, high-level processing (the decision level). This paper describes the concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation, oriented toward flexible robotics, is given.

  14. An overview of artificial intelligence and robotics. Volume 1: Artificial intelligence. Part B: Applications

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1983-01-01

    Artificial Intelligence (AI) is an emerging technology that has recently attracted considerable attention. Many applications are now under development. This report, Part B of a three part report on AI, presents overviews of the key application areas: Expert Systems, Computer Vision, Natural Language Processing, Speech Interfaces, and Problem Solving and Planning. The basic approaches to such systems, the state-of-the-art, existing systems and future trends and expectations are covered.

  15. Quality Control by Artificial Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.

    2010-01-01

    Computational technology has fundamentally changed many aspects of our lives. One clear piece of evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Like the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements that foster quality control by artificial vision, as well as fine-tuning of the technology for specific applications. We aimed to balance theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with the wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results of applying Fourier phase matching to projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers valuable. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first one, by Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occlusion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. The second paper, by Oswald-Tranta et al., focuses on thermographic crack detection. An infrared camera is used to detect inhomogeneities, which may indicate surface cracks. They describe the various steps in developing fully automated testing equipment aimed at high throughput. Another paper describing an inspection system is Molleda et al., which handles flatness inspection of rolled products. They employ optical-laser triangulation and 3-D surface reconstruction for this task, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes, achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely that any reader is working on all four of these specific problems at the same time, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.

  16. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  17. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control; these tools are being applied to robot navigation.

  18. Artificial Intelligence.

    ERIC Educational Resources Information Center

    Wash, Darrel Patrick

    1989-01-01

    Making a machine seem intelligent is not easy. As a consequence, demand has been rising for computer professionals skilled in artificial intelligence and is likely to continue to go up. These workers develop expert systems and solve the mysteries of machine vision, natural language processing, and neural networks. (Editor)

  19. Artificial Intelligence: Underlying Assumptions and Basic Objectives.

    ERIC Educational Resources Information Center

    Cercone, Nick; McCalla, Gordon

    1984-01-01

    Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…

  20. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    NASA Astrophysics Data System (ADS)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on visual odometry. Results of an experiment identifying errors in the calculated distance traveled under wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board Raspberry Pi 3 computer. Results of an experiment on mobile robot navigation with this control system are presented.
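
    The correction idea can be illustrated with a minimal sketch: dead-reckoned coordinates accumulate slip error, and observing an artificial landmark with a known map position lets the robot re-anchor its estimate. The 2% slip factor, the landmark table, and the camera measurement below are hypothetical, not values from the paper.

```python
# Map of artificial landmarks: landmark id -> known absolute (x, y) position.
LANDMARKS = {17: (2.0, 0.0), 42: (4.0, 0.0)}

class Odometry:
    def __init__(self):
        self.x, self.y = 0.0, 0.0

    def advance(self, dx, dy):
        # Wheel encoders over-report distance under slip; modeled as +2% error.
        self.x += dx * 1.02
        self.y += dy * 1.02

    def correct(self, landmark_id, offset):
        # offset = landmark position relative to the robot, as measured by the
        # vision system; since the landmark's absolute position is known, the
        # robot's absolute position follows directly.
        lx, ly = LANDMARKS[landmark_id]
        self.x = lx - offset[0]
        self.y = ly - offset[1]

odo = Odometry()
for _ in range(20):
    odo.advance(0.1, 0.0)        # drive 2 m in 0.1 m steps; estimate drifts
drift = abs(odo.x - 2.0)         # accumulated dead-reckoning error (~0.04 m)
odo.correct(17, (0.0, 0.0))      # camera sees landmark 17 at zero offset
```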

  1. A Starter's Guide to Artificial Intelligence.

    ERIC Educational Resources Information Center

    McConnell, Barry A.; McConnell, Nancy J.

    1988-01-01

    Discussion of the history and development of artificial intelligence (AI) highlights a bibliography of introductory books on various aspects of AI, including AI programing; problem solving; automated reasoning; game playing; natural language; expert systems; machine learning; robotics and vision; critics of AI; and representative software. (LRW)

  2. Theory underlying the peripheral vision horizon device

    NASA Technical Reports Server (NTRS)

    Money, K. E.

    1984-01-01

    Peripheral Vision Horizon Device (PVHD) theory states that the likelihood of pilot disorientation in flight is reduced by an artificial horizon that supplies orientation information to peripheral vision. In considering the validity of the theory, three areas are explored: the use of an artificial horizon device over some other flight instrument; the use of peripheral vision over foveal vision; and the evidence that peripheral vision is well suited to the processing of orientation information.

  3. An imaging system based on laser optical feedback for fog vision applications

    NASA Astrophysics Data System (ADS)

    Belin, E.; Boucher, V.

    2008-08-01

    The Laboratoire Régional des Ponts et Chaussées d'Angers (LRPC) is currently studying the feasibility of applying an optical technique, based on the principle of laser optical feedback, to long-distance fog vision. The optical feedback setup allows the creation of images of road signs. To create artificial fog conditions we used a vibrating cell that produces a micro-spray of water according to the principle of acoustic cavitation. To scale the sensitivity of the system under reproducible conditions we also used optical densities linked to first-sight visibility distances. The current system produces, in a few seconds, 200 × 200 pixel images of a road sign seen through dense artificial fog.

  4. Artificial Intelligence and the High School Computer Curriculum.

    ERIC Educational Resources Information Center

    Dillon, Richard W.

    1993-01-01

    Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…

  5. Micro-optical artificial compound eyes.

    PubMed

    Duparré, J W; Wippermann, F C

    2006-03-01

    Natural compound eyes combine small eye volumes with a large field of view at the cost of comparatively low spatial resolution. For small invertebrates such as flies or moths, compound eyes are the perfectly adapted solution to obtaining sufficient visual information about their environment without overloading their brains with the necessary image processing. However, to date little effort has been made to adopt this principle in optics. Classical imaging has always had its archetype in natural single-aperture eyes, on which, for example, human vision is based. But a high-resolution image is not always required; often the focus is on very compact, robust and cheap vision systems. The main question is consequently: what is the better approach for extremely miniaturized imaging systems, just scaling of classical lens designs, or taking inspiration from alternative imaging principles evolved by nature in small insects? In this paper, it is shown that such optical systems can be achieved using state-of-the-art micro-optics technology. This enables the generation of highly precise and uniform microlens arrays and their accurate alignment to the subsequent optics, spacer, and optoelectronics structures. The results are thin, simple and monolithic imaging devices fabricated with the high accuracy of photolithography. Two different artificial compound eye concepts for compact vision systems have been investigated in detail: the artificial apposition compound eye and the cluster eye. Novel optical design methods and characterization tools were developed to allow the layout and experimental testing of the planar micro-optical imaging systems, which were fabricated for the first time by micro-optics technology. The artificial apposition compound eye can be considered a simple imaging optical sensor, while the cluster eye is capable of becoming a valid alternative to classical bulk objectives but is much more complex than the first system.

  6. Receptive fields and the theory of discriminant operators

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Hungenahally, Suresh K.

    1991-02-01

    The biological basis of machine vision is a notion used extensively in the development of machine vision systems for various applications. In this paper we attempt to emulate the receptive fields that exist in biological visual channels. In particular, we exploit the notion of receptive fields to develop mathematical functions, named discriminant functions, for the extraction of transition information from one-dimensional and multi-dimensional signals and images. These functions are found to be useful for the development of artificial receptive fields for neuro-vision systems.
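
    A center-surround receptive field of the kind emulated above can be sketched as a difference of Gaussians acting as a transition detector: it responds near a signal edge and stays silent on flat regions. The kernel widths and test signal below are illustrative, not the paper's actual discriminant functions.

```python
import numpy as np

# Difference-of-Gaussians kernel: a narrow excitatory center minus a wide
# inhibitory surround. Both lobes are normalized so the kernel sums to zero,
# making the operator insensitive to constant (flat) input.
def dog_kernel(size=21, sigma_c=1.0, sigma_s=3.0):
    x = np.arange(size) - size // 2
    center = np.exp(-x ** 2 / (2 * sigma_c ** 2))
    surround = np.exp(-x ** 2 / (2 * sigma_s ** 2))
    return center / center.sum() - surround / surround.sum()

signal = np.zeros(200)
signal[100:] = 1.0                      # a single step transition at index 100
response = np.convolve(signal, dog_kernel(), mode="same")
```

The response is large only in a neighborhood of the transition and numerically zero on both flat plateaus, which is exactly the "transition information" a discriminant operator is meant to extract.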

  7. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  8. The 1991 Goddard Conference on Space Applications of Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1991-01-01

    The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in this proceeding fall into the following areas: Planning and scheduling, fault monitoring/diagnosis/recovery, machine vision, robotics, system development, information management, knowledge acquisition and representation, distributed systems, tools, neural networks, and miscellaneous applications.

  9. Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense

    DTIC Science & Technology

    1983-10-28

    Computing. By seizing an opportunity to leverage recent advances in artificial intelligence, computer science, and microelectronics, the Agency plans ... occurred in many separate areas of artificial intelligence, computer science, and microelectronics. Advances in 'expert system' technology now ... and expert knowledge. Advances in Artificial Intelligence: mechanization of speech recognition, vision, and natural language understanding.

  10. A neural network based artificial vision system for licence plate recognition.

    PubMed

    Draghici, S

    1997-02-01

    This paper presents a neural network based artificial vision system able to analyze the image of a car given by a camera, locate the registration plate and recognize the registration number of the car. The paper describes in detail various practical problems encountered in implementing this particular application and the solutions used to solve them. The main features of the system presented are: controlled stability-plasticity behavior, a controlled reliability threshold, both off-line and on-line learning, self-assessment of output reliability, and high reliability based on high-level multiple feedback. The system has been designed using a modular approach. Sub-modules can be upgraded and/or substituted independently, thus making the system potentially suitable for a large variety of vision applications. The OCR engine was designed as an interchangeable plug-in module. This allows the user to choose an OCR engine suited to the particular application and to upgrade it easily in the future. At present, there are several versions of this OCR engine. One of them is based on a fully connected feedforward artificial neural network with sigmoidal activation functions. This network can be trained with various training algorithms such as error backpropagation. An alternative OCR engine is based on the constraint based decomposition (CBD) training architecture. The system has shown the following performances (on average) on real-world data: successful plate location and segmentation about 99%, successful character recognition about 98% and successful recognition of complete registration plates about 80%.
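
    The sigmoidal feedforward OCR engine trained by error backpropagation can be illustrated at toy scale. The network below (9 inputs, 4 hidden units, 1 output) learns to separate two hypothetical 3 × 3 binary glyphs; the glyphs and dimensions are illustrative and far smaller than a real plate-recognition network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two toy 3x3 "characters": a vertical bar (class 0) and a ring (class 1).
bar  = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float)
ring = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1], dtype=float)
X = np.vstack([bar, ring])
y = np.array([[0.0], [1.0]])

rng = np.random.default_rng(0)
W1 = rng.standard_normal((9, 4)) * 0.5   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)) * 0.5   # hidden -> output weights
b2 = np.zeros(1)

lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backprop through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)        # backprop through hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```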

  11. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  12. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to an automated vision based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
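
    The edge-detection stage mentioned above can be sketched with a Sobel gradient filter implemented directly in NumPy; the synthetic "obstacle" image and the threshold are illustrative stand-ins, not the paper's data.

```python
import numpy as np

# Sobel kernels for horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

# Valid-mode 2-D correlation (kernel flip is irrelevant for gradient magnitude).
def convolve2d(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.full((40, 40), 60.0)     # uniform "grass" background
img[10:30, 10:30] = 200.0         # bright square "obstacle"

gx = convolve2d(img, SOBEL_X)
gy = convolve2d(img, SOBEL_Y)
edges = np.hypot(gx, gy) > 100.0  # boolean edge map: True along the boundary
```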

  13. Development of a Laparoscopic Box Trainer Based on Open Source Hardware and Artificial Intelligence for Objective Assessment of Surgical Psychomotor Skills.

    PubMed

    Alonso-Silverio, Gustavo A; Pérez-Escamirosa, Fernando; Bruno-Sanchez, Raúl; Ortiz-Simon, José L; Muñoz-Guerrero, Roberto; Minor-Martinez, Arturo; Alarcón-Paredes, Antonio

    2018-05-01

    A trainer for online laparoscopic surgical skills assessment based on the performance of experts and nonexperts is presented. The system uses computer vision, augmented reality, and artificial intelligence algorithms, implemented on a Raspberry Pi board with the Python programming language. Two training tasks were evaluated by the laparoscopic system: transferring and pattern cutting. Computer vision libraries were used to obtain the number of transferred points and the simulated pattern cutting trace by means of tracking of the laparoscopic instrument. An artificial neural network (ANN) was trained to learn from experts' and nonexperts' behavior in the pattern cutting task, whereas the assessment of the transferring task was performed using a preestablished threshold. Four expert surgeons in laparoscopic surgery, from hospital "Raymundo Abarca Alarcón," constituted the experienced class for the ANN. Sixteen trainees (10 medical students and 6 residents) without laparoscopic surgical skills and limited experience in minimal invasive techniques from the School of Medicine at Universidad Autónoma de Guerrero constituted the nonexperienced class. Data from participants performing 5 daily repetitions of each task during 5 days were used to build the ANN. The participants tend to improve their learning curve and dexterity with this laparoscopic training system. The classifier shows a mean accuracy of 90.98% and an area under the receiver operating characteristic curve of 0.93. Moreover, the ANN was able to classify the psychomotor skills of users into 2 classes: experienced or nonexperienced. We constructed and evaluated an affordable laparoscopic trainer system using computer vision, augmented reality, and an artificial intelligence algorithm. The proposed trainer has the potential to increase the self-confidence of trainees and to be applied to programs with limited resources.
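    As a rough illustration of the expert/non-expert classification idea, a single logistic unit can be trained on two hypothetical task metrics; the paper's actual ANN architecture, features and data are not reproduced here, and the numbers below are invented.

```python
# Illustrative stand-in for the experienced/nonexperienced classifier:
# one logistic unit trained by gradient descent on two made-up task
# metrics (e.g. normalized completion time, normalized path length).
import math

def train(samples, labels, lr=0.1, epochs=2000):
    w = [0.0, 0.0]; b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                              # gradient of log-loss
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Features scaled to ~[0, 1]; label 1 = experienced (fast, short path).
data   = [(0.2, 0.3), (0.25, 0.2), (0.8, 0.9), (0.9, 0.85)]
labels = [1, 1, 0, 0]
w, b = train(data, labels)
print([predict(w, b, x) for x in data])
```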

  14. Home Camera-Based Fall Detection System for the Elderly.

    PubMed

    de Miguel, Koldo; Brunete, Alberto; Hernando, Miguel; Gambao, Ernesto

    2017-12-09

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%.
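    A minimal sketch of the first algorithm listed here, background subtraction, plus one crude posture feature; the real detector combines several algorithms and a trained machine learning model, so the toy frames and the 1.5 aspect-ratio threshold below are purely illustrative.

```python
# Toy background subtraction followed by a crude "fallen posture"
# feature: the bounding-box aspect ratio of the moving region.
# All values are illustrative, not from the paper.

def subtract_background(frame, background, thresh=20):
    """Binary motion mask: pixels that differ from the background model."""
    return [[1 if abs(f - b) > thresh else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def aspect_ratio(mask):
    """Width/height of the bounding box of foreground pixels."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return 0.0
    return (max(xs) - min(xs) + 1) / (max(ys) - min(ys) + 1)

background = [[10] * 10 for _ in range(10)]
# A "standing" person: tall thin blob; a "fallen" one: wide flat blob.
standing = [row[:] for row in background]
for y in range(1, 9):
    standing[y][4] = 200
fallen = [row[:] for row in background]
for x in range(1, 9):
    fallen[5][x] = 200

for frame in (standing, fallen):
    r = aspect_ratio(subtract_background(frame, background))
    print("fall" if r > 1.5 else "ok", round(r, 2))
```

    The published system feeds several such per-frame features (from background subtraction, Kalman filtering and optical flow) into a learned classifier instead of a fixed threshold.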

  15. Home Camera-Based Fall Detection System for the Elderly

    PubMed Central

    de Miguel, Koldo

    2017-01-01

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%. PMID:29232846

  16. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    PubMed

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

    The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirement. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. Artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.
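    The abstract's pipeline reduces to: extract image features, train a model, predict a grade. As a stand-in for the paper's support vector machine, a nearest-centroid classifier on two invented features (e.g. mean redness, marbling fraction) shows the shape of the computation:

```python
# Schematic of the pipeline: image features -> trained model -> grade.
# A nearest-centroid classifier stands in for the paper's SVM;
# feature values and class names are invented.

def centroids(samples, labels):
    sums = {}
    for x, y in zip(samples, labels):
        s, n = sums.get(y, ([0.0] * len(x), 0))
        sums[y] = ([a + b for a, b in zip(s, x)], n + 1)
    return {y: [a / n for a in s] for y, (s, n) in sums.items()}

def classify(cents, x):
    return min(cents, key=lambda y: sum((a - b) ** 2
                                        for a, b in zip(cents[y], x)))

train_x = [(0.8, 0.1), (0.75, 0.15), (0.5, 0.4), (0.45, 0.5)]
train_y = ["pale", "pale", "dark", "dark"]
model = centroids(train_x, train_y)
print(classify(model, (0.78, 0.12)), classify(model, (0.48, 0.45)))
```

    The real system extracts 107 features (color, marbling, texture) per image and trains a support vector machine on graded samples.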

  17. Computer vision in cell biology.

    PubMed

    Danuser, Gaudenz

    2011-11-23

    Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Vertically integrated photonic multichip module architecture for vision applications

    NASA Astrophysics Data System (ADS)

    Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong

    2000-05-01

    The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia-based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.

  19. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats.

    PubMed

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-10-12

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal's retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.

  20. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats

    PubMed Central

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-01-01

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal’s retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals. PMID:27731346

  1. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats

    NASA Astrophysics Data System (ADS)

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-10-01

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal’s retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.

  2. Northeast Artificial Intelligence Consortium Annual Report - 1988 Parallel Vision. Volume 9

    DTIC Science & Technology

    1989-10-01

    supports the Northeast Artificial Intelligence Consortium (NAIC). Volume 9 Parallel Vision Report submitted by Christopher M. Brown Randal C. Nelson...NORTHEAST ARTIFICIAL INTELLIGENCE CONSORTIUM ANNUAL REPORT - 1988 Parallel Vision Syracuse University Christopher M. Brown and Randal C. Nelson...Technical Director Directorate of Intelligence & Reconnaissance FOR THE COMMANDER: IGOR G. PLONISCH Directorate of Plans & Programs If your address has

  3. An artificial elementary eye with optic flow detection and compositional properties.

    PubMed

    Pericet-Camara, Ramon; Dobrzynski, Michal K; Juston, Raphaël; Viollet, Stéphane; Leitel, Robert; Mallot, Hanspeter A; Floreano, Dario

    2015-08-06

    We describe a 2 mg artificial elementary eye whose structure and functionality is inspired by compound eye ommatidia. Its optical sensitivity and electronic architecture are sufficient to generate the required signals for the measurement of local optic flow vectors in multiple directions. Multiple elementary eyes can be assembled to create a compound vision system of desired shape and curvature spanning large fields of view. The system configurability is validated with the fabrication of a flexible linear array of artificial elementary eyes capable of extracting optic flow over multiple visual directions. © 2015 The Author(s).
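    The local optic flow measurement mentioned here is classically modelled by a delay-and-correlate (Reichardt) detector; a minimal version with synthetic photoreceptor signals, not the authors' actual circuitry, looks like this:

```python
# Minimal delay-and-correlate (Reichardt-style) motion detector, the
# classic model behind insect-eye optic flow units. Two photoreceptor
# signals; the sign of the correlation gives local flow direction.
# The signals below are synthetic.

def reichardt(left, right, delay=1):
    """Correlate each signal with the delayed opposite-side signal."""
    n = len(left) - delay
    fwd = sum(left[t] * right[t + delay] for t in range(n))
    bwd = sum(right[t] * left[t + delay] for t in range(n))
    return fwd - bwd   # > 0: motion left->right, < 0: right->left

# A bright pulse passes the left receptor one step before the right one.
left  = [0, 1, 0, 0, 0]
right = [0, 0, 1, 0, 0]
print(reichardt(left, right))   # positive: left-to-right motion
```

    An array of such units, one per pair of adjacent artificial ommatidia, yields the multi-directional local optic flow vectors the abstract describes.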

  4. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.

  5. Artificial Intelligence/Robotics Applications to Navy Aircraft Maintenance.

    DTIC Science & Technology

    1984-06-01

    other automatic machinery such as presses, molding machines, and numerically-controlled machine tools, just as people do. A-36...Robotics Technologies 3 B. Relevant AI Technologies 4 1. Expert Systems 4 2. Automatic Planning 4 3. Natural Language 5 4. Machine Vision...building machines that imitate human behavior. Artificial intelligence is concerned with the functions of the brain, whereas robotics include, in

  6. An investigation into the perceptual embodiment of an artificial hand using transcutaneous electrical nerve stimulation (TENS) in intact-limbed individuals.

    PubMed

    Mulvey, Matthew; Fawkner, Helen; Johnson, Mark I

    2014-01-01

    Perceptual embodiment of an artificial limb aids manual control of prostheses and can be facilitated by somatosensory feedback. We hypothesised that transcutaneous electrical nerve stimulation (TENS) may facilitate perceptual embodiment of artificial limbs. To determine the effect of TENS on perceptual embodiment of an artificial hand in 32 intact-limbed participants. Participants were exposed to four experimental conditions in four counterbalanced blocks: (i) Vision (V), watching an artificial hand positioned congruently to the real hand (out of view); (ii) Vision and strong non-painful TENS in the real hand (V+T); (iii) Vision and Stroking (V+S) of the artificial and real hands with a brush; (iv) Vision, Stroking and TENS (V+S+T), watching the artificial hand being stroked whilst the real hand was stroked and receiving TENS. Repeated measures ANOVA detected effects for Condition (P < 0.001), Block (P < 0.001) and a Condition x Block interaction (P < 0.001). Pairwise comparisons detected more intense perceptual embodiment for V+S+T compared with V (P < 0.001) and V+T (P < 0.001), and for V+S compared with V (P < 0.001) and V+T (P < 0.001). The intensity of perceptual embodiment increased for later blocks (P < 0.001). A sensation of TENS was generated within the artificial hand in individuals with intact limbs and this facilitated perceptual embodiment. The magnitude of effect was modest.

  7. A critical review on the applications of artificial neural networks in winemaking technology.

    PubMed

    Moldes, O A; Mejuto, J C; Rial-Otero, R; Simal-Gandara, J

    2017-09-02

    Since their development in 1943, artificial neural networks have been extended into applications in many fields. The last twenty years have brought their introduction into winemaking, where they have been applied for four basic purposes: authenticity assurance systems, electronic sensory devices, production optimization methods, and artificial vision in image treatment tools, with successful and promising results. This work reviews the most significant approaches for neural networks in winemaking technologies with the aim of producing a clear and useful review document.

  8. The future of radiology augmented with Artificial Intelligence: A strategy for success.

    PubMed

    Liew, Charlene

    2018-05-01

    The rapid development of Artificial Intelligence/deep learning technology and its implementation into routine clinical imaging will cause a major transformation to the practice of radiology. Strategic positioning will ensure the successful transition of radiologists into their new roles as augmented clinicians. This paper describes an overall vision on how to achieve a smooth transition through the practice of augmented radiology where radiologists-in-the-loop ensure the safe implementation of Artificial Intelligence systems. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. High-Level Vision and Planning Workshop Proceedings

    DTIC Science & Technology

    1989-08-01

    Correspondence in Line Drawings of Multiple Views. In Proc. of 8th Intern. Joint Conf. on Artificial Intelligence, 1983. [63] Tomiyasu, K. Tutorial...joint U.S.-Israeli workshop on artificial intelligence are provided in this Institute for Defense Analyses document. This document is based on a broad...participants is provided along with applicable references for individual papers. 14. SUBJECT TERMS 15. NUMBER OF PAGES Artificial Intelligence; Machine Vision

  10. Modeling and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanham, R.; Vogt, W.G.; Mickle, M.H.

    1986-01-01

    This book presents the papers given at a conference on computerized simulation. Topics considered at the conference included expert systems, modeling in electric power systems, power systems operating strategies, energy analysis, a linear programming approach to optimum load shedding in transmission systems, econometrics, simulation in natural gas engineering, solar energy studies, artificial intelligence, vision systems, hydrology, multiprocessors, and flow models.

  11. Research and applications: Artificial intelligence

    NASA Technical Reports Server (NTRS)

    Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.

    1971-01-01

    Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.

  12. System of error detection in the manufacture of garments using artificial vision

    NASA Astrophysics Data System (ADS)

    Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.

    2017-12-01

    A computer vision system is implemented to detect errors in the cutting stage within the manufacturing process of garments in the textile industry. It provides a solution for errors within the process that cannot be easily detected by an employee, in addition to significantly increasing the speed of quality review. In the textile industry, as in many others, quality control of manufactured products is required, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage within the garment manufacturing process and thereby increase the productivity of textile processes by reducing costs.
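    One plausible core for such a cutting-stage check, not necessarily the authors' implementation, is to compare the binary silhouette of a cut piece against its pattern template and flag pieces whose overlap (intersection-over-union) falls below a tolerance; the masks and the 0.95 tolerance below are illustrative.

```python
# Compare a cut piece's binary silhouette against its pattern template
# and flag it when intersection-over-union drops below a tolerance.
# Masks and the 0.95 tolerance are illustrative.

def iou(a, b):
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 1.0

template = [[1 if 1 <= x <= 6 else 0 for x in range(8)] for _ in range(8)]
good_cut = [row[:] for row in template]
bad_cut  = [row[:] for row in template]
for y in range(4):            # a corner missing from the cut piece
    for x in range(4, 7):
        bad_cut[y][x] = 0

for name, cut in (("good", good_cut), ("bad", bad_cut)):
    score = iou(template, cut)
    print(name, round(score, 2), "PASS" if score >= 0.95 else "FAIL")
```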

  13. Artificial organs: recent progress in artificial hearing and vision.

    PubMed

    Ifukube, Tohru

    2009-01-01

    Artificial sensory organs are a prosthetic means of sending visual or auditory information to the brain by electrical stimulation of the optic or auditory nerves to assist visually impaired or hearing-impaired people. However, clinical application of artificial sensory organs, except for cochlear implants, is still a trial-and-error process. This is because how and where the information transmitted to the brain is processed is still unknown, and also because changes in brain function (plasticity) remain unknown, even though brain plasticity plays an important role in meaningful interpretation of new sensory stimuli. This article discusses some basic unresolved issues and potential solutions in the development of artificial sensory organs such as cochlear implants, brainstem implants, artificial vision, and artificial retinas.

  14. Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system

    NASA Astrophysics Data System (ADS)

    Hanna, Moheb M.; Buck, A. A.; Smith, R.

    1994-10-01

    The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick and place robot, vision system, CNC-milling machine and 3 conveyors. The work illustrates how the block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (Fuzzy logic with Petri nets) including an artificial neural network (Fuzzy Neural Petri nets) to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control the imprecise, vague and uncertain situations, and determine the quality of the output product of an FMS cell.

  15. Institute for Aviation Research and Development Research Project

    DTIC Science & Technology

    1989-01-01

    Symbolics Artificial Intelligence * Vision Systems * Finite Element Modeling (NASTRAN) * Aerodynamic Paneling (VSAERO) Projects: * Software..."Wall Functions for k and epsilon for Turbulent Flow Through Rough and Smooth Pipes," Eleventh International Symposium on Turbulence, October 17-19, 1988

  16. Comparative randomised controlled clinical trial of a herbal eye drop with artificial tear and placebo in computer vision syndrome.

    PubMed

    Biswas, N R; Nainiwal, S K; Das, G K; Langan, U; Dadeya, S C; Mongre, P K; Ravi, A K; Baidya, P

    2003-03-01

    A comparative randomised double masked multicentric clinical trial has been conducted to find out the efficacy and safety of a herbal eye drop preparation, itone eye drops, with artificial tear and placebo in 120 patients with computer vision syndrome. Patients using computer for at least 2 hours continuously per day having symptoms of irritation, foreign body sensation, watering, redness, headache, eyeache and signs of conjunctival congestion, mucous/debris, corneal filaments, corneal staining or lacrimal lake were included in this study. Every patient was instructed to put two drops of either herbal drugs or placebo or artificial tear in the eyes regularly four times for 6 weeks. Objective and subjective findings were recorded at bi-weekly intervals up to six weeks. Side-effects, if any, were also noted. In computer vision syndrome the herbal eye drop preparation was found significantly better than artificial tear (p < 0.01). No side-effects were noted by any of the drugs. Both subjective and objective improvements were observed in itone treated cases. So, itone can be considered as a useful drug in computer vision syndrome.

  17. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-01-01

    This book presents the papers given at a conference on artificial intelligence, expert systems and knowledge bases. Topics considered at the conference included automating expert system development, modeling expert systems, causal maps, data covariances, robot vision, image processing, multiprocessors, parallel processing, VLSI structures, man-machine systems, human factors engineering, cognitive decision analysis, natural language, computerized control systems, and cybernetics.

  18. The software for automatic creation of the formal grammars used by speech recognition, computer vision, editable text conversion systems, and some new functions

    NASA Astrophysics Data System (ADS)

    Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan

    2017-02-01

    To make environmental perception by artificial intelligence more flexible, supporting software modules are needed that can automate the creation of language-specific syntax and perform further analysis for relevant decisions based on semantic functions. Under our proposed approach, pairs of formal rules can be created for given sentences (in the case of natural languages) or statements (in the case of special languages) with the help of a computer vision, speech recognition or editable text conversion system, for further automatic improvement. In other words, we have developed an approach that can significantly improve the automation of the training process of artificial intelligence, which as a result will give it a higher level of self-developing skills, independent of us (the users). Based on this approach we have developed a demo version of the software, which includes the algorithm and code implementing all of the above-mentioned components (computer vision, speech recognition and editable text conversion). The program can work in a multi-stream mode and simultaneously create a syntax based on information received from several sources.
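    A toy version of the automatic rule-creation step can be sketched as extracting context-free productions from part-of-speech-tagged sentences; the tags and sentences below are invented, and the real system takes its input from speech recognition or computer vision front-ends.

```python
# Derive context-free grammar productions from tagged sentences:
# one sentence-level rule per tag sequence, plus lexical rules.
# Corpus and tag set are invented for illustration.

def extract_rules(tagged_sentences):
    rules = set()
    for sent in tagged_sentences:
        tags = [tag for _, tag in sent]
        rules.add(("S", tuple(tags)))      # sentence-level production
        for word, tag in sent:             # lexical productions
            rules.add((tag, (word,)))
    return rules

corpus = [
    [("the", "DET"), ("robot", "N"), ("moves", "V")],
    [("a", "DET"), ("camera", "N"), ("rotates", "V")],
]
for lhs, rhs in sorted(extract_rules(corpus)):
    print(lhs, "->", " ".join(rhs))
```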

  19. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
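    The resolution-versus-segmentation tradeoff described here is just a product, total pixels = (number of images) x (pixels per image); the short script below tabulates the regimes the abstract sketches, using its order-of-magnitude figures.

```python
# Tabulate total pixel counts for the vision-system regimes the
# abstract sketches: total pixels = number of images x pixels/image.

regimes = [
    ("mammalian eyes", 2,      10**7),  # two images, ~10^7 pixels each
    ("insect eye",     10**4,  1),      # ~10^4 images, ~1 pixel each
    ("sensor network", 10**5,  10**4),  # the "useful system" proposed
    ("phone cameras",  10**9,  10**6),  # ~10^9 one-megapixel cameras
]
for name, n_images, px in regimes:
    print(f"{name}: {n_images * px:.1e} total pixels")
```

    The comparison makes the point concrete: a swarm of modest imagers can rival or exceed the aggregate pixel count of a single high-resolution camera.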

  20. Artificial Intelligence Techniques for Automatic Screening of Amblyogenic Factors

    PubMed Central

    Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard

    2008-01-01

    Purpose To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. Methods In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. Results The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the “gold standard” specialist examination with a “refer/do not refer” decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as well as significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than −7. Conclusions Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years. PMID:19277222

  2. Directly Comparing Computer and Human Performance in Language Understanding and Visual Reasoning.

    ERIC Educational Resources Information Center

    Baker, Eva L.; And Others

    Evaluation models are being developed for assessing artificial intelligence (AI) systems in terms of similar performance by groups of people. Natural language understanding and vision systems are the areas of concentration. In simplest terms, the goal is to norm a given natural language system's performance on a sample of people. The specific…

  3. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and, unlike traditional contact measurements, adds no mass to the measured object. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated in a laboratory experiment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
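A common way such fast subpixel refinement works is a three-point parabola (second-order Taylor) fit around the integer-pixel correlation peak; this is a generic sketch of the idea, not the paper's exact algorithm:

```python
def parabolic_subpixel(c_minus, c_peak, c_plus):
    """Refine an integer correlation-peak location with a 3-point
    parabola fit over the peak sample and its two neighbours.
    Returns the fractional offset of the true peak, in (-0.5, 0.5)."""
    denom = c_minus - 2.0 * c_peak + c_plus
    if denom == 0:  # flat correlation: no refinement possible
        return 0.0
    return 0.5 * (c_minus - c_plus) / denom
```

Because it needs only three samples already computed by the integer-pixel search, this kind of closed-form refinement avoids the FFT upsampling that makes conventional upsampled cross-correlation slow.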

  4. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  5. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by complementing fast open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex-vivo tissue. This system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. A new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.

  6. Landmark navigation and autonomous landing approach with obstacle detection for aircraft

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.

    1997-06-01

    A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, artificial horizon, aerodynamic measuring devices and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined, as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark the vision system should focus on, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g., due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles such as trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.

  7. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    NASA Astrophysics Data System (ADS)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. A number of factors make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; in fact, appearance can differ significantly among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Second, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes, which range from hardwood flooring to fancy hardwood furniture, from simple millwork to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper describes the vision system that has been developed, assesses its current capabilities, and discusses directions for future research. It is argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  8. A novel method of robot location using RFID and stereo vision

    NASA Astrophysics Data System (ADS)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

    This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which enables a robot to obtain global coordinates with good accuracy while quickly adapting to new, unfamiliar environments. The method uses RFID tags as artificial landmarks; each tag's 3D coordinate in the global coordinate system is written in its IC memory, which the robot reads through an RFID reader. Meanwhile, using stereo vision, the tag's 3D coordinate in the robot coordinate system is measured. Combined with the robot's attitude transformation matrix from the pose measuring system, the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of the method is 0.11 m in experiments conducted in a 7 m × 7 m lobby; this result is much more accurate than that of other localization methods.
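The frame composition described above reduces, for one tag, to subtracting the rotated stereo measurement from the tag's stored global coordinate. A 2D sketch under that reading (the paper works in 3D with a full attitude matrix):

```python
import math

def locate_robot(tag_global, tag_robot, heading_rad):
    """Global robot position from one RFID tag: the tag's stored global
    coordinate minus the stereo-measured tag offset rotated into the
    global frame. 2D sketch; heading_rad plays the role of the paper's
    attitude transformation matrix."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    # rotate the robot-frame measurement into the global frame
    gx = c * tag_robot[0] - s * tag_robot[1]
    gy = s * tag_robot[0] + c * tag_robot[1]
    return (tag_global[0] - gx, tag_global[1] - gy)
```

With several visible tags, averaging the per-tag estimates would suppress the stereo measurement noise that dominates the reported 0.11 m error.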

  9. Fourth Conference on Artificial Intelligence for Space Applications

    NASA Technical Reports Server (NTRS)

    Odell, Stephen L. (Compiler); Denton, Judith S. (Compiler); Vereen, Mary (Compiler)

    1988-01-01

    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.

  10. An artificial vision solution for reusing discarded parts resulted after a manufacturing process

    NASA Astrophysics Data System (ADS)

    Cohal, V.; Cohal, A.

    2016-08-01

    The profit of a factory can be improved by reusing the discarded components it produces. This paper is based on the case of a manufacturing process in which rectangular metallic sheets of different sizes are produced. Using an artificial vision system, the shapes and sizes of the produced parts can be determined; sheets that do not meet the imposed requirements are labeled as discarded. Instead of throwing these parts away, a decision algorithm can analyze whether another metallic sheet with smaller dimensions can be obtained from them. Two decision methods are presented in this paper, under the restriction that the sides of the new sheet have to be parallel with the axes of the coordinate system. The coordinates of each new part obtained from a discarded sheet are computed in order to be delivered to a milling machine. Details are given about implementing these algorithms (image processing and decision, respectively) in the MATLAB environment using the Image Processing Toolbox.
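The fit-check at the heart of such a decision algorithm, under the stated axis-parallel restriction, is a simple containment test. This is a simplified sketch; the paper's methods additionally compute the cut coordinates for the milling machine:

```python
def can_recover(sheet_w, sheet_h, part_w, part_h):
    """Can a part_w x part_h sheet be cut from a discarded
    sheet_w x sheet_h blank, keeping all sides axis-parallel?
    The 90-degree reorientation is the only rotation that keeps
    the sides parallel to the coordinate axes, so both placements
    are checked."""
    return (part_w <= sheet_w and part_h <= sheet_h) or \
           (part_h <= sheet_w and part_w <= sheet_h)
```

A full decision step would run this test against the catalogue of producible sizes, preferring the candidate that wastes the least material.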

  11. Quality detection system and method of micro-accessory based on microscopic vision

    NASA Astrophysics Data System (ADS)

    Li, Dongjie; Wang, Shiwei; Fu, Yu

    2017-10-01

    Traditional manual inspection of micro-accessories suffers from heavy workload, low efficiency and large human error, so a quality inspection system for micro-accessories has been designed. Microscopic vision technology is used to inspect quality, which optimizes the structure of the detection system. A stepper motor drives a rotating micro-platform that transfers the device under inspection, and the microscopic vision system captures graphic information about the micro-accessory. Image processing and pattern matching, a variable-scale Sobel differential edge detection algorithm, and an improved Zernike-moment subpixel edge detection algorithm are combined in the system in order to achieve more detailed and accurate detection of defect edges. The proposed system accurately extracts even complex edge signals, and can thus distinguish qualified from unqualified products with high recognition precision.
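The variable-scale Sobel step builds on the standard 3×3 Sobel differential operator. A pure-Python sketch of the gradient magnitude at a single pixel (the paper's variable-scale and Zernike-moment refinements are not reproduced here):

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, r, c):
    """Gradient magnitude at interior pixel (r, c) of a 2D intensity
    list; large values flag candidate edge (and defect-edge) pixels."""
    gx = sum(KX[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    gy = sum(KY[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    return (gx * gx + gy * gy) ** 0.5
```

Thresholding this magnitude gives the coarse edge map that subpixel methods such as Zernike-moment fitting then refine.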

  12. An analog VLSI chip emulating polarization vision of Octopus retina.

    PubMed

    Momeni, Massoud; Titus, Albert H

    2006-01-01

    Biological systems provide a wealth of information which forms the basis for human-made artificial systems. In this work, the visual system of Octopus is investigated and its polarization sensitivity mimicked. While in the actual Octopus retina polarization vision is mainly based on the orthogonal arrangement of its photoreceptors, our implementation uses a birefringent micropolarizer made of YVO4 and mounted on a CMOS chip with neuromorphic circuitry to process linearly polarized light. Arranged in an 8 × 5 array with two photodiodes per pixel, each typically consuming 10 μW, this circuitry mimics both the functionality of individual Octopus retina cells, by computing the state of polarization, and the interconnection of these cells, through a bias-controllable resistive network.

  13. Mobility and orientation aid for blind persons using artificial vision

    NASA Astrophysics Data System (ADS)

    Costa, Gustavo; Gusberti, Adrián; Graffigna, Juan Pablo; Guzzo, Martín; Nasisi, Oscar

    2007-11-01

    Blind or vision-impaired persons are limited in their normal life activities. Mobility and orientation of blind persons is an ever-present research subject because no total solution has yet been reached for these activities that pose certain risks for the affected persons. The current work presents the design and development of a device conceived on capturing environment information through stereoscopic vision. The images captured by a couple of video cameras are transferred and processed by configurable and sequential FPGA and DSP devices that issue action signals to a tactile feedback system. Optimal processing algorithms are implemented to perform this feedback in real time. The components selected permit portability; that is, to readily get used to wearing the device.

  14. Organic-based tristimuli colorimeter

    NASA Astrophysics Data System (ADS)

    Antognazza, M. R.; Scherf, U.; Monti, P.; Lanzani, G.

    2007-04-01

    The authors realize three photodiodes based on organic materials whose photoresponse curves are matched to the colorimetric functions of the standard observer. Such a system of detectors is used to realize a tristimulus colorimeter. They report the results of measurements in different spectral areas and suggest possible applications of the device in color science and artificial vision.

  15. A light-stimulated synaptic device based on graphene hybrid phototransistor

    NASA Astrophysics Data System (ADS)

    Qin, Shuchao; Wang, Fengqiu; Liu, Yujie; Wan, Qing; Wang, Xinran; Xu, Yongbing; Shi, Yi; Wang, Xiaomu; Zhang, Rong

    2017-09-01

    Neuromorphic chips refer to an unconventional computing architecture that is modelled on biological brains. They are increasingly employed for processing sensory data for machine vision, context cognition, and decision making. Despite rapid advances, neuromorphic computing has remained largely an electronic technology, making it a challenge to access the superior computing features provided by photons, or to directly process vision data that has increasing importance to artificial intelligence. Here we report a novel light-stimulated synaptic device based on a graphene-carbon nanotube hybrid phototransistor. Significantly, the device can respond to optical stimuli in a highly neuron-like fashion and exhibits flexible tuning of both short- and long-term plasticity. These features combined with the spatiotemporal processability make our device a capable counterpart to today’s electrically-driven artificial synapses, with superior reconfigurable capabilities. In addition, our device allows for generic optical spike processing, which provides a foundation for more sophisticated computing. The silicon-compatible, multifunctional photosensitive synapse opens up a new opportunity for neural networks enabled by photonics and extends current neuromorphic systems in terms of system complexities and functionalities.

  16. Optoelectronic instrumentation enhancement using data mining feedback for a 3D measurement system

    NASA Astrophysics Data System (ADS)

    Flores-Fuentes, Wendy; Sergiyenko, Oleg; Gonzalez-Navarro, Félix F.; Rivas-López, Moisés; Hernandez-Balbuena, Daniel; Rodríguez-Quiñonez, Julio C.; Tyrsa, Vera; Lindner, Lars

    2016-12-01

    3D measurement by a cyber-physical system based on optoelectronic scanning instrumentation has been enhanced by outlier-detection and regression data mining feedback. The prototype has applications in (1) industrial manufacturing systems, including robotic machinery, embedded vision, and motion control; (2) health care systems, for measurement scanning; and (3) infrastructure, by providing structural health monitoring. This paper presents new research on data processing of a 3D measurement vision sensing database. Outliers in the multivariate data have been detected and removed to improve the results of an artificial intelligence regression algorithm, and regression on the physical measurement errors has been used to correct the 3D measurements. It is concluded that joining physical phenomena, measurement, and computation is an effective way to build feedback loops for the control of industrial, medical, and civil tasks.
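One minimal version of the outlier-plus-regression feedback described above: discard samples whose raw measurement error is a z-score outlier, then fit a linear correction by least squares. This is an illustrative sketch with a hypothetical threshold, not the paper's algorithm:

```python
def calibrate(measured, reference, z_thresh=2.5):
    """Fit a linear correction reference ~ a*measured + b after dropping
    samples whose raw error is a z-score outlier. Returns (a, b,
    n_removed). Sketch only; z_thresh is a hypothetical tuning choice."""
    errs = [r - m for m, r in zip(measured, reference)]
    mu = sum(errs) / len(errs)
    sd = (sum((e - mu) ** 2 for e in errs) / len(errs)) ** 0.5 or 1.0
    kept = [(m, r) for m, r, e in zip(measured, reference, errs)
            if abs(e - mu) <= z_thresh * sd]
    # ordinary least squares on the surviving samples
    xs, ys = zip(*kept)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx, len(errs) - len(kept)
```

The fitted (a, b) pair is then applied to subsequent scans, closing the feedback loop between measurement and computation.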

  17. Stereo Image Ranging For An Autonomous Robot Vision System

    NASA Astrophysics Data System (ADS)

    Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven

    1985-12-01

    The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the location of each point in three-dimensional space can then be calculated from the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means of applying heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
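Once corresponding features are registered, ranging reduces to triangulation. For a rectified camera pair the familiar relation is Z = f·B/d (a generic stereo formula, independent of the feature-extraction technique the paper proposes):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Range of a matched point in a rectified stereo rig: Z = f*B/d.
    focal_px: focal length in pixels; baseline_m: camera separation in
    metres; disparity_px: horizontal offset of the match in pixels."""
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px
```

The inverse dependence on disparity is why range accuracy degrades quadratically with distance, making reliable feature registration the critical step.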

  18. Proceedings of the 1986 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-01-01

    This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.

  19. Proactive learning for artificial cognitive systems

    NASA Astrophysics Data System (ADS)

    Lee, Soo-Young

    2010-04-01

    Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference, and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning (PL) and Self-Identity (SI). The PL model provides bilateral interactions between the robot and an unknown environment (people, other robots, cyberspace). Situation awareness in an unknown environment requires receiving audiovisual signals and accumulating knowledge; if its knowledge is insufficient, the PL system should improve itself through the internet and other sources. Human-oriented decision making also requires the robot to have self-identity and emotion. Finally, the developed models and system will be mounted on a robot for a society in which humans and robots coexist. The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans at several levels of cognitive ability.

  20. Automation and robotics for Space Station in the twenty-first century

    NASA Technical Reports Server (NTRS)

    Willshire, K. F.; Pivirotto, D. L.

    1986-01-01

    Space Station telerobotics will evolve beyond the initial capability into a smarter and more capable system as we enter the twenty-first century. Current technology programs including several proposed ground and flight experiments to enable development of this system are described. Advancements in the areas of machine vision, smart sensors, advanced control architecture, manipulator joint design, end effector design, and artificial intelligence will provide increasingly more autonomous telerobotic systems.

  2. Deep learning-based artificial vision for grasp classification in myoelectric hands

    NASA Astrophysics Data System (ADS)

    Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush

    2017-06-01

    Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects, reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects’ performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.

  3. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
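For the quadratic case discussed above, the network computation amounts to descending an energy of the form E(u) = Σᵢ mᵢ(uᵢ − dᵢ)² + λ Σᵢ(uᵢ₊₁ − uᵢ)². A 1D gradient-descent sketch of surface interpolation from sparse data (the nonconvex, discontinuity-preserving extension that is the paper's contribution is omitted):

```python
def reconstruct(data, mask, lam=1.0, rate=0.05, iters=6000):
    """Gradient descent on the 1D membrane energy
        E(u) = sum_i mask[i]*(u[i]-data[i])**2 + lam*sum_i (u[i+1]-u[i])**2,
    mimicking the analog network dynamics du/dt = -dE/du for the
    quadratic (no-discontinuity) case. mask[i] is 1 where data exists."""
    n = len(data)
    u = [0.0] * n
    for _ in range(iters):
        g = [0.0] * n
        for i in range(n):
            if mask[i]:                      # data-fidelity term
                g[i] += 2.0 * (u[i] - data[i])
            if i + 1 < n:                    # smoothness (membrane) term
                d = u[i + 1] - u[i]
                g[i] -= 2.0 * lam * d
                g[i + 1] += 2.0 * lam * d
        u = [ui - rate * gi for ui, gi in zip(u, g)]
    return u
```

In the analog network each uᵢ is a node voltage and the smoothness term a resistive coupling, so this descent happens in parallel at circuit time constants rather than in an iteration loop.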

  4. Flying by Ear: Blind Flight with a Music-Based Artificial Horizon

    NASA Technical Reports Server (NTRS)

    Simpson, Brian D.; Brungart, Douglas S.; Dallman, Ronald C.; Yasky, Richard J., Jr.; Romigh, Griffin

    2008-01-01

    Two experiments were conducted in actual flight operations to evaluate an audio artificial horizon display that imposed aircraft attitude information on pilot-selected music. The first experiment examined a pilot's ability to identify, with vision obscured, a change in aircraft roll or pitch, with and without the audio artificial horizon display. The results suggest that the audio horizon display improves the accuracy of attitude identification overall, but differentially affects response time across conditions. In the second experiment, subject pilots performed recoveries from displaced aircraft attitudes using either standard visual instruments, or, with vision obscured, the audio artificial horizon display. The results suggest that subjects were able to maneuver the aircraft to within its safety envelope. Overall, pilots were able to benefit from the display, suggesting that such a display could help to improve overall safety in general aviation.

  5. Teaching Children Thinking. Artificial Intelligence Memo Number 247.

    ERIC Educational Resources Information Center

    Papert, Seymour

    It is possible to maintain a vision of a technologically oriented educational system which is grander than the current one in which new gadgets are used to teach the old material in a thinly disguised old way. Educational innovation, particularly when computers are included, can find better things for children to do and better ways for the child…

  6. Using Intelligent System Approaches for Simulation of Electricity Markets

    NASA Astrophysics Data System (ADS)

    Hamagami, Tomoki

    The significance of, and approaches to, applying intelligent systems to artificial electricity markets are discussed. In recent years, with the restructuring of the electric power system in Japan, deregulation of the electricity market has been progressing. The most significant change in the market is the founding of JEPX (Japan Electric Power eXchange), which is expected to help lower power bills through effective use of surplus electricity. The electricity market mediates the exchange of electric power between electric power suppliers (supplier agents). In the market, the goal of each supplier agent is to maximize its revenue over the entire trading period, and the resulting behavior is complex, which can be modeled on a multiagent platform. Multiagent simulations of this kind, which have been studied as "artificial markets", help to predict spot prices, to plan investments, and to discuss market rules. Moreover, intelligent system approaches allow more reasonable policies to be constructed for each agent. This article first gives a brief summary of the electricity market in Japan and of artificial market studies. Then a survey of typical studies of artificial electricity markets is given. Through these topics, a future vision for such studies is presented.

  7. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. The emphasis is therefore on the artificial intelligence aspects of conceptual design rather than on structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. The system combines numerical and symbolic processing by the computer with interactive problem solving aided by the user's vision: it integrates a knowledge base interface and inference engine, a data base interface, and graphics, while keeping the knowledge base and data base files separate. The system writes a file that can be input into a structural synthesis system, which combines structural analysis and optimization.

  8. The Potential of Artificial Intelligence in Aids for the Disabled.

    ERIC Educational Resources Information Center

    Boyer, John J.

    The paper explores the possibilities for applying the knowledge of artificial intelligence (AI) research to aids for the disabled. Following a definition of artificial intelligence, the paper reviews areas of basic AI research, such as computer vision, machine learning, and planning and problem solving. Among application areas relevant to the…

  9. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection.

    PubMed

    Hayhoe, Mary M; Matthis, Jonathan Samir

    2018-08-06

    The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.

  10. Design and development of a ferroelectric micro photo detector for the bionic eye

    NASA Astrophysics Data System (ADS)

    Song, Yang

    Driven by the lack of an effective therapy for Retinitis Pigmentosa and Age-Related Macular Degeneration, the Bionic Eye addresses artificial vision through the development of an artificial retina that can be implanted into the human eye. This dissertation focuses on the study of a photoferroelectric micro photo detector as an implantable retinal prosthesis for vision restoration in patients with the above disorders. The implant uses an electrical signal to trigger the appropriate ocular cells of the vision system without resorting to wiring or electrode implantation. The research work includes fabrication of photoferroelectric thin film micro detectors, characterization of these photoferroelectric micro devices as photovoltaic cells, and Finite Element Method (FEM) modeling of the photoferroelectrics and their device-neuron interface. A ferroelectric micro detector exhibiting the photovoltaic effect (PVE) directly adds electrical potential to the outer wall of the neuron membrane at the focal adhesion regions. The electrical potential then generates a retinal cell membrane potential deflection through a newly developed Direct-Electric-Field-Coupling (DEFC) model. This model is quite different from the traditional electric current model because, instead of current working directly on the cell membrane, the PVE current is used to generate a localized high electric potential in the focal adhesion region by working together with the anisotropic high internal impedance of ferroelectric thin films. General electrodes and silicon photodetectors do not have such anisotropy and high impedance, and thus they cannot generate DEFC. This investigation of the mechanism is valuable because it clearly shows that our artificial retina works in a way that is totally different from traditional current stimulation methods.

  11. Third Conference on Artificial Intelligence for Space Applications, part 2

    NASA Technical Reports Server (NTRS)

    Denton, Judith S. (Compiler); Freeman, Michael S. (Compiler); Vereen, Mary (Compiler)

    1988-01-01

    Topics relative to the application of artificial intelligence to space operations are discussed. New technologies for space station automation, design data capture, computer vision, neural nets, automatic programming, and real time applications are discussed.

  12. Eliminating chromatic aberration of lens and recognition of thermal images with artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Fang, Yi-Chin; Wu, Bo-Wen; Lin, Wei-Tang; Jon, Jen-Liung

    2007-11-01

    Resolution and color are the two main axes along which an optical digital image is measured, but improving the overall image quality of an optical system is difficult because optical system design is constrained by size, materials, and environment. It is therefore important to raise the recognition capability for images blurred by aberrations and noise, or degraded, as far as human vision is concerned, by long distance and small targets, using artificial intelligence techniques such as genetic algorithms and neural networks, while decreasing the chromatic aberration of the optical system and without adding complex calculation to the image processing. This study aims to improve, in an integrated, economical, and effective way, the recognition and classification of low-quality images produced by the optical system and its environment.

  13. IEEE 1982. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-01-01

    The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.

  14. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.
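
    The trade-off the abstract describes, discarding detail the eye barely notices to reduce bit rate, can be sketched generically (this is an illustrative assumption, not one of the paper's two techniques, which the abstract does not name): coarse quantization removes detail, then run-length encoding exploits the resulting repetition.

```python
# Hypothetical minimal lossy pipeline: quantize 8-bit pixels, run-length encode.
def compress(pixels, step=32):
    q = [p // step for p in pixels]               # lossy: fewer gray levels
    runs = []
    for v in q:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1                      # extend the current run
        else:
            runs.append([v, 1])                   # start a new run
    return runs

def decompress(runs, step=32):
    # reconstruct each level at its bin centre; quantization error remains
    return [v * step + step // 2 for v, n in runs for _ in range(n)]

row = [10, 12, 14, 200, 201, 199, 198, 40]
runs = compress(row)                              # 8 pixels -> 3 runs
```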

  15. Modeling selective attention using a neuromorphic analog VLSI device.

    PubMed

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.
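
    The select-and-shift behaviour the chip implements in analog circuitry can be imitated in a few lines of software; the following winner-take-all sketch with inhibition of return is an illustrative assumption, not the chip's circuit:

```python
# Hedged sketch: repeatedly pick the most salient location (winner-take-all),
# then suppress its neighbourhood (inhibition of return) so attention shifts.
def attend(saliency, n_shifts, radius=1):
    sal = [row[:] for row in saliency]            # work on a copy
    rows, cols = len(sal), len(sal[0])
    fixations = []
    for _ in range(n_shifts):
        r, c = max(((i, j) for i in range(rows) for j in range(cols)),
                   key=lambda rc: sal[rc[0]][rc[1]])   # the winner
        fixations.append((r, c))
        for i in range(max(0, r - radius), min(rows, r + radius + 1)):
            for j in range(max(0, c - radius), min(cols, c + radius + 1)):
                sal[i][j] = 0                     # inhibit the winner's patch
    return fixations

saliency = [[9, 0, 0, 5],
            [0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 7]]
fixations = attend(saliency, 3)                   # visits peaks 9, 7, then 5
```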

  16. A Two-Century-Old Vision for the Future.

    ERIC Educational Resources Information Center

    Fuchs, Ira H.

    1988-01-01

    Discusses the necessity of acquiring and developing technological advances for use in the classroom to provide a vision for the future. Topics discussed include microcomputers; workstations; software; networks; cooperative endeavors in industry and academia; artificial intelligence; and the necessity for financial support. (LRW)

  17. Robot vision system programmed in Prolog

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Hack, Ralf

    1995-10-01

    This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)

  18. The Artificial Silicon Retina in Retinitis Pigmentosa Patients (An American Ophthalmological Association Thesis)

    PubMed Central

    Chow, Alan Y.; Bittner, Ava K.; Pardue, Machelle T.

    2010-01-01

    Purpose: In a published pilot study, a light-activated microphotodiode-array chip, the artificial silicon retina (ASR), was implanted subretinally in 6 retinitis pigmentosa (RP) patients for up to 18 months. The ASR electrically induced retinal neurotrophic rescue of visual acuity, contrast, and color perception and raised several questions: (1) Would neurotrophic effects develop and persist in additionally implanted RP patients? (2) Could vision in these patients be reliably assessed? (3) Would the ASR be tolerated and function for extended periods? Methods: Four additional RP patients were implanted and observed along with the 6 pilot patients. Of the 10 patients, 6 had vision levels that allowed for more standardized testing and were followed up for 7+ years utilizing ETDRS charts and a 4-alternative forced choice (AFC) Chow grating acuity test (CGAT). A 10-AFC Chow color test (CCT) extended the range of color vision testing. Histologic examination of the eyes of one patient, who died of an unrelated event, was performed. Results: The ASR was well tolerated, and improvement and/or slowing of vision loss occurred in all 6 patients. CGAT extended low vision acuity testing by logMAR 0.6. CCT expanded the range of color vision testing and correlated well with PV-16 (r = 0.77). An ASR recovered from a patient 5 years after implantation showed minor disruption and excellent electrical function. Conclusion: ASR-implanted RP patients experienced prolonged neurotrophic rescue of vision. CGAT and CCT extended the range of acuity and color vision testing in low vision patients. ASR implantation may improve and prolong vision in RP patients. PMID:21212852

  19. Recent advances in the development and transfer of machine vision technologies for space

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  20. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
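
    The user-defined linear sequential-loop arrangement of filters can be sketched as follows; the filter functions here are hypothetical stand-ins, not μAVS2's actual filter set:

```python
# Hedged sketch of a sequential filter loop: each frame passes through the
# filters in the user's chosen order, with no inter-filter buffering.
def make_pipeline(filters):
    def process(frame):
        for f in filters:                 # user-defined order, applied in turn
            frame = f(frame)
        return frame
    return process

def invert(frame):                        # 8-bit intensity inversion
    return [[255 - p for p in row] for row in frame]

def threshold(frame, t=128):              # binarize for high contrast
    return [[255 if p >= t else 0 for p in row] for row in frame]

pipeline = make_pipeline([invert, threshold])
frame = [[0, 100, 200], [250, 30, 140]]   # stand-in for a camera frame
out = pipeline(frame)
```

    Because filters run strictly in sequence on one frame at a time, memory and CPU needs stay small, which is the property the abstract attributes to the device.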

  1. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses.

    PubMed

    Fink, Wolfgang; You, Cindy X; Tarbell, Mark A

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (microAVS(2)) for real-time image processing. Truly standalone, microAVS(2) is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on microAVS(2) operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. MicroAVS(2) imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, microAVS(2) affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, microAVS(2) can easily be reconfigured for other prosthetic systems. Testing of microAVS(2) with actual retinal implant carriers is envisioned in the near future.

  2. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  3. Vision rehabilitation in the case of blindness.

    PubMed

    Veraart, Claude; Duret, Florence; Brelén, Marten; Oozeer, Medhy; Delbeke, Jean

    2004-09-01

    This article examines the various vision rehabilitation procedures that are available for early and late blindness. Depending on the pathology involved, several vision rehabilitation procedures exist, or are in development. Visual aids are available for low vision individuals, as are sensory aids for blind persons. Most noninvasive sensory substitution prostheses as well as implanted visual prostheses in development are reviewed. Issues dealing with vision rehabilitation are also discussed, such as problems of biocompatibility, electrical safety, psychosocial aspects, and ethics. Basic studies devoted to vision rehabilitation such as simulation in mathematical models and simulation of artificial vision are also presented. Finally, the importance of accurate rehabilitation assessment is addressed, and tentative market figures are given.

  4. Learning from vision-to-touch is different than learning from touch-to-vision.

    PubMed

    Wismeijer, Dagmar A; Gegenfurtner, Karl R; Drewing, Knut

    2012-01-01

    We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two "natural" tasks, we tested whether visual gloss estimations have an influence on haptic estimations of softness and vice versa. In two "novel" tasks, in which participants were either asked to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision.

  5. Real time AI expert system for robotic applications

    NASA Technical Reports Server (NTRS)

    Follin, John F.

    1987-01-01

    A computer controlled multi-robot process cell to demonstrate advanced technologies for the demilitarization of obsolete chemical munitions was developed. The methods through which the vision system and other sensory inputs were used by the artificial intelligence to provide the information required to direct the robots to complete the desired task are discussed. The mechanisms that the expert system uses to solve problems (goals), the different rule data base, and the methods for adapting this control system to any device that can be controlled or programmed through a high level computer interface are discussed.

  6. Technology Support for Discussion Based Learning: From Computer Supported Collaborative Learning to the Future of Massive Open Online Courses

    ERIC Educational Resources Information Center

    Rosé, Carolyn Penstein; Ferschke, Oliver

    2016-01-01

    This article offers a vision for technology supported collaborative and discussion-based learning at scale. It begins with historical work in the area of tutorial dialogue systems. It traces the history of that area of the field of Artificial Intelligence in Education as it has made an impact on the field of Computer-Supported Collaborative…

  7. Social energy: mining energy from the society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jun Jason; Gao, David Wenzhong; Zhang, Yingchen

    The inherent nature of energy, i.e., physicality, sociality and informatization, implies the inevitable and intensive interaction between energy systems and social systems. From this perspective, we define 'social energy' as a complex sociotechnical system of energy systems, social systems and the derived artificial virtual systems which characterize the intense intersystem and intra-system interactions. The recent advancement in intelligent technology, including artificial intelligence and machine learning technologies, sensing and communication in Internet of Things technologies, and massive high performance computing and extreme-scale data analytics technologies, enables the possibility of substantial advancement in socio-technical system optimization, scheduling, control and management. In this paper, we provide a discussion on the nature of energy, and then propose the concept and intention of social energy systems for electrical power. A general methodology of establishing and investigating social energy is proposed, which is based on the ACP approach, i.e., 'artificial systems' (A), 'computational experiments' (C) and 'parallel execution' (P), and parallel system methodology. A case study on the University of Denver (DU) campus grid is provided and studied to demonstrate the social energy concept. In the concluding remarks, we discuss the technical pathway, in both social and nature sciences, to social energy, and our vision on its future.

  8. Blind subjects implanted with the Argus II retinal prosthesis are able to improve performance in a spatial-motor task

    PubMed Central

    Ahuja, A K; Dorn, J D; Caspi, A; McMahon, M J; Dagnelie, G; daCruz, L; Stanga, P; Humayun, M S; Greenberg, R J

    2012-01-01

    Background/aims To determine to what extent subjects implanted with the Argus II retinal prosthesis can improve performance compared with residual native vision in a spatial-motor task. Methods High-contrast square stimuli (5.85 cm sides) were displayed in random locations on a 19″ (48.3 cm) touch screen monitor located 12″ (30.5 cm) in front of the subject. Subjects were instructed to locate and touch the square centre with the system on and then off (40 trials each). The coordinates of the square centre and location touched were recorded. Results Ninety-six percent (26/27) of subjects showed a significant improvement in accuracy and 93% (25/27) show a significant improvement in repeatability with the system on compared with off (p<0.05, Student t test). A group of five subjects that had both accuracy and repeatability values <250 pixels (7.4 cm) with the system off (ie, using only their residual vision) was significantly more accurate and repeatable than the remainder of the cohort (p<0.01). Of this group, four subjects showed a significant improvement in both accuracy and repeatability with the system on. Conclusion In a study on the largest cohort of visual prosthesis recipients to date, we found that artificial vision augments information from existing vision in a spatial-motor task. Clinical trials registry no NCT00407602. PMID:20881025
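
    One plausible reading of the two reported metrics (an assumption; the abstract does not give the formulas) is: "accuracy" as the mean touch-to-target distance, and "repeatability" as the spread of the touch errors about their own mean. A minimal sketch in pixel coordinates:

```python
import math

# Hedged sketch: targets and touches are (x, y) screen coordinates in pixels,
# paired per trial, as in the touch-screen task described above.
def accuracy_and_repeatability(targets, touches):
    errors = [(tx - cx, ty - cy)
              for (cx, cy), (tx, ty) in zip(targets, touches)]
    acc = sum(math.hypot(ex, ey) for ex, ey in errors) / len(errors)
    mx = sum(ex for ex, _ in errors) / len(errors)      # mean error vector
    my = sum(ey for _, ey in errors) / len(errors)
    rep = sum(math.hypot(ex - mx, ey - my) for ex, ey in errors) / len(errors)
    return acc, rep

targets = [(0, 0), (0, 0)]            # toy trials, not study data
touches = [(3, 4), (-3, -4)]
acc, rep = accuracy_and_repeatability(targets, touches)
```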

  9. Reading with peripheral vision: a comparison of reading dynamic scrolling and static text with a simulated central scotoma.

    PubMed

    Harvey, Hannah; Walker, Robin

    2014-05-01

    Horizontally scrolling text is, in theory, ideally suited to enhance viewing strategies recommended to improve reading performance under conditions of central vision loss such as macular disease, although it is largely unproven in this regard. This study investigated if the use of scrolling text produced an observable improvement in reading performed under conditions of eccentric viewing in an artificial scotoma paradigm. Participants (n=17) read scrolling and static text with a central artificial scotoma controlled by an eye-tracker. There was an improvement in measures of reading accuracy, and adherence to eccentric viewing strategies with scrolling, compared to static, text. These findings illustrate the potential benefits of scrolling text as a potential reading aid for those with central vision loss. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Aggregating and Communicating Uncertainty.

    DTIC Science & Technology

    1980-04-01

    wholes, the organic, inclusive structures of events. In artificial intelligence applications, these wholes are sometimes called frames, scripts, or...Here, we will use a somewhat more artificial example. It is well known that many of the more hawkish forces within the Soviet Union believe that a...the actual figures; another nation, which wants to give an exaggerated vision of its capabilities, may provide artificially inflated figures. DE

  11. Calibrators measurement system for headlamp tester of motor vehicle base on machine vision

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe

    2014-09-01

    With the development of photoelectric detection technology, machine vision is seeing wider use in industry. The paper mainly introduces a measurement system for calibrators of motor-vehicle headlamp testers, the core of which is a CCD image sampling system. It presents the measuring principle for the optical axial angle and the light intensity, and proves the linear relationship between the calibrator's facula illumination and the image plane illumination, providing an important specification of the CCD imaging system. Image processing in MATLAB yields the flare's geometric midpoint and average gray level. By fitting the statistics via the method of least squares, a regression equation relating illumination and gray level is obtained. The paper analyzes the error of the experimental results of the measurement system, and gives the combined standard uncertainty of the optical axial angle together with its sources. The average measuring accuracy of the optical axial angle is controlled within 40''. The whole testing process uses digital means instead of manual intervention, which gives higher accuracy and better repeatability than other measuring systems.
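
    The least-squares fit of the illumination/gray-level relationship can be sketched as follows (in Python rather than the MATLAB used in the paper; the data values are illustrative, not the paper's measurements):

```python
# Ordinary least squares for y = a + b*x: slope from covariance over variance.
def least_squares(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))        # slope
    a = my - b * mx                               # intercept
    return a, b

# toy data: illumination (lux) vs. mean gray level, exactly linear here
illum = [1.0, 2.0, 3.0, 4.0]
gray = [3.0, 5.0, 7.0, 9.0]
a, b = least_squares(illum, gray)                 # gray = 1 + 2 * illum
```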

  12. Computer vision system for egg volume prediction using backpropagation neural network

    NASA Astrophysics Data System (ADS)

    Siswantoro, J.; Hilman, M. Y.; Widiasri, M.

    2017-11-01

    Volume is one of the aspects considered in the egg sorting process. A rapid and accurate volume measurement method is needed to develop an egg sorting system. A computer vision system (CVS) provides a promising solution to the volume measurement problem. Artificial neural networks (ANNs) have been used to predict the volume of eggs in several CVSs. However, volume prediction from an ANN can lose accuracy due to inappropriate input features or an inappropriate ANN structure. This paper proposes a CVS for predicting the volume of an egg using an ANN. The CVS acquired an image of the egg from the top view and then processed the image to extract its 1D and 2D size features. The features were used as ANN inputs for predicting the volume of the egg. The experimental results show that the proposed CVS can predict the volume of an egg with good accuracy and low computation time.
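
    The regression step, size features in, volume estimate out, can be sketched with a one-hidden-layer network trained by plain backpropagation. The data below are a synthetic stand-in (an assumed volume-like function of length and width), not the paper's egg measurements or its actual network structure:

```python
import math, random

random.seed(0)

# Synthetic (length, width) -> volume-like target, scaled to [0, 1].
samples = [((l / 10.0, w / 10.0), 0.5 * l * w * w)
           for l in (4.0, 5.0, 6.0) for w in (3.0, 3.5, 4.0)]
tmax = max(t for _, t in samples)
samples = [(x, t / tmax) for x, t in samples]

H = 4                                             # hidden units (assumption)
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i][0] * x[0] + w1[i][1] * x[1] + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in samples) / len(samples)

loss_before = mse()
lr = 0.05
for _ in range(500):                              # stochastic backprop epochs
    for x, t in samples:
        h, y = forward(x)
        e = 2.0 * (y - t)                         # dLoss/dy
        for i in range(H):
            gh = e * w2[i] * (1.0 - h[i] ** 2)    # backprop through tanh
            w2[i] -= lr * e * h[i]
            b1[i] -= lr * gh
            w1[i][0] -= lr * gh * x[0]
            w1[i][1] -= lr * gh * x[1]
        b2 -= lr * e
loss_after = mse()                                # training reduces the error
```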

  13. Microoptical artificial compound eyes: from design to experimental verification of two different concepts

    NASA Astrophysics Data System (ADS)

    Duparré, Jacques; Wippermann, Frank; Dannberg, Peter; Schreiber, Peter; Bräuer, Andreas; Völkel, Reinhard; Scharf, Toralf

    2005-09-01

    Two novel objective types on the basis of artificial compound eyes are examined. Both imaging systems are well suited for fabrication using microoptics technology due to the small required lens sags. In the apposition optics a microlens array (MLA) and a photo detector array of different pitch in its focal plane are applied. The image reconstruction is based on moire magnification. Several generations of demonstrators of this objective type are manufactured by photo lithographic processes. This includes a system with opaque walls between adjacent channels and an objective which is directly applied onto a CMOS detector array. The cluster eye approach, which is based on a mixture of superposition compound eyes and the vision system of jumping spiders, produces a regular image. Here, three microlens arrays of different pitch form arrays of Keplerian microtelescopes with tilted optical axes, including a field lens. The microlens arrays of this demonstrator are also fabricated using microoptics technology, aperture arrays are applied. Subsequently the lens arrays are stacked to the overall microoptical system on wafer scale. Both fabricated types of artificial compound eye imaging systems are experimentally characterized with respect to resolution, sensitivity and cross talk between adjacent channels. Captured images are presented.

  14. Optoelectronic vision

    NASA Astrophysics Data System (ADS)

    Ren, Chunye; Parel, Jean-Marie A.

    1993-06-01

    Scientists have searched every discipline to find effective methods of treating blindness, such as aids based on converting the optical image to auditory or tactile stimuli. However, the limited performance of such equipment and difficulties in training patients have seriously hampered practical applications. Great insight came from the discovery of Foerster (1929) and Krause & Schum (1931), who found that electrical stimulation of the visual cortex evokes the perception of a small spot of light, called a `phosphene', in both blind and sighted subjects. According to this principle, it is possible to evoke artificial vision by stimulating the visual neural system with electrodes, thereby developing a prosthesis for the blind that might be of value in reading and mobility. In fact, a number of investigators have already exploited this phenomenon to produce a functional visual prosthesis, bringing about great advances in this area.

  15. Applications of wavelets in interferometry and artificial vision

    NASA Astrophysics Data System (ADS)

    Escalona Z., Rafael A.

    2001-08-01

    In this paper we present a different point of view on phase measurements performed in interferometry, image processing, and intelligent vision using the wavelet transform. In standard and white-light interferometry, the phase function is retrieved using phase-shifting, Fourier-transform, cosine-inversion, and other known algorithms. The novel technique presented here is faster, robust, and shows excellent accuracy in phase determination. Finally, in our second application, fringes are no longer generated by light interaction but result from observing adapted strip patterns printed directly on the target of interest. The moving target is simply observed by a conventional vision system, and the usual phase-computation algorithms are adapted to image processing by wavelet transform in order to sense target position and displacement with high accuracy. In general, we have found that the wavelet transform offers robustness, relative speed of computation, and very high accuracy in phase computation.
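    The paper's wavelet algorithm itself is not reproduced in the abstract; as a minimal stand-in, the sketch below recovers the wrapped phase of a synthetic fringe signal with a frequency-domain analytic filter, which can be viewed as the single-scale limit of a complex-wavelet (e.g. Morlet) analysis:

    ```python
    import numpy as np

    def fringe_phase(signal: np.ndarray) -> np.ndarray:
        """Wrapped phase of a fringe signal via an analytic-signal filter
        (a single-scale stand-in for a complex-wavelet ridge analysis)."""
        num = signal.size
        spec = np.fft.fft(signal)
        spec[num // 2 + 1:] = 0.0   # drop the negative-frequency half
        spec[1:num // 2] *= 2.0     # rescale the positive frequencies
        return np.angle(np.fft.ifft(spec))

    # synthetic fringe: carrier of 8 cycles over 256 samples, 0.7 rad offset
    n = np.arange(256)
    s = np.cos(2 * np.pi * 8 * n / 256 + 0.7)
    print(round(fringe_phase(s)[0], 3))  # -> 0.7
    ```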

  16. Harnessing vision for computation.

    PubMed

    Changizi, Mark

    2008-01-01

    Might it be possible to harness the visual system to carry out artificial computations, somewhat akin to how DNA has been harnessed to carry out computation? I provide the beginnings of a research programme attempting to do this. In particular, new techniques are described for building 'visual circuits' (or 'visual software') using wire, NOT, OR, and AND gates in a visual modality, such that our visual system acts as 'visual hardware' computing the circuit and generating a resultant perception which is the output.
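    In software terms, such a visual circuit is simply a composition of boolean primitives; a toy sketch of the gate vocabulary (the perceptual encoding of wires and gates is, of course, the paper's actual contribution):

    ```python
    # Software analogue of a 'visual circuit': the same wire/NOT/OR/AND
    # composition, evaluated as ordinary boolean functions.
    def NOT(a): return not a
    def OR(a, b): return a or b
    def AND(a, b): return a and b

    def circuit(x, y, z):
        """Example circuit: (x OR y) AND (NOT z)."""
        return AND(OR(x, y), NOT(z))

    print(circuit(True, False, False))  # -> True
    ```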

  17. Neural network expert system for X-ray analysis of welded joints

    NASA Astrophysics Data System (ADS)

    Kozlov, V. V.; Lapik, N. V.; Popova, N. V.

    2018-03-01

    The use of intelligent technologies for the automated analysis of product quality is one of the main trends in modern machine building. At the same time, methods based on artificial neural networks, as the basis for building automated intelligent diagnostic systems, are developing rapidly across many spheres of human activity. Machine vision technologies make it possible to detect effectively the presence of certain regularities in the analyzed image, including defects of welded joints, from radiography data.

  18. Artificial Muscles Based on Electroactive Polymers as an Enabling Tool in Biomimetics

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Y.

    2007-01-01

    Evolution has resolved many of nature's challenges, leading to working and lasting solutions that employ principles of physics, chemistry, mechanical engineering, materials science, and many other fields of science and engineering. Nature's inventions have always inspired human achievements, leading to effective materials, structures, tools, mechanisms, processes, algorithms, methods, systems, and many other benefits. Among the technologies that have emerged are artificial intelligence, artificial vision, and artificial muscles, where the latter is the moniker for electroactive polymers (EAPs). To take advantage of these materials and make them practical actuators, efforts are being made worldwide to develop the capabilities that are critical to the field's infrastructure. Researchers are developing analytical models and a comprehensive understanding of the response mechanisms of EAP materials, as well as effective processing and characterization techniques. The field is still in an emerging state and robust materials are not yet readily available; however, in recent years significant progress has been made and commercial products have started to appear. In the current paper, the state of the art of artificial muscles, the challenges they face, and their potential application to biomimetic mechanisms and devices are described and discussed.

  19. Learning from vision-to-touch is different than learning from touch-to-vision

    PubMed Central

    Wismeijer, Dagmar A.; Gegenfurtner, Karl R.; Drewing, Knut

    2012-01-01

    We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two “natural” tasks, we tested whether visual gloss estimates influence haptic estimates of softness and vice versa. In two “novel” tasks, in which participants were asked either to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision. PMID:23181012

  20. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network.

    PubMed

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-04-13

    Road sign recognition is a driver-support function that can be used to notify and warn the driver of the restrictions in effect on the current stretch of road. Examples of such regulations are 'traffic light ahead' or 'pedestrian crossing' indications. The present investigation targets the recognition of Malaysian road and traffic signs in real time. Real-time video is taken by a digital camera from a moving vehicle, and real-world road signs are then extracted using vision-only information. The system is based on two stages: the first performs detection and the second recognition. In the first stage, a hybrid color-segmentation algorithm has been developed and tested. In the second stage, a robust custom feature-extraction method is introduced and used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. The system is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. The proposed system achieved an average of 99.90% accuracy, with 99.90% sensitivity, 99.90% specificity, 99.90% f-measure, and a false positive rate (FPR) of 0.001, with 0.3 s computational time. This low FPR can increase the system's stability and dependability in real-time applications.
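    The paper's hybrid color-segmentation algorithm is not specified in the abstract; the sketch below shows a generic HSV red-mask stage of the kind such sign detectors typically start from (the thresholds are illustrative assumptions, not the authors' values):

    ```python
    import numpy as np

    def red_sign_mask(hsv: np.ndarray) -> np.ndarray:
        """Binary mask of red-ish pixels in an HSV image (OpenCV convention:
        H in [0, 180), S and V in [0, 255]). Red wraps around hue 0, so two
        hue bands are combined. Thresholds are illustrative assumptions."""
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
        hue_red = (h < 10) | (h > 170)
        return hue_red & (s > 100) & (v > 60)

    pixels = np.array([[[5, 200, 200], [90, 200, 200]]])  # red-ish, green-ish
    print(red_sign_mask(pixels).tolist())  # -> [[True, False]]
    ```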

  1. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network

    PubMed Central

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-01-01

    Road sign recognition is a driver-support function that can be used to notify and warn the driver of the restrictions in effect on the current stretch of road. Examples of such regulations are ‘traffic light ahead’ or ‘pedestrian crossing’ indications. The present investigation targets the recognition of Malaysian road and traffic signs in real time. Real-time video is taken by a digital camera from a moving vehicle, and real-world road signs are then extracted using vision-only information. The system is based on two stages: the first performs detection and the second recognition. In the first stage, a hybrid color-segmentation algorithm has been developed and tested. In the second stage, a robust custom feature-extraction method is introduced and used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. The system is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. The proposed system achieved an average of 99.90% accuracy, with 99.90% sensitivity, 99.90% specificity, 99.90% f-measure, and a false positive rate (FPR) of 0.001, with 0.3 s computational time. This low FPR can increase the system's stability and dependability in real-time applications. PMID:28406471

  2. Simulating the Mind

    NASA Astrophysics Data System (ADS)

    Dietrich, Dietmar; Fodor, Georg; Zucker, Gerhard; Bruckner, Dietmar

    The approach to developing models described in the following chapters breaks with some previously used approaches in Artificial Intelligence. This is the first attempt to use methods from psychoanalysis, organized in a strictly top-down design method, in order to take an important step towards the creation of intelligent systems. Hence, the vision and the research hypothesis are described at the outset and will, it is hoped, prove to rest on sufficient grounds for this approach.

  3. Pattern recognition neural-net by spatial mapping of biology visual field

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Mori, Masahiko

    2000-05-01

    The method of spatial mapping in the biological visual field is applied to artificial neural networks for pattern recognition. Through a coordinate transform known as complex-logarithm mapping, followed by a Fourier transform, the input images are transformed into scale-, rotation-, and shift-invariant patterns, and then fed into a multilayer neural network for learning and recognition. The results of a computer simulation and an optical experimental system are described.
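    The invariance chain described above (Fourier magnitude for shift, complex-log mapping for scale and rotation) can be sketched as follows; the array sizes and sampling densities are illustrative, not the paper's:

    ```python
    import numpy as np

    def invariant_signature(img: np.ndarray, n_r: int = 32, n_theta: int = 32) -> np.ndarray:
        """Scale-, rotation- and shift-tolerant pattern signature (sketch).

        Step 1: |FFT| of the image discards translation.
        Step 2: complex-logarithm (log-polar) resampling of the spectrum
        turns scaling and rotation into cyclic shifts.
        Step 3: a second |FFT| discards those shifts, leaving a feature map
        that could feed a multilayer neural network.
        """
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        h, w = spectrum.shape
        cy, cx = h // 2, w // 2
        r_max = min(cy, cx)
        # log-spaced radii, uniform angles, nearest-neighbour sampling
        radii = np.exp(np.linspace(0.0, np.log(r_max - 1), n_r))
        thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
        ys = (cy + np.outer(radii, np.sin(thetas))).astype(int) % h
        xs = (cx + np.outer(radii, np.cos(thetas))).astype(int) % w
        return np.abs(np.fft.fft2(spectrum[ys, xs]))

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    sig = invariant_signature(img)
    print(sig.shape)  # -> (32, 32)
    ```

    The shift invariance is exact for cyclic shifts, since the spectrum magnitude is unchanged; scale and rotation invariance is approximate and depends on the sampling density.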

  4. First Results of an “Artificial Retina” Processor Prototype

    DOE PAGES

    Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro; ...

    2016-11-15

    We report on the performance of a specialized processor capable of reconstructing charged-particle tracks in a realistic LHC silicon tracker detector at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the “artificial retina algorithm”, inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is the first step towards a real-time track-reconstruction device aimed at processing complex events of high-luminosity LHC experiments at a 40 MHz crossing rate.

  5. First Results of an “Artificial Retina” Processor Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro

    We report on the performance of a specialized processor capable of reconstructing charged-particle tracks in a realistic LHC silicon tracker detector at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the “artificial retina algorithm”, inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is the first step towards a real-time track-reconstruction device aimed at processing complex events of high-luminosity LHC experiments at a 40 MHz crossing rate.

  6. The Case for Artificial Intelligence in Medicine

    PubMed Central

    Reggia, James A.

    1983-01-01

    Current artificial intelligence (AI) technology can be viewed as producing “systematic artifacts” onto which we project an interpretation of intelligent behavior. One major benefit this technology could bring to medicine is help with handling the tremendous and growing volume of medical knowledge. The reader is led to a vision of the medical library of tomorrow, an interactive, artificially intelligent knowledge source that is fully and directly integrated with daily patient care.

  7. Microsoft kinect-based artificial perception system for control of functional electrical stimulation assisted grasping.

    PubMed

    Strbac, Matija; Kočović, Slobodan; Marković, Marko; Popović, Dejan B

    2014-01-01

    We present a computer vision algorithm that incorporates a heuristic model mimicking a biological control system for the estimation of control signals used in functional electrical stimulation (FES) assisted grasping. The developed processing software acquires data from a Microsoft Kinect camera and implements real-time hand tracking and object analysis. This information can be used to identify temporal-synchrony and spatial-synergy modalities for FES control. The algorithm therefore acts as artificial perception, mimicking human visual perception by identifying the position and shape of the object with respect to the position of the hand in real time during the planning phase of the grasp. This artificial perception, used within the heuristically developed model, allows selection of the appropriate grasp and prehension. The experiments demonstrate that the correct grasp modality was selected in more than 90% of the tested scenarios/objects. The system is portable and its components are low-cost and robust; hence, it can be used for FES in a clinical or even a home environment. The main application of the system is envisioned for functional electrical therapy, that is, intensive exercise assisted with FES.
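    The heuristic model itself is not given in the abstract; a hypothetical grasp-selection rule driven by the object geometry such a pipeline extracts might look like the sketch below (the classes and thresholds are assumptions for illustration, not the authors' model):

    ```python
    def select_grasp(width_cm: float, height_cm: float) -> str:
        """Illustrative grasp-selection heuristic from object dimensions
        estimated by the camera. Thresholds and class names are assumed."""
        if width_cm <= 3.0 and height_cm <= 3.0:
            return "precision"   # small object: fingertip pinch
        if width_cm > height_cm:
            return "lateral"     # flat/wide object: key grip
        return "palmar"          # tall object: whole-hand wrap

    print(select_grasp(7.0, 15.0))  # -> palmar
    ```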

  8. Microsoft Kinect-Based Artificial Perception System for Control of Functional Electrical Stimulation Assisted Grasping

    PubMed Central

    Kočović, Slobodan; Popović, Dejan B.

    2014-01-01

    We present a computer vision algorithm that incorporates a heuristic model mimicking a biological control system for the estimation of control signals used in functional electrical stimulation (FES) assisted grasping. The developed processing software acquires data from a Microsoft Kinect camera and implements real-time hand tracking and object analysis. This information can be used to identify temporal-synchrony and spatial-synergy modalities for FES control. The algorithm therefore acts as artificial perception, mimicking human visual perception by identifying the position and shape of the object with respect to the position of the hand in real time during the planning phase of the grasp. This artificial perception, used within the heuristically developed model, allows selection of the appropriate grasp and prehension. The experiments demonstrate that the correct grasp modality was selected in more than 90% of the tested scenarios/objects. The system is portable and its components are low-cost and robust; hence, it can be used for FES in a clinical or even a home environment. The main application of the system is envisioned for functional electrical therapy, that is, intensive exercise assisted with FES. PMID:25202707

  9. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    NASA Astrophysics Data System (ADS)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    We present an autonomous system capable of performing security check routines. The surveillance machine, a Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation, and 3D reconstruction of its environment. Combining the computer vision algorithms with the robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of its side-mounted cameras to perform 3D reconstruction from monocular vision, updating a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment, as well as to familiarize itself with regular faces and actions so as to distinguish potentially dangerous behavior. In this paper, we present the various algorithms, and the modifications to them, which when implemented on the RAIDER serve the purpose of indoor surveillance.

  10. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, the approach based on modeling biological vision mechanisms is being extensively developed in computer vision. However, real-world image processing still has no effective solution within either the biologically inspired or the conventional approach. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems of this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new low-level image-processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into a foveal active vision model, MARR. The proposed algorithms are expected to significantly improve the model's performance in real-world image processing during memorizing, search, and recognition.

  11. Artificial fish skin of self-powered micro-electromechanical systems hair cells for sensing hydrodynamic flow phenomena.

    PubMed

    Asadnia, Mohsen; Kottapalli, Ajay Giri Prakash; Miao, Jianmin; Warkiani, Majid Ebrahimi; Triantafyllou, Michael S

    2015-10-06

    Using biological sensors, aquatic animals like fishes are capable of impressive behaviours such as super-manoeuvrability, hydrodynamic flow 'vision' and object localization, with a success unmatched by human-engineered technologies. Inspired by the multiple functionalities of the ubiquitous lateral-line sensors of fishes, we developed flexible and surface-mountable arrays of micro-electromechanical systems (MEMS) artificial hair-cell flow sensors. This paper reports the development of MEMS artificial versions of superficial and canal neuromasts and the experimental characterization of their unique flow-sensing roles. Our MEMS flow sensors feature a stereolithographically fabricated polymer hair cell mounted on a Pb(Zr(0.52)Ti(0.48))O3 micro-diaphragm with a floating bottom electrode. Canal-inspired versions are developed by mounting a polymer canal, with pores that guide external flows, over the hair cells embedded in the canal. Experiments employing our MEMS artificial superficial neuromasts (SNs) demonstrated a high sensitivity of 22 mV/(mm s(-1)) and a very low detection threshold of 8.2 µm s(-1) for an oscillating dipole stimulus vibrating at 35 Hz. Flexible arrays of such superficial sensors were demonstrated to localize an underwater dipole stimulus. Comparative experiments revealed the high-pass filtering nature of the canal-encapsulated sensors, with a cut-off frequency of 10 Hz, and the flat frequency response of the artificial SNs. Flexible arrays of self-powered, miniaturized, lightweight, low-cost and robust artificial lateral-line systems could enhance the capabilities of underwater vehicles. © 2015 The Author(s).
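    A quick sanity check on the quoted figures, assuming a linear sensor response (the linearity over this range is an assumption): the output at the detection threshold is sensitivity × threshold velocity.

    ```python
    sensitivity_mV_per_mm_s = 22.0   # 22 mV/(mm/s), from the abstract
    threshold_um_s = 8.2             # 8.2 um/s detection limit, from the abstract

    threshold_mm_s = threshold_um_s / 1000.0
    output_at_threshold_mV = sensitivity_mV_per_mm_s * threshold_mm_s
    # output at threshold in microvolts
    print(round(output_at_threshold_mV * 1000.0, 1))  # -> 180.4
    ```

    So the threshold stimulus corresponds to roughly a 0.18 mV output, a plausible floor for the readout electronics.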

  12. The 1990 Goddard Conference on Space Applications of Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1990-01-01

    The papers presented at the 1990 Goddard Conference on Space Applications of Artificial Intelligence are given. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The proceedings fall into the following areas: Planning and Scheduling, Fault Monitoring/Diagnosis, Image Processing and Machine Vision, Robotics/Intelligent Control, Development Methodologies, Information Management, and Knowledge Acquisition.

  13. Vision Guided Intelligent Robot Design And Experiments

    NASA Astrophysics Data System (ADS)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-systems approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box-stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  14. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. With sensors based on infrared and ultrasonic ranging, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot far more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot is confronted with an inclined plane or steps, particular algorithms are needed for it to proceed without failure. This study developed an algorithm that recognizes the distance and gradient of the environment by a stereo matching process.
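    The distance-recognition step rests on the standard rectified-stereo relation Z = fB/d; a minimal sketch with illustrative camera parameters (not the paper's calibration):

    ```python
    def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Depth of a matched point from a rectified stereo pair
        (pinhole model): Z = f * B / d. Nearer objects give larger disparity."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    # illustrative: 700 px focal length, 12 cm baseline, 42 px disparity
    print(round(depth_from_disparity(700.0, 0.12, 42.0), 3))  # -> 2.0 (metres)
    ```

    A surface gradient can then be estimated by fitting a plane to the depths recovered across a patch of matched points.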

  15. Science and Technology of Bio-Inert Thin Films as Hermetic-Encapsulating Coatings for Implantable Biomedical Devices: Application to Implantable Microchip in the Eye for the Artificial Retina

    NASA Astrophysics Data System (ADS)

    Auciello, Orlando; Shi, Bing

    Extensive research has been devoted to the development of neural prostheses and hybrid bionic systems to establish links between the nervous system and electronic or robotic prostheses, with the main focus on restoring motor and sensory functions in blind patients. Artificial retinas, the type of neural prosthesis we are currently working on, aim to restore some vision in patients blinded by retinitis pigmentosa or macular degeneration, and in the future to restore vision at the level of face recognition, if not more. Currently there is no hermetic microchip-scale coating that provides reliable, long-term (years) performance as an encapsulating coating for the artificial-retina Si microchip to be implanted inside the eye. This chapter focuses on the critical topics relevant to the development of a robust, long-term artificial retina device, namely the science and technology of hermetic bio-inert encapsulating coatings that protect a Si microchip implanted in the human eye from attack by the chemicals in the eye's saline environment. The work discussed in this chapter relates to the development of a novel ultrananocrystalline diamond (UNCD) hermetic coating, which exhibited no degradation in rabbit eyes. The material synthesis, characterization, and electrochemical properties of these hermetic coatings are reviewed for application as encapsulating coatings for artificial retinal microchips implantable inside the human eye. Our work has shown that UNCD coatings may provide a reliable hermetic bio-inert coating technology for the encapsulation of Si microchips implantable in the eye specifically and in the human body in general. Electrochemical tests of UNCD films grown under a CH4/Ar/H2 (1%) plasma exhibit the lowest leakage currents (~7 × 10^-7 A/cm2) in a saline solution simulating the eye environment. This leakage is still incompatible with the functionality of the first-generation artificial retinal microchip. However, the growth of UNCD on top of the Si microchip passivated by a silicon nitride layer or oxide layers is also under investigation in our group, as introduced in this chapter. The electrochemically induced leakage is expected to be reduced by one to three orders of magnitude, to the range of 10^-10 A/cm2, which is compatible with reliable, long-term implants.

  16. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition when an object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of the more generic image understanding problem. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computation of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such image/video understanding systems will recognize targets reliably.

  17. Development of Moire machine vision

    NASA Technical Reports Server (NTRS)

    Harding, Kevin G.

    1987-01-01

    Three-dimensional perception is essential to the development of versatile robotic systems for handling complex manufacturing tasks in the factories of the future, and for providing the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and will demonstrate artificial intelligence (AI) techniques that take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three-dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation offers high productivity through integrated automation, producing a high-quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. The nondestructive evaluation is developed to provide full-field range measurement and three-dimensional scene analysis.

  18. How do plants see the world? - UV imaging with a TiO2 nanowire array by artificial photosynthesis.

    PubMed

    Kang, Ji-Hoon; Leportier, Thibault; Park, Min-Chul; Han, Sung Gyu; Song, Jin-Dong; Ju, Hyunsu; Hwang, Yun Jeong; Ju, Byeong-Kwon; Poon, Ting-Chung

    2018-05-10

    The concept of plant vision refers to the fact that plants are receptive to their visual environment, although the mechanism involved is quite distinct from the human visual system. The mechanism in plants is not well understood and has yet to be fully investigated. In this work, we have exploited the properties of TiO2 nanowires as a UV sensor to simulate the phenomenon of photosynthesis in order to come one step closer to understanding how plants see the world. To the best of our knowledge, this study is the first approach to emulate and depict plant vision. We have emulated the visual map perceived by plants with a single-pixel imaging system combined with a mechanical scanner. The image acquisition has been demonstrated for several electrolyte environments, in both transmissive and reflective configurations, in order to explore the different conditions in which plants perceive light.

  19. Development of Moire machine vision

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.

    1987-10-01

    Three-dimensional perception is essential to the development of versatile robotic systems for handling complex manufacturing tasks in the factories of the future, and for providing the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and will demonstrate artificial intelligence (AI) techniques that take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three-dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation offers high productivity through integrated automation, producing a high-quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. The nondestructive evaluation is developed to provide full-field range measurement and three-dimensional scene analysis.

  20. System of fabricating a flexible electrode array

    DOEpatents

    Krulevitch, Peter; Polla, Dennis L.; Maghribi, Mariam N.; Hamilton, Julie; Humayun, Mark S.; Weiland, James D.

    2010-10-12

    An image is captured or otherwise converted into a signal in an artificial vision system. The signal is transmitted to the retina utilizing an implant. The implant consists of a polymer substrate made of a compliant material such as poly(dimethylsiloxane) or PDMS. The polymer substrate is conformable to the shape of the retina. Electrodes and conductive leads are embedded in the polymer substrate. The conductive leads and the electrodes transmit the signal representing the image to the cells in the retina. The signal representing the image stimulates cells in the retina.

  1. System of fabricating a flexible electrode array

    DOEpatents

    Krulevitch, Peter [Pleasanton, CA; Polla, Dennis L [Roseville, MN; Maghribi, Mariam N [Davis, CA; Hamilton, Julie [Tracy, CA; Humayun, Mark S [La Canada, CA; Weiland, James D [Valencia, CA

    2012-01-28

    An image is captured or otherwise converted into a signal in an artificial vision system. The signal is transmitted to the retina utilizing an implant. The implant consists of a polymer substrate made of a compliant material such as poly(dimethylsiloxane) or PDMS. The polymer substrate is conformable to the shape of the retina. Electrodes and conductive leads are embedded in the polymer substrate. The conductive leads and the electrodes transmit the signal representing the image to the cells in the retina. The signal representing the image stimulates cells in the retina.

  2. Biologically inspired robots as artificial inspectors

    NASA Astrophysics Data System (ADS)

    Bar-Cohen, Yoseph

    2002-06-01

    Imagine an inspector conducting an NDE on an aircraft when you notice something different about him - he is not real, but rather a robot. Your first reaction would probably be to say 'it's unbelievable, but he looks real', just as you would react to an artificial flower that is a good imitation. This science-fiction scenario could become a reality given the trend in the development of biologically inspired technologies, where terms like artificial intelligence, artificial muscles, and artificial vision are increasingly becoming common engineering tools. For many years the trend has been to automate processes in order to increase the efficiency of performing repetitive tasks, and various systems have been developed to deal with specific production-line requirements. Realizing that some parts are too complex or delicate to handle in small quantities with a simple automatic system, robotic mechanisms were developed. Aircraft inspection has benefited from this evolving technology, with manipulators and crawlers developed for rapid and reliable inspection. Advancement in robotics towards making robots autonomous, and possibly human-like in appearance, can potentially address the need to inspect structures that are beyond the capability of today's technology, in configurations that are not predetermined. These robots may have to operate in harsh or hazardous environments that are too dangerous for human presence. Making such robots is becoming increasingly feasible, and this paper reviews the state of the art.

  3. Determining Attitude of Object from Needle Map Using Extended Gaussian Image.

    DTIC Science & Technology

    1983-04-01

    D Images," Artificial Intelligence, Vol. 17, August 1981, 285-349. [6] Marr, D., Vision, W.H. Freeman, San Francisco, 1982. [7] Brady, M. ... Witkin, A.P., "Recovering Surface Shape and Orientation from Texture," Artificial Intelligence, Vol. 17, 1982, 17-47. [22] Horn, B.K.P., "SEQUINS and ... AD-R131 617, Determining Attitude of Object from Needle Map Using Extended Gaussian Image (U), Massachusetts Inst. of Tech., Cambridge, Artificial

  4. Augmented reality and haptic interfaces for robot-assisted surgery.

    PubMed

    Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N

    2012-03-01

    Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation. Copyright © 2011 John Wiley & Sons, Ltd.

  5. Space applications of artificial intelligence; 1990 Goddard Conference, Greenbelt, MD, May 1, 2, 1990, Selected Papers

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1990-01-01

    The papers presented at the 1990 Goddard Conference on Space Applications of Artificial Intelligence are given. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The proceedings fall into the following areas: Planning and Scheduling, Fault Monitoring/Diagnosis, Image Processing and Machine Vision, Robotics/Intelligent Control, Development Methodologies, Information Management, and Knowledge Acquisition.

  6. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    PubMed Central

    Indiveri, Giacomo

    2008-01-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors that select and track the position of salient features to multi-chip systems that implement saliency-map-based models of selective attention. PMID:27873818
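
    The WTA selection primitive described above can be caricatured in a few lines of software. This sketch uses a simple mutual-inhibition iteration chosen for illustration; the chips reviewed in the paper implement WTA with analog VLSI circuits, not this difference equation:

```python
import numpy as np

def winner_take_all(inputs, inhibition=0.2, steps=100):
    """Iterate mutual inhibition until only the strongest unit survives.

    Each unit is suppressed in proportion to the summed activity of its
    competitors and clipped at zero; weaker units are driven to zero
    while the winner retains a nonzero activity.
    """
    a = np.asarray(inputs, dtype=float).copy()
    for _ in range(steps):
        others = a.sum() - a                       # competitors' activity
        a = np.maximum(0.0, a - inhibition * others)
    return a

# The middle unit has the strongest input, so it wins the competition.
acts = winner_take_all([1.0, 3.0, 2.0])
winner = int(np.argmax(acts))
```

The selection emerges from the network dynamics rather than an explicit max operation, which is what makes WTA attractive for parallel analog hardware.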

  7. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems.

    PubMed

    Indiveri, Giacomo

    2008-09-03

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors that select and track the position of salient features to multi-chip systems that implement saliency-map-based models of selective attention.

  8. Infrared machine vision system for the automatic detection of olive fruit quality.

    PubMed

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection and pixel intensity values to classify the whole fruit. The detection of defects involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives, with potential for use in offline inspection and in online sorting for defects and the presence of surface damage, easily distinguishing fruits that do not meet minimum quality requirements. Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.
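
    The intensity-based segment-then-classify step can be illustrated with a toy sketch. All threshold values and the synthetic image below are illustrative assumptions, not the paper's parameters or its nonparametric models:

```python
import numpy as np

def classify_olive(nir, fruit_thresh=40, defect_thresh=110, defect_frac=0.02):
    """Toy olive grader: segment the fruit from the dark background by
    intensity, then flag as defective any fruit pixels darker than a
    'healthy tissue' threshold; reject if the defective area fraction
    exceeds a tolerance. (Illustrative thresholds only.)
    """
    fruit = nir > fruit_thresh                  # crude background segmentation
    defect = fruit & (nir < defect_thresh)      # dark spots inside the fruit
    frac = defect.sum() / max(fruit.sum(), 1)   # defective fraction of fruit
    return ("reject" if frac > defect_frac else "accept"), frac

# Synthetic 8-bit NIR image: bright fruit disc with one dark bruise.
img = np.zeros((64, 64), dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 24 ** 2] = 180   # healthy tissue
img[(yy - 28) ** 2 + (xx - 40) ** 2 < 6 ** 2] = 80     # bruise
label, frac = classify_olive(img)
```

Real systems replace the fixed thresholds with models learned from healthy and defective samples, but the pipeline shape (segment fruit, classify pixels, aggregate) is the same.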

  9. Maze learning by a hybrid brain-computer system

    NASA Astrophysics Data System (ADS)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-01

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision-making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show that rule operations conducted by computing components enable a novel hybrid brain-computer system, i.e., ratbots, to exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  10. Maze learning by a hybrid brain-computer system.

    PubMed

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-13

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision-making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show that rule operations conducted by computing components enable a novel hybrid brain-computer system, i.e., ratbots, to exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  11. Maze learning by a hybrid brain-computer system

    PubMed Central

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-01-01

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision-making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show that rule operations conducted by computing components enable a novel hybrid brain-computer system, i.e., ratbots, to exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation. PMID:27619326

  12. Fast and robust generation of feature maps for region-based visual attention.

    PubMed

    Aziz, Muhammad Zaheer; Mertsching, Bärbel

    2008-05-01

    Visual attention is one of the important phenomena in biological vision which can be emulated to achieve greater efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention, in contrast to the late clustering done by contemporary methods. The foundational steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon extended findings from color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of the obtained regions, and saliency is then evaluated using rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over existing techniques is the reusability of the salient regions in high-level machine vision procedures, due to the preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the mainstream of machine vision, and that systems with restricted computing resources, such as mobile robots, can benefit from its advantages.
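
    The rarity criterion for saliency admits a minimal sketch: score each region by how uncommon its quantized feature value is among all regions. The function below is a stand-in assumption for illustration, not the paper's exact measure, which combines five feature channels:

```python
from collections import Counter

def rarity_saliency(region_features):
    """Score each region by the rarity of its quantized feature value.

    Feature values shared by many regions are common, hence less
    salient; a value held by few regions scores close to 1.
    """
    counts = Counter(region_features)
    n = len(region_features)
    return [1.0 - counts[f] / n for f in region_features]

# Five regions: four red, one blue -- the blue region is most salient.
sal = rarity_saliency(["red", "red", "red", "red", "blue"])
```

Because the scores attach to whole regions rather than pixels, the salient items keep their shapes and locations for reuse by later high-level procedures, which is the advantage the abstract highlights.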

  13. Bio-inspired approach for intelligent unattended ground sensors

    NASA Astrophysics Data System (ADS)

    Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre

    2015-05-01

    Improving surveillance capacity over wide zones requires a set of smart, battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast detector of relevant events. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify a huge amount of heterogeneous data in real time thanks to its natively parallel hardware structure. This UGS prototype validates our system approach in laboratory tests. The peripheral analysis module demonstrates a low false alarm rate, whereas the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By processing data locally and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded artificial intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.

  14. Using an Augmented Reality Device as a Distance-based Vision Aid-Promise and Limitations.

    PubMed

    Kinateder, Max; Gualtieri, Justin; Dunn, Matt J; Jarosz, Wojciech; Yang, Xing-Dong; Cooper, Emily A

    2018-06-06

    For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids. The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss. An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the efficacy of the application to improve vision: an exploratory study with four visually impaired participants and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying the location, pose and gesture of a person, identifying objects, and moving around in an unfamiliar space). Participants' accuracy and confidence were compared on these tasks with and without augmented vision, as well as their subjective responses about ease of mobility. In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and to a lesser degree in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group. Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. 
    Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.
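
    The study's translation of "spatial information into high-contrast visual patterns" could plausibly work like the sketch below, which bins a depth image into a few bright grey levels so that nearer surfaces stand out. The mapping, thresholds, and band count are assumptions for illustration; the abstract does not specify the application's actual encoding:

```python
import numpy as np

def depth_to_high_contrast(depth, near=0.5, far=4.0, bands=4):
    """Map a depth image (meters) to a small set of high-contrast grey
    levels: nearer surfaces render brighter, farther surfaces darker.
    Quantizing to a few bands keeps the pattern legible with very
    limited residual vision. (Illustrative parameters only.)
    """
    d = np.clip(depth, near, far)
    level = ((far - d) / (far - near) * (bands - 1)).round()  # 0 = far
    return (level * (255 // (bands - 1))).astype(np.uint8)

# A wall at 3.5 m with a nearer object at 1 m: the object renders
# as a bright region against a dark background.
scene = np.full((4, 4), 3.5)
scene[1:3, 1:3] = 1.0
img = depth_to_high_contrast(scene)
```

Collapsing a continuous depth field into a handful of high-contrast levels is the kind of simplification that makes such displays usable for people with near-complete vision loss.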

  15. Visual Prostheses: The Enabling Technology to Give Sight to the Blind

    PubMed Central

    Maghami, Mohammad Hossein; Sodagar, Amir Masoud; Lashay, Alireza; Riazi-Esfahani, Hamid; Riazi-Esfahani, Mohammad

    2014-01-01

    Millions of patients are either slowly losing their vision or are already blind due to retinal degenerative diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD) or because of accidents or injuries. Employment of artificial means to treat extreme vision impairment has come closer to reality during the past few decades. Currently, many research groups work towards effective solutions to restore a rudimentary sense of vision to the blind. Aside from the efforts being put on replacing damaged parts of the retina by engineered living tissues or microfabricated photoreceptor arrays, implantable electronic microsystems, referred to as visual prostheses, are also sought as promising solutions to restore vision. From a functional point of view, visual prostheses receive image information from the outside world and deliver them to the natural visual system, enabling the subject to receive a meaningful perception of the image. This paper provides an overview of technical design aspects and clinical test results of visual prostheses, highlights past and recent progress in realizing chronic high-resolution visual implants as well as some technical challenges confronted when trying to enhance the functional quality of such devices. PMID:25709777

  16. Recent developments of artificial intelligence in drying of fresh food: A review.

    PubMed

    Sun, Qing; Zhang, Min; Mujumdar, Arun S

    2018-03-01

    Intellectualization is an important direction for the development of drying, and artificial intelligence (AI) technologies have been widely used to solve problems of nonlinear function approximation, pattern detection, data interpretation, optimization, simulation, diagnosis, control, data sorting, clustering, and noise reduction in different food drying technologies, owing to their self-learning ability, adaptability, strong fault tolerance and high degree of robustness in mapping the nonlinear structures of arbitrarily complex and dynamic phenomena. This article presents a comprehensive review of intelligent drying technologies and their applications. The paper starts with an introduction to the basic theory of artificial neural networks (ANNs), fuzzy logic and expert systems. We then summarize AI applications in the modeling, prediction, and optimization of heat and mass transfer, thermodynamic performance parameters, and quality indicators, as well as the physiochemical properties of dried products, in artificial biomimetic technologies (electronic nose, computer vision) and different conventional drying technologies. Furthermore, the opportunities and limitations of AI techniques in drying are outlined to provide more ideas for researchers in this area.

  17. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision

    DTIC Science & Technology

    1993-04-01

    suggesting it occurs in later visual motion processing (long-range or second-order system). [Figure 2. Gamma motion. (a) A light of fixed spatial extent is illuminated then extinguished. (b) The percept is of a light expanding and then ...] ... while smaller, type-B cells provide input to its parvocellular subdivision. From here the magnocellular pathway progresses up through visual cortex area V

  18. On-line dimensional measurement of small components on the eyeglasses assembly line

    NASA Astrophysics Data System (ADS)

    Rosati, G.; Boschetti, G.; Biondi, A.; Rossi, A.

    2009-03-01

    Dimensional measurement of subassemblies at the beginning of the assembly line is a crucial process in the eyeglasses industry, since even small manufacturing errors in the components can lead to very visible defects in the final product. For this reason, all subcomponents of the eyeglass are verified before the assembly process begins, either with 100% inspection or on a statistical basis. Inspection is usually performed by human operators, with high costs and a degree of repeatability that is not always satisfactory. This paper presents a novel on-line measuring system for the dimensional verification of small metallic subassemblies for the eyeglasses industry. The machine vision system proposed, which was designed to be used at the beginning of the assembly line, could also be employed in Statistical Process Control (SPC) by the manufacturer of the subassemblies. The automated system is based on artificial vision and exploits two CCD cameras and an anthropomorphic robot to inspect and manipulate the subcomponents of the eyeglass. Each component is recognized by the first camera in a fairly large workspace, picked up by the robot and placed in the small vision field of the second camera, which performs the measurement; finally, the part is palletized by the robot. The system can easily be taught by the operator, by simply placing the template object in the vision field of the measurement camera (for dimensional data acquisition) and then instructing the robot via the Teaching Control Pendant within the vision field of the first camera (for pick-up transformation acquisition). The major problem we dealt with is that the shape and dimensions of the subassemblies can vary over quite a wide range, while different positionings of the same component can look very similar to one another. For this reason, a specific shape recognition procedure was developed.
    In the paper, the whole system is presented together with initial experimental laboratory results.

  19. Biologically inspired technologies using artificial muscles

    NASA Astrophysics Data System (ADS)

    Bar-Cohen, Yoseph

    2005-01-01

    After billions of years of evolution, nature has developed inventions that work, that are appropriate for the intended tasks, and that last. The evolution of nature led to the introduction of highly effective and power-efficient biological mechanisms that are scalable from microns to many meters in size. Imitating these mechanisms offers enormous potential for the improvement of our lives and the tools we use. Humans have always made efforts to imitate nature, and we are increasingly reaching levels of advancement where it becomes significantly easier to imitate, copy, and adapt biological methods, processes and systems. Some of the biomimetic technologies that have emerged include artificial muscles, artificial intelligence, and artificial vision, to which significant advances in materials science, mechanics, electronics, and computer science have contributed greatly. One of the newest fields of biomimetics is electroactive polymers (EAP), also known as artificial muscles. To take advantage of these materials, efforts are being made worldwide to establish a strong infrastructure, addressing the need for comprehensive analytical modeling of their operating mechanisms and for effective processing and characterization techniques. The field is still in an emerging state and robust materials are not yet readily available; however, in recent years significant progress has been made and commercial products have already started to appear. This paper covers the state of the art and the challenges in making artificial muscles, along with their potential biomimetic applications.

  20. The 1988 Goddard Conference on Space Applications of Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Rash, James (Editor); Hughes, Peter (Editor)

    1988-01-01

    This publication comprises the papers presented at the 1988 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland on May 24, 1988. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: mission operations support, planning and scheduling; fault isolation/diagnosis; image processing and machine vision; data management; modeling and simulation; and development tools/methodologies.

  1. Technical vision for robots

    NASA Astrophysics Data System (ADS)

    1985-01-01

    A new invention by scientists who have copied the structure of the human eye will help replace the human astronomer watching a telescope with a robot. It will be possible to provide technical vision not only for robot astronomers but also for their industrial fellow robots. So far, an artificial eye with dimensions close to those of a human eye discerns only black-and-white images, but a second model of the eye is intended to perceive colors as well. Polymers suited to the role of the coat of the eye, the lens, and the vitreous body were applied. The retina has been replaced with a bundle of very fine glass filaments through which light rays reach photomultipliers, which can be positioned outside the artificial eye. The main challenge is to prevent large losses in the light guide.

  2. Selective attention in multi-chip address-event systems.

    PubMed

    Bartolozzi, Chiara; Indiveri, Giacomo

    2009-01-01

    Selective attention is the strategy used by biological systems to cope with the inherent limits in their available computational resources, in order to efficiently process sensory information. The same strategy can be used in artificial systems that have to process vast amounts of sensory data with limited resources. In this paper we present a neuromorphic VLSI device, the "Selective Attention Chip" (SAC), which can be used to implement these models in multi-chip address-event systems. We also describe a real-time sensory-motor system, which integrates the SAC with a dynamic vision sensor and a robotic actuator. We present experimental results from each component in the system, and demonstrate how the complete system implements a real-time stimulus-driven selective attention model.

  3. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial ... photographs; but by the 1960s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence ... scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision

  4. Event-driven visual attention for the humanoid robot iCub

    PubMed Central

    Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara

    2013-01-01

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. Its performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend to. PMID:24379753

  5. Big Data and Nursing: Implications for the Future.

    PubMed

    Topaz, Maxim; Pruinelli, Lisiane

    2017-01-01

    Big data is becoming increasingly prevalent, and it affects the way nurses learn, practice, conduct research and develop policy. The discipline of nursing needs to maximize the benefits of big data to advance the vision of promoting human health and wellbeing. However, current practicing nurses, educators and nurse scientists often lack the skills and competencies necessary for the meaningful use of big data. Some of the key skills for further development include the ability to mine narrative and structured data for new care or outcome patterns, effective data visualization techniques, and further integration of nursing-sensitive data into artificial intelligence systems for better clinical decision support. We provide growth-path vision recommendations for big data competencies for practicing nurses, nurse educators, researchers, and policy makers to help prepare the next generation of nurses and improve patient outcomes through better-quality connected health.

  6. Close coupling of pre- and post-processing vision stations using inexact algorithms

    NASA Astrophysics Data System (ADS)

    Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.

    1996-02-01

    Work has been reported using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod together with the Piecewise Error Compensation Algorithm incorporating the inexact algorithms, i.e., fuzzy logic, neural networks and neural fuzzy techniques, are developed. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system can produce excellent results, much better than a human operator.

  7. Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding

    NASA Astrophysics Data System (ADS)

    Nilsen, Morgan; Sikström, Fredrik; Christiansson, Anna-Karin; Ancona, Antonio

    2017-11-01

    The automated laser beam butt welding process is sensitive to the positioning of the laser beam with respect to the joint, because a small offset may result in detrimental lack of sidewall fusion. This problem is even more pronounced in the case of narrow gap butt welding, where most commercial automatic joint tracing systems fail to detect the exact position and size of the gap. In this work, a dual vision and spectroscopic sensing approach is proposed to trace narrow gap butt joints during laser welding. The system consists of a camera with suitable illumination and matched optical filters and a fast miniature spectrometer. An image processing algorithm of the camera recordings has been developed in order to estimate the laser spot position relative to the joint position. The spectral emissions from the laser-induced plasma plume have been acquired by the spectrometer, and based on the measurements of the intensities of selected lines of the spectrum, the electron temperature signal has been calculated and correlated to variations of process conditions. The individual performances of these two systems have been experimentally investigated and evaluated offline by data from several welding experiments, where artificial abrupt as well as gradual deviations of the laser beam out of the joint were produced. Results indicate that a combination of the information provided by the vision and spectroscopic systems is beneficial for development of a hybrid sensing system for joint tracing.

  8. Perceptual learning in a non-human primate model of artificial vision

    PubMed Central

    Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.

    2016-01-01

    Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058

  9. Broiler weight estimation based on machine vision and artificial neural network.

    PubMed

    Amraei, S; Abdanan Mehdizadeh, S; Salari, S

    2017-04-01

    1. Machine vision and artificial neural network (ANN) procedures were used to estimate live body weight of broiler chickens in 30 1-d-old broiler chickens reared for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse fitting algorithm was used and the chickens' head and tail removed using the Chan-Vese method. 3. The correlations between the body weight and 6 extracted physical features indicated that there were strong correlations between body weight and 5 features: area, perimeter, convex area, and major and minor axis length. 5. According to statistical analysis there was no significant difference between morning and afternoon data over 42 d. 6. In an attempt to improve the accuracy of live weight approximation, different ANN techniques, including Bayesian regulation, Levenberg-Marquardt, scaled conjugate gradient and gradient descent, were used. Bayesian regulation, with an R² value of 0.98, was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.
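
    The regression step of such a pipeline can be sketched in a few lines. The study's Bayesian-regularized network is not reproduced here; as a hedged illustration, ordinary least squares on synthetic stand-ins for the five correlated features (area, perimeter, convex area, major and minor axis length) shows the shape of the image-features-to-weight mapping. All numbers below are invented for illustration.

```python
import numpy as np

# Synthetic stand-ins for the five image features reported as strongly
# correlated with body weight; units and coefficients are hypothetical.
rng = np.random.default_rng(0)
n = 200
features = rng.uniform(1.0, 10.0, size=(n, 5))      # area, perimeter, ...
true_w = np.array([12.0, 3.5, 8.0, 1.2, 0.7])       # assumed "true" weights
weight = features @ true_w + 30.0 + rng.normal(0, 0.5, n)  # grams

# Ordinary least squares with an intercept column (a linear stand-in for the
# ANN regressors compared in the paper).
X = np.hstack([features, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)

pred = X @ coef
errors = np.abs(pred - weight)
print(np.mean(errors < 50))   # fraction of birds within 50 g, cf. the abstract
```

On this synthetic data the fit is nearly exact, so all predictions land within the 50 g band the abstract reports for the real system.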

  10. Design and control of an embedded vision guided robotic fish with multiple control surfaces.

    PubMed

    Yu, Junzhi; Wang, Kai; Tan, Min; Zhang, Jianwei

    2014-01-01

    This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to those of the swimming robot propelled by a single control surface.
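
    A CPG of the kind this abstract describes can be sketched as a chain of coupled phase oscillators: coupling pulls each oscillator toward a fixed phase lag behind its neighbour, producing the travelling wave that drives the fins. This is a generic textbook formulation, not the authors' exact model; the frequency, coupling gain and phase lag are assumed values.

```python
import math

def simulate_cpg(n_osc=4, freq=1.0, phase_lag=math.pi / 4,
                 coupling=4.0, dt=0.002, steps=5000):
    """Euler-integrate a chain of phase oscillators; each oscillator is
    pulled toward lagging its predecessor by `phase_lag` radians."""
    phases = [0.0] * n_osc
    for _ in range(steps):
        new = []
        for i, th in enumerate(phases):
            dth = 2 * math.pi * freq
            if i > 0:  # couple to the previous oscillator in the chain
                dth += coupling * math.sin(phases[i - 1] - th - phase_lag)
            new.append(th + dth * dt)
        phases = new
    return phases

phases = simulate_cpg()
# After convergence, successive oscillators settle near the imposed phase lag,
# i.e. a travelling wave from head to tail. Fin angles would then be taken as
# amplitude * sin(phase) per joint.
lags = [(phases[i] - phases[i + 1]) % (2 * math.pi) for i in range(3)]
print([round(l, 2) for l in lags])
```

Steering in such schemes is typically done by modulating per-oscillator amplitudes or offsets from the vision feedback, which is consistent with (though more generic than) the feedback loop the abstract describes.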

  11. Design and Control of an Embedded Vision Guided Robotic Fish with Multiple Control Surfaces

    PubMed Central

    Wang, Kai; Tan, Min; Zhang, Jianwei

    2014-01-01

    This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to those of the swimming robot propelled by a single control surface. PMID:24688413

  12. Parallel Algorithms for Computer Vision.

    DTIC Science & Technology

    1989-01-01

    Artificial Intelligence, Tokyo, 1979; IEEE Trans. Pattern Analysis and Machine Intelligence, 6, 1984. Kirkpatrick, S., C.D. Gelatt, Jr. and M.P. Vecchi... Poggio, T., Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, Massachusetts 02139. January 1989. Report ETL-0529, contract DACA76-85-C-0010, Unclassified.

  13. Cohort: critical science

    NASA Astrophysics Data System (ADS)

    Digney, Bruce L.

    2007-04-01

    Unmanned vehicle systems are an attractive technology for the military, but their promises have remained largely undelivered. There currently exist fielded remote-controlled UGVs and high-altitude UAVs whose benefits are based on standoff in low-complexity environments, with control reaction time requirements low enough to allow for teleoperation. While effective within their limited operational niche, such systems do not meet the vision of future military UxV scenarios. Such scenarios envision unmanned vehicles operating effectively in complex environments and situations with high levels of independence and effective coordination with other machines and humans, pursuing high-level, changing and sometimes conflicting goals. While these aims are clearly ambitious, they do provide necessary targets and inspiration, with hopes of fielding useful semi-autonomous unmanned systems in the near term. Autonomy involves many fields of research, including machine vision, artificial intelligence, control theory, machine learning and distributed systems, all of which are intertwined and share the goal of creating more versatile, broadly applicable algorithms. Cohort is a major Applied Research Program (ARP) led by Defence R&D Canada (DRDC) Suffield, and its aim is to develop coordinated teams of unmanned vehicles (UxVs) for urban environments. This paper will discuss the critical science being addressed by DRDC in developing semi-autonomous systems.

  14. The challenges of informatics in synthetic biology: from biomolecular networks to artificial organisms

    PubMed Central

    Ramoni, Marco F.

    2010-01-01

    The field of synthetic biology holds an inspiring vision for the future; it integrates computational analysis, biological data and the systems engineering paradigm in the design of new biological machines and systems. These biological machines are built from basic biomolecular components analogous to electrical devices, and the information flow among these components requires the augmentation of biological insight with the power of a formal approach to information management. Here we review the informatics challenges in synthetic biology along three dimensions: in silico, in vitro and in vivo. First, we describe the state of the art of in silico support of synthetic biology, from the specific data exchange formats, to the most popular software platforms and algorithms. Next, we cast in vitro synthetic biology in terms of information flow, and discuss genetic fidelity in DNA manipulation, development strategies of biological parts and the regulation of biomolecular networks. Finally, we explore how the engineering chassis can manipulate biological circuitries in vivo to give rise to future artificial organisms. PMID:19906839

  15. Comparison of objective optical quality measured by double-pass aberrometry in patients with moderate dry eye: Normal saline vs. artificial tears: A pilot study.

    PubMed

    Vandermeer, G; Chamy, Y; Pisella, P-J

    2018-02-01

    Dry eye is defined by a tear film instability resulting in variable but systematic fluctuations in quality of vision. Variability in optical quality can be demonstrated using a double pass aberrometer such as the Optical Quality Analyzing System, Visiometrics (OQAS). The goal of this work is to compare fluctuations in objective quality of vision measured by OQAS between treatment with normal saline eye drops and treatment with carmellose 0.5% and hyaluronic acid 0.1% (Optive Fusion [OF], Allergan) in patients with moderate dry eye syndrome. Optical quality was measured by evaluating the variations in the Optical Scattering Index (OSI) over 20 seconds using the OQAS. Inclusion criteria were dry eye syndrome with an ocular surface disease index (OSDI) score >23 treated only with artificial tears. The patients were their own controls: OF in one eye and normal saline in the fellow eye. The choice of the subject eye and control eye was determined in a randomized fashion. OSI variations were measured in each eye before instillation, 5 minutes and 2 hours after instillation. The primary endpoint was OSI fluctuation over 20 seconds of measurement. Secondary endpoints were the number of blinks and patient preference (preferred eye). Preliminary results were obtained on 19 patients. Average OSDI score was 36.8. Visual acuity was 10/10 with no significant difference between the two eyes. Prior to instillation, there was no significant difference between "normal saline" and "OF" eyes in terms of OSI, OSI variability or number of blinks. In the normal saline eye, there were no significant variations in mean OSI, OSI variability, OSI slope, or number of blinks. However, in the "OF" eye, there was a significant variation between initial and 2-hour OSI variability (0.363 versus 0.204, P<0.05), the average slope of OSI (0.04 versus 0.01, P<0.05) and the number of blinks (4.2 versus 2.8, P<0.05). 
Among the patients, 65% preferred the OF eye, 24% did not have a preference, and 11% preferred the normal saline eye. Objective quality of vision measured by OQAS is an interesting parameter for evaluating the effectiveness of a lacrimal substitute. The purpose of artificial tears is, among other things, to provide comfort and a reduction of dry eye symptoms such as poor quality of vision. This study demonstrates that 0.5% carmellose and 0.1% hyaluronic acid allowed better stabilization of the tear film and thus a significant improvement in the quality of vision compared to normal saline. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  16. [Comparison of objective optical quality measured by double-pass aberrometry in patients with moderate dry eye: Normal saline vs. artificial tears: A pilot study].

    PubMed

    Vandermeer, G; Chamy, Y; Pisella, P-J

    2018-03-01

    Dry eye is defined by a tear film instability resulting in variable but systematic fluctuations in the quality of vision. Variability in optical quality can be demonstrated using a double pass aberrometer such as the OQAS (Optical Quality Analyzing System, Visiometrics). The goal of this work is to compare fluctuations in objective quality of vision measured by OQAS between treatment with normal saline eye drops and treatment with carmellose 0.5% and hyaluronic acid 0.1% (Optive Fusion [OF], Allergan) in patients with moderate dry eye syndrome. Optical quality was measured by evaluating the variations in the Optical Scattering Index (OSI) over 20 seconds using the OQAS. Inclusion criteria were dry eye syndrome with an Ocular Surface Disease Index (OSDI) score > 23 treated only with artificial tears. The patients were their own controls: OF in one eye and normal saline in the fellow eye. The choice of the subject eye and control eye was determined in a randomized fashion. OSI variations were measured in each eye before instillation, 5 minutes and 2 hours after instillation. The primary endpoint was OSI fluctuation over 20 seconds of measurement. Secondary endpoints were the number of blinks and patient's preference (preferred eye). Preliminary results were obtained on 19 patients. Average OSDI score was 36.8. Visual acuity was 10/10 with no significant difference between the two eyes. Prior to instillation, there was no significant difference between "normal saline" and "OF" eyes in terms of OSI, OSI variability or number of blinks. In the normal saline eye, there was no significant variation in mean OSI, OSI variability, OSI slope, or number of blinks. However, in the "OF" eye, there was a significant variation between initial and 2-hour OSI variability (0.363 versus 0.204; P<0.05), the average slope of OSI (0.04 versus 0.01; P<0.05) and the number of blinks (4.2 versus 2.8; P<0.05). 
Sixty-five percent of patients preferred the OF eye, 24% did not have a preference, and 11% preferred the normal saline eye. Objective quality of vision measured by OQAS is an interesting parameter for evaluating the effectiveness of a lacrimal substitute. The purpose of artificial tears is, among other things, to provide comfort and a reduction of dry eye symptoms such as poor quality of vision. This study demonstrates that 0.5% carmellose and 0.1% hyaluronic acid allowed better stabilization of the tear film and thus a significant improvement in the quality of vision compared to normal saline. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  17. Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people

    PubMed Central

    Aguilar, Carlos; Castet, Eric

    2017-01-01

    People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Due to the restricted field of view of EVES, the need for magnification is conflicting with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining global viewing of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television systems)—conditions 2 and 3. In these two conditions, magnification was uniformly applied to the whole text without any possibility to specifically select a region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine experimental one-hour sessions. Reading speed was measured and constituted the main dependent variable to compare the three conditions. All subjects were able to use condition 1 and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. This proof of concept study suggests that the principles underlying the gaze-controlled enhanced system might be further developed and fruitfully incorporated in different kinds of EVES for low vision reading. PMID:28380004
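
    Rendering a gaze-contingent 10° scotoma requires converting visual angle to screen pixels. The sketch below shows that conversion under assumed viewing conditions (57 cm viewing distance and 40 px/cm screen density are hypothetical values, not taken from the study); 57 cm is a convenient choice because at that distance 1 cm on screen subtends roughly 1° of visual angle.

```python
import math

def scotoma_radius_px(diameter_deg=10.0, view_dist_cm=57.0, px_per_cm=40.0):
    """Radius in pixels of a mask covering `diameter_deg` of visual angle,
    centred on the current gaze sample."""
    radius_cm = view_dist_cm * math.tan(math.radians(diameter_deg / 2.0))
    return radius_cm * px_per_cm

r = scotoma_radius_px()
print(round(r))  # ~200 px under these assumed parameters
```

In a real gaze-contingent display this radius would define the mask drawn at each eye-tracker sample, so end-to-end latency, not the geometry, is the hard engineering problem.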

  18. Building brains for bodies

    NASA Technical Reports Server (NTRS)

    Brooks, Rodney Allen; Stein, Lynn Andrea

    1994-01-01

    We describe a project to capitalize on newly available levels of computational resources in order to understand human cognition. We will build an integrated physical system including vision, sound input and output, and dextrous manipulation, all controlled by a continuously operating large scale parallel MIMD computer. The resulting system will learn to 'think' by building on its bodily experiences to accomplish progressively more abstract tasks. Past experience suggests that in attempting to build such an integrated system we will have to fundamentally change the way artificial intelligence, cognitive science, linguistics, and philosophy think about the organization of intelligence. We expect to be able to better reconcile the theories that will be developed with current work in neuroscience.

  19. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) system has been one of the major interests among researchers in the field of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human computer interfacing and surveillance system for monitoring human behaviour as well as analysis of biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA, its taxonomy, including camera types, camera calibration and camera configuration. The review focused on evaluating the camera system consideration of the HMA system specifically for biomedical applications. This review is important as it provides guidelines and recommendation for researchers and practitioners in selecting a camera system of the HMA system for biomedical applications.

  20. 1988 Goddard Conference on Space Applications of Artificial Intelligence, Greenbelt, MD, May 24, 1988, Proceedings

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1988-01-01

    This publication comprises the papers presented at the 1988 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland on May 24, 1988. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: mission operations support, planning and scheduling; fault isolation/diagnosis; image processing and machine vision; data management; modeling and simulation; and development tools methodologies.

  1. Parametric dense stereovision implementation on a system-on chip (SoC).

    PubMed

    Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L

    2012-01-01

    This paper proposes a novel hardware implementation of a dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed a maximum number of stereo correspondences, introducing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides great performance and efficiency, with a scalable architecture available for many different situations, addressing real-time processing of stereo image flow. Using double buffering techniques properly combined with pipelined processing, the use of reconfigurable hardware achieves a parametrisable SoC which gives the designer the opportunity to decide its right dimension and features. The proposed architecture does not need any external memory because the processing is done as the image flow arrives. Our SoC provides 3D data directly without the storage of whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of 3D data using minimum resources. Configurable parameters may be controlled by later/parallel stages of the vision algorithm executed on an embedded processor. Considering a hardware FPGA clock of 100 MHz, image flows up to 50 frames per second (fps) of dense stereo maps of more than 30,000 depth points could be obtained considering 2 Mpix images, with a minimum initial latency. The implementation of computer vision algorithms on reconfigurable hardware, especially low-level processing, opens up the prospect of its use in autonomous systems, and it can act as a coprocessor to reconstruct 3D images with high density information in real time.
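
    The quoted frame rate follows from simple pixel-rate arithmetic, assuming the pipelined datapath consumes one pixel per clock cycle (an assumption consistent with, but not stated in, the abstract):

```python
# Back-of-envelope check of the throughput claim: a fully pipelined datapath
# that accepts one pixel per clock turns the clock rate directly into a
# pixel rate, and the frame rate is pixel rate divided by frame size.
clock_hz = 100e6              # FPGA clock from the abstract
pixels_per_frame = 2e6        # 2 Mpix images
fps = clock_hz / pixels_per_frame
print(fps)                    # 50.0 frames per second, matching the abstract
```

This is also why the design needs no external frame buffer: at one pixel per clock the result stream keeps pace with the input stream, so only line-sized on-chip buffers are required.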

  2. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

    For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially-peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches and is more general as it applies not only to the ground plane.
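
    The parabola fit the abstract identifies as the source of pixel-locking is a three-point interpolation around the best integer disparity. The sketch below is the conventional formulation (sign convention and degeneracy guard are standard practice, not taken from this paper):

```python
def subpixel_parabola(c_prev, c_best, c_next):
    """Fit a parabola through the matching costs at disparities d-1, d, d+1
    and return the sub-pixel offset of its minimum, in (-0.5, 0.5)."""
    denom = c_prev - 2.0 * c_best + c_next
    if denom <= 0:  # no curvature: keep the integer estimate
        return 0.0
    return 0.5 * (c_prev - c_next) / denom

# A symmetric cost valley gives offset 0; asymmetry shifts the estimate.
print(subpixel_parabola(4.0, 1.0, 4.0))  # 0.0
print(subpixel_parabola(4.0, 1.0, 2.0))  # 0.25 (cost lower on the right)
```

Pixel-locking arises because this mapping from cost asymmetry to offset is biased toward 0, so sub-pixel estimates cluster at integer disparities, which is exactly the peaked histogram the paper sets out to flatten.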

  3. Vision Algorithm for the Solar Aspect System of the HEROES Mission

    NASA Technical Reports Server (NTRS)

    Cramer, Alexander

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.

  4. Vision Algorithm for the Solar Aspect System of the HEROES Mission

    NASA Technical Reports Server (NTRS)

    Cramer, Alexander; Christe, Steven; Shih, Albert

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an Average Intersection method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
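
    The final "simple least squares fit" step can be illustrated with a toy registration: given fiducials identified in the image and their known positions on the plate, solve for the mapping between the two frames. The parameterization here (uniform scale plus translation) and the fiducial coordinates are assumptions for illustration, not the mission's actual plate geometry.

```python
import numpy as np

# Known fiducial layout on the plate (arbitrary units) and synthetic
# "detected" image positions generated from a known transform.
plate = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
scale, shift = 2.0, np.array([0.3, -0.2])   # synthetic ground truth
image = scale * plate + shift

# Unknowns x = (scale, tx, ty); each fiducial contributes two linear
# equations: s*px + tx = ix and s*py + ty = iy.
A = np.zeros((2 * len(plate), 3))
b = image.reshape(-1)
A[0::2, 0] = plate[:, 0]; A[0::2, 1] = 1.0   # x-equations
A[1::2, 0] = plate[:, 1]; A[1::2, 2] = 1.0   # y-equations
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 6))  # recovers scale 2.0 and shift (0.3, -0.2)
```

With noisy detections the same overdetermined system averages the error across fiducials, which is what makes the least-squares step robust enough for a pointing solution.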

  5. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision.

    PubMed

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-22

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output piezoelectric voltage, and the outputting signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and electricity power. The reliability is demonstrated over 200 light on-off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics human retina may have potential application in vision substitution.

  6. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision

    NASA Astrophysics Data System (ADS)

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-01

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output piezoelectric voltage, and the outputting signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and electricity power. The reliability is demonstrated over 200 light on–off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics human retina may have potential application in vision substitution.

  7. Artificial photosynthesis for solar fuels.

    PubMed

    Styring, Stenbjörn

    2012-01-01

    This contribution was presented as the closing lecture at Faraday Discussion 155 on artificial photosynthesis, held in Edinburgh, Scotland, September 5-7, 2011. The world needs new, environmentally friendly and renewable fuels to replace fossil fuels. The fuel must be made from cheap and "endless" resources that are available everywhere. The new research area of solar fuels aims to meet this demand. This paper discusses why we need a solar fuel and why electricity is not enough; it proposes solar energy as the major renewable energy source to draw from. The scientific field concerning artificial photosynthesis is expanding rapidly, and most of the different scientific visions for solar fuels are briefly overviewed, as are research strategies and the development of artificial photosynthesis research to produce solar fuels. Some conceptual aspects of research for artificial photosynthesis are discussed in closer detail.

  8. Miniature curved artificial compound eyes

    PubMed Central

    Floreano, Dario; Pericet-Camara, Ramon; Viollet, Stéphane; Ruffier, Franck; Brückner, Andreas; Leitel, Robert; Buss, Wolfgang; Menouni, Mohsine; Expert, Fabien; Juston, Raphaël; Dobrzynski, Michal Karol; L’Eplattenier, Geraud; Recktenwald, Fabian; Mallot, Hanspeter A.; Franceschini, Nicolas

    2013-01-01

    In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes, but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely, a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board that are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. This design method opens up additional vistas for a broad range of applications in which wide field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories. PMID:23690574

  9. Prevent Eye Damage: Protect Yourself from UV Radiation

    MedlinePlus

    ... vision. Snow Blindness (Photokeratitis): A temporary but painful burn to the cornea caused by a day at the beach without sunglasses; reflections off of snow, water, or concrete; or exposure to artificial light sources such as ...

  10. The potential of computer vision, optical backscattering parameters and artificial neural network modelling in monitoring the shrinkage of sweet potato (Ipomoea batatas L.) during drying.

    PubMed

    Onwude, Daniel I; Hashim, Norhashila; Abdan, Khalina; Janius, Rimfiel; Chen, Guangnan

    2018-03-01

    Drying is a method used to preserve agricultural crops. During the drying of products with high moisture content, structural changes in shape, volume, area, density and porosity occur. These changes could affect the final quality of the dried product and also the effective design of drying equipment. Therefore, this study investigated a novel approach to monitoring and predicting the shrinkage of sweet potato during drying. Drying experiments were conducted at temperatures of 50-70 °C and sample thicknesses of 2-6 mm. The volume and surface area obtained from camera vision, and the perimeter and illuminated area from backscattered optical images, were analysed and used to evaluate the shrinkage of sweet potato during drying. The relationship between dimensionless moisture content and shrinkage of sweet potato in terms of volume, surface area, perimeter and illuminated area was found to be linearly correlated. The results also demonstrated that the shrinkage of sweet potato based on computer vision and backscattered optical parameters is affected by the product thickness, drying temperature and drying time. A multilayer perceptron (MLP) artificial neural network, with an input layer of three cells, two hidden layers (18 neurons), and an output layer of five cells, was used to develop a model that can monitor, control and predict the shrinkage parameters and moisture content of sweet potato slices under different drying conditions. The developed ANN model satisfactorily predicted the shrinkage and dimensionless moisture content of sweet potato with a correlation coefficient greater than 0.95. Combined computer vision, laser-light backscattering imaging and artificial neural networks can be used as a non-destructive, rapid and easily adaptable technique for in-line monitoring, predicting and controlling the shrinkage and moisture changes of food and agricultural crops during drying. © 2017 Society of Chemical Industry.
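The reported network topology (three input cells, two hidden layers of 18 neurons, five output cells) can be sketched as a plain feed-forward pass. This is a minimal sketch only: the weights below are random placeholders, not the trained model, and the interpretation of inputs and outputs is an assumption based on the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes as reported: [inputs, hidden1, hidden2, outputs].
sizes = [3, 18, 18, 5]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def mlp_forward(x):
    """Propagate normalized inputs (assumed: temperature, thickness,
    drying time) to outputs (assumed: shrinkage parameters + moisture)."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a

# Hypothetical drying conditions, scaled to [0, 1] before the network.
y = mlp_forward([0.6, 0.4, 0.5])
print(y.shape)  # (5,)
```

In practice the weights would be fitted to the drying-experiment data (e.g., by backpropagation) rather than drawn at random.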

  11. Electrophysiological studies of the feasibility of suprachoroidal-transretinal stimulation for artificial vision in normal and RCS rats.

    PubMed

    Kanda, Hiroyuki; Morimoto, Takeshi; Fujikado, Takashi; Tano, Yasuo; Fukuda, Yutaka; Sawai, Hajime

    2004-02-01

    Assessment of a novel method of retinal stimulation, known as suprachoroidal-transretinal stimulation (STS), which was designed to minimize insult to the retina by implantation of stimulating electrodes for artificial vision. In 17 normal hooded rats and 12 Royal College of Surgeons (RCS) rats, a small area of the retina was focally stimulated with electric currents through an anode placed on the fenestrated sclera and a cathode inserted into the vitreous chamber. Evoked potentials (EPs) in response to STS were recorded from the surface of the superior colliculus (SC) with a silver-ball electrode, and their physiological properties and localization were studied. In both normal and RCS rats, STS elicited triphasic EPs that were vastly diminished by changing polarity of stimulating electrodes and abolished by transecting the optic nerve. The threshold intensity (C) of the EP response to STS was approximately 7.2 +/- 2.8 nC in normal and 12.9 +/- 7.7 nC in RCS rats. The responses to minimal STS were localized in an area on the SC surface measuring 0.12 +/- 0.07 mm(2) in normal rats and 0.24 +/- 0.12 mm(2) in RCS rats. The responsive area corresponded retinotopically to the retinal region immediately beneath the anodic stimulating electrode. STS is less invasive in the retina than stimulation through epiretinal or subretinal implants. STS can generate focal excitation in retinal ganglion cells in normal animals and in those with degenerated photoreceptors, which suggests that this method of retinal stimulation is suitable for artificial vision.

  12. Metal surface corrosion grade estimation from single image

    NASA Astrophysics Data System (ADS)

    Chen, Yijun; Qi, Lin; Sun, Huyuan; Fan, Hao; Dong, Junyu

    2018-04-01

    Metal corrosion can cause many problems; assessing the grade of metal corrosion quickly and effectively, and performing timely remediation, is a very important issue. Typically, this is done by trained surveyors at great cost. Assisting them in the inspection process with computer vision and artificial intelligence would decrease the inspection cost. In this paper, we propose a dataset of metal surface corrosion for computer vision detection and present a comparison between standard computer vision techniques using OpenCV and a deep learning method for automatic metal surface corrosion grade estimation from a single image on this dataset. The test was performed by classifying images and calculating the accuracy of the two different approaches.
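As a hedged illustration of the "standard computer vision techniques" side of such a comparison (the authors' actual features, dataset and models are not specified here), a minimal hand-crafted pipeline might pair a mean-color feature with a nearest-centroid classifier on synthetic patches:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_color(img):
    # Feature: average RGB value over the patch.
    return img.reshape(-1, 3).mean(axis=0)

def make_patch(base_rgb):
    # Synthetic 16x16 patch around a base color, with noise.
    noise = rng.normal(0, 10, (16, 16, 3))
    return np.clip(np.asarray(base_rgb, float) + noise, 0, 255)

# Two illustrative "grades": clean steel (grey) vs. heavy rust (red-brown).
train = {0: [make_patch((128, 128, 128)) for _ in range(20)],
         1: [make_patch((150, 75, 40)) for _ in range(20)]}
centroids = {g: np.mean([mean_color(p) for p in ps], axis=0)
             for g, ps in train.items()}

def classify(img):
    f = mean_color(img)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

test_imgs = [make_patch((128, 128, 128)), make_patch((150, 75, 40))]
preds = [classify(im) for im in test_imgs]
accuracy = np.mean([p == t for p, t in zip(preds, [0, 1])])
print(accuracy)
```

A deep learning baseline would replace the hand-crafted feature and classifier with a trained convolutional network, which is the trade-off the paper evaluates.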

  13. Perception for mobile robot navigation: A survey of the state of the art

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    1994-01-01

    In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state-of-the-art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
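The landmark-triangulation class of approaches can be sketched in its simplest 2-D form: if the robot knows its heading and measures world-frame bearings to two artificial landmarks at known positions, its position follows from a small linear system. The coordinates and angles below are invented test values, not from any survey in the paper.

```python
import numpy as np

def triangulate(l1, l2, theta1, theta2):
    """Robot position r satisfies li = r + di * u(thetai) for unknown
    ranges di, so d1*u1 - d2*u2 = l1 - l2 is solved for d1, d2."""
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([u1, -u2])
    d = np.linalg.solve(A, np.asarray(l1, float) - np.asarray(l2, float))
    return np.asarray(l1, float) - d[0] * u1

# Ground truth: robot at (2, 1), landmarks at (5, 5) and (0, 4).
r_true = np.array([2.0, 1.0])
l1, l2 = np.array([5.0, 5.0]), np.array([0.0, 4.0])
theta1 = np.arctan2(*(l1 - r_true)[::-1])  # arctan2(dy, dx)
theta2 = np.arctan2(*(l2 - r_true)[::-1])
r_est = triangulate(l1, l2, theta1, theta2)
print(r_est)  # close to [2. 1.]
```

Real systems must also estimate heading (e.g., three-landmark resection) and cope with bearing noise, typically via least squares over more than two landmarks.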

  14. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  15. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  16. The JPL/KSC telerobotic inspection demonstration

    NASA Technical Reports Server (NTRS)

    Mittman, David; Bon, Bruce; Collins, Carol; Fleischer, Gerry; Litwin, Todd; Morrison, Jack; Omeara, Jacquie; Peters, Stephen; Brogdon, John; Humeniuk, Bob

    1990-01-01

    An ASEA IRB90 robotic manipulator with attached inspection cameras was moved through a Space Shuttle Payload Assist Module (PAM) Cradle under computer control. The Operator and Operator Control Station, including graphics simulation, gross-motion spatial planning, and machine vision processing, were located at JPL. The Safety and Support personnel, PAM Cradle, IRB90, and image acquisition system were stationed at the Kennedy Space Center (KSC). Images captured at KSC were used both for processing by a machine vision system at JPL and for inspection by the JPL Operator. The system found collision-free paths through the PAM Cradle, demonstrated accurate knowledge of the location of both objects of interest and obstacles, and operated with a communication delay of two seconds. Safe operation of the IRB90 near Shuttle flight hardware was achieved both through the use of a gross-motion spatial planner developed at JPL using artificial intelligence techniques, and through infrared beams and pressure-sensitive strips mounted to the critical surfaces of the flight hardware at KSC. The demonstration showed that telerobotics is effective for real tasks, safe for personnel and hardware, and highly productive and reliable for Shuttle payload operations and Space Station external operations.

  17. Fifth Conference on Artificial Intelligence for Space Applications

    NASA Technical Reports Server (NTRS)

    Odell, Steve L. (Compiler)

    1990-01-01

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration.

  18. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis

    NASA Astrophysics Data System (ADS)

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Objective. Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. Approach. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against a major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. Main results. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but resulted in reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. Significance. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  19. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis.

    PubMed

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against a major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but resulted in reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  20. A nonlinear contextually aware prompting system (N-CAPS) to assist workers with intellectual and developmental disabilities to perform factory assembly tasks: system overview and pilot testing.

    PubMed

    Mihailidis, A; Melonis, M; Keyfitz, R; Lanning, M; Van Vuuren, S; Bodine, C

    2016-10-01

    This paper presents a new cognitive assistive technology, the nonlinear contextually aware prompting system (N-CAPS), that uses advanced sensing and artificial intelligence to monitor and provide assistance to workers with cognitive disabilities during a factory assembly task. The N-CAPS system was designed through the application of various computer vision and artificial intelligence algorithms that allow the system to track a user during a specific assembly task and then provide verbal and visual prompts to the worker as needed. A pilot study was completed with the N-CAPS solution in order to investigate whether it was an appropriate intervention. Four participants completed the required assembly task five different times using the N-CAPS system. The participants completed all of the trials that they attempted, with 85.7% of the steps completed without assistance from the job coach. Of the 85.7% of steps completed independently, 32.5% were completed in response to prompts given by N-CAPS. Overall system accuracy was 83.3%, the overall sensitivity was 86.2% and the overall specificity was 82.4%. The results from the study were positive in that they showed that this type of technology does have merit with this population. Implications for Rehabilitation: It provides a concise summary of the importance of work in the lives of people with intellectual disabilities and how technology can support this life goal. It describes the first artificially intelligent system designed to support workers with intellectual disabilities. It provides evidence that individuals with intellectual disabilities can perform a work task in response to technology.

  1. Automatic Mexican sign language and digits recognition using normalized central moments

    NASA Astrophysics Data System (ADS)

    Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina

    2016-09-01

    This work presents a framework for automatic Mexican sign language and digits recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured with a digital IP camera, four LED reflectors and a green background in order to reduce computational costs and avoid the need for special gloves. 42 normalized central moments are computed per frame and used in a Multi-Layer Perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
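The normalized central moments used as features can be sketched directly from their definition, eta_pq = mu_pq / mu_00^(1 + (p+q)/2), which is invariant to translation and scale of a silhouette. The specific 42-moment feature set and camera pipeline of the paper are not reproduced here; the "hand" below is just a rectangle.

```python
import numpy as np

def normalized_central_moment(img, p, q):
    """Compute eta_pq for a greyscale/binary image array."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()                                  # zeroth raw moment
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
    mu_pq = ((x - xbar) ** p * (y - ybar) ** q * img).sum()
    return mu_pq / m00 ** (1 + (p + q) / 2)          # scale normalization

# A small off-center rectangle as a stand-in for a hand silhouette.
img = np.zeros((32, 32))
img[5:15, 8:24] = 1.0
shifted = np.roll(np.roll(img, 6, axis=0), 3, axis=1)
print(normalized_central_moment(img, 2, 0))
print(normalized_central_moment(shifted, 2, 0))  # same value: translation-invariant
```

A feature vector of such moments for several (p, q) orders would then feed the Multi-Layer Perceptron classifier.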

  2. Perceptual drifts of real and artificial limbs in the rubber hand illusion.

    PubMed

    Fuchs, Xaver; Riemer, Martin; Diers, Martin; Flor, Herta; Trojan, Jörg

    2016-04-22

    In the rubber hand illusion (RHI), transient embodiment of an artificial hand is induced. An often-used indicator for this effect is the "proprioceptive drift", a localization bias of the real hand towards the artificial hand. This measure suggests that the real hand is attracted by the artificial hand. Principles of multisensory integration, however, rather suggest that conflicting sensory information is combined in a "compromise" fashion and that hands should rather be attracted towards each other. Here, we used a new variant of the RHI paradigm in which participants pointed at the artificial hand. Our results indicate that the perceived positions of the real and artificial hand converge towards each other: in addition to the well-known drift of the real hand towards the artificial hand, we also found an opposite drift of the artificial hand towards the real hand. Our results contradict the notion of perceptual substitution of the real hand by the artificial hand. Rather, they are in line with the view that vision and proprioception are fused into an intermediate percept. This is further evidence that the perception of our body is a flexible multisensory construction that is based on integration principles.

  3. Intelligent Vision On The SM9O Mini-Computer Basis And Applications

    NASA Astrophysics Data System (ADS)

    Hawryszkiw, J.

    1985-02-01

    A distinction has to be made between image processing and vision. Image processing finds its roots in the strong tradition of linear signal processing and promotes geometrical transform techniques, such as filtering, compression, and restoration. Its purpose is to transform an image so that a human observer can easily extract information significant to him, for example edges after a gradient operator, or a specific direction after a directional filtering operation. Image processing consists, in fact, of a set of local or global space-time transforms; the interpretation of the final image is done by the human observer. The purpose of vision is to extract the semantic content of the image. The machine can then understand that content and run a process of decision, which turns into an action. Thus, intelligent vision depends on image processing, pattern recognition, and artificial intelligence.

  4. Quantum vision in three dimensions

    NASA Astrophysics Data System (ADS)

    Roth, Yehuda

    We present four models for describing 3-D vision. Similar to the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of his selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. In technology, the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. In artificial intelligence, where the desire is to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.

  5. Ultraviolet-Blocking Lenses Protect, Enhance Vision

    NASA Technical Reports Server (NTRS)

    2010-01-01

    To combat the harmful properties of light in space, as well as the artificial radiation produced during laser and welding work, Jet Propulsion Laboratory (JPL) scientists developed a lens capable of absorbing, filtering, and scattering the dangerous light without obstructing vision. SunTiger Inc. (now Eagle Eyes Optics) of Calabasas, California, was formed to market a full line of sunglasses based on the JPL discovery, which promised 100-percent elimination of harmful wavelengths and enhanced visual clarity. The technology was recently inducted into the Space Technology Hall of Fame.

  6. Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress

    PubMed Central

    Fu, Longwen; Liu, Zuoyi

    2018-01-01

    Video-tracking-based biological early warning systems have achieved great progress with advanced computer vision and machine learning methods. The ability to video-track multiple biological organisms has improved greatly in recent years. Video-based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment. Investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developing machine learning and artificial intelligence. In this paper, we introduce the fundamentals of video tracking and present pioneering work on the precise tracking of groups of individuals in 2D and 3D space. Technical and practical issues encountered in video tracking are explained. Subsequently, toxic analysis based on fish behavioral data is summarized. Frequently used computational methods and machine learning are explained with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, the advantages of recently developed deep learning approaches to toxicity prediction are presented. PMID:29849612

  7. Neuro-inspired smart image sensor: analog Hmax implementation

    NASA Astrophysics Data System (ADS)

    Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman

    2015-03-01

    The neuro-inspired vision approach, based on models from biology, makes it possible to reduce computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes areas V1, V2 and V4. From the computational point of view, V1 corresponds to the area of directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) that embed such processing, we studied and fabricated, in 0.35 μm CMOS technology, prototypes of two image sensors that implement the V1 and V2 processing of the Hmax model.

  8. Vision Algorithm for the Solar Aspect System of the High Energy Replicated Optics to Explore the Sun Mission

    NASA Technical Reports Server (NTRS)

    Cramer, Alexander Krishnan

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high-altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine the relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "average intersection" method, fiducial detection by a matched-filter approach, and identification with an ad hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, and otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20-arcsecond requirement.
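The matched-filter step for fiducial detection can be sketched as cross-correlation of the image with a zero-mean cross-shaped template, taking the strongest response as a fiducial center. The template size and test image below are invented; the flight algorithm additionally performs circle detection, identification and registration.

```python
import numpy as np

def cross_template(size=7):
    """Binary cross-shaped fiducial pattern."""
    t = np.zeros((size, size))
    t[size // 2, :] = 1.0
    t[:, size // 2] = 1.0
    return t

def matched_filter_peak(img, kernel):
    """Valid-mode 2-D cross-correlation; returns the template-centre
    coordinates (row, col) of the strongest response."""
    H, W = img.shape
    h, w = kernel.shape
    best, best_rc = -np.inf, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = (img[r:r + h, c:c + w] * kernel).sum()
            if score > best:
                best, best_rc = score, (r + h // 2, c + w // 2)
    return best_rc

tpl = cross_template()
kernel = tpl - tpl.mean()          # zero-mean so flat regions give no response

img = np.zeros((40, 40))
img[7:14, 22:29] = cross_template()  # one fiducial centred at (10, 25)

print(matched_filter_peak(img, kernel))  # (10, 25)
```

A production implementation would use FFT-based correlation (or OpenCV's template matching) rather than this explicit double loop, and would keep all responses above a threshold instead of only the maximum.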

  9. Large-scale functional models of visual cortex for remote sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically inspired learning models to problems of remote sensing imagery.

  10. Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision

    NASA Astrophysics Data System (ADS)

    Hendrawan, Y.; Hawa, L. C.; Damayanti, R.

    2018-03-01

    This study attempted to apply a machine vision-based chip-drying monitoring system that is able to optimise the drying process of cassava chips. The objective of this study is to propose fish swarm intelligent (FSI) optimization algorithms to find the most significant set of image features suitable for predicting the water content of cassava chips during the drying process using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-objective optimization (MOO) was used in this study, consisting of prediction-accuracy maximization and feature-subset-size minimization. The results showed that the best feature subset comprises grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. The best feature subset was tested successfully in the ANN model to describe the relationship between image features and the water content of cassava chips during drying, with an R2 between real and predicted data of 0.9.
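The fish-swarm optimizer itself is too involved for a short sketch, but the multi-objective idea (maximize prediction accuracy while keeping the feature subset small) can be illustrated with greedy forward selection as a simple stand-in. The feature names and data below are synthetic placeholders, and linear regression stands in for the ANN scorer.

```python
import numpy as np

rng = np.random.default_rng(2)
names = ["grey_mean", "L_mean", "a_energy",
         "red_entropy", "hue_contrast", "grey_homog"]
X = rng.normal(size=(200, 6))
# Synthetic target that truly depends on only two of the six features.
y = 2.0 * X[:, 0] - 1.5 * X[:, 4] + rng.normal(0, 0.05, 200)

def r2(cols):
    """Goodness of fit of a linear model on the chosen feature columns."""
    A = np.column_stack([X[:, list(cols)], np.ones(len(X))])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

selected = []
while len(selected) < X.shape[1]:
    best = max((c for c in range(X.shape[1]) if c not in selected),
               key=lambda c: r2(selected + [c]))
    gain = r2(selected + [best]) - (r2(selected) if selected else 0.0)
    if gain < 0.01:
        break  # size-minimization objective: stop when the gain is negligible
    selected.append(best)

print([names[c] for c in selected])
```

Swarm-based methods search the subset space stochastically instead of greedily, which can escape the local optima that forward selection may fall into.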

  11. Systems engineering at the nanoscale

    NASA Astrophysics Data System (ADS)

    Benkoski, Jason J.; Breidenich, Jennifer L.; Wei, Michael C.; Clatterbaughi, Guy V.; Keng, Pei Yuin; Pyun, Jeffrey

    2012-06-01

    Nanomaterials have provided some of the greatest leaps in technology over the past twenty years, but their relatively early stage of maturity presents challenges for their incorporation into engineered systems. Perhaps even more challenging is the fact that the underlying physics at the nanoscale often run counter to our physical intuition. The current state of nanotechnology today includes nanoscale materials and devices developed to function as components of systems, as well as theoretical visions for "nanosystems," which are systems in which all components are based on nanotechnology. Although examples will be given to show that nanomaterials have indeed matured into applications in medical, space, and military systems, no complete nanosystem has yet been realized. This discussion will therefore focus on systems in which nanotechnology plays a central role. Using self-assembled magnetic artificial cilia as an example, we will discuss how systems engineering concepts apply to nanotechnology.

  12. The role of visual deprivation and experience on the performance of sensory substitution devices.

    PubMed

    Stronks, H Christiaan; Nau, Amy C; Ibbotson, Michael R; Barnes, Nick

    2015-10-22

    It is commonly accepted that the blind can partially compensate for their loss of vision by developing enhanced abilities with their remaining senses. This visual compensation may be related to the fact that blind people rely on their other senses in everyday life. Many studies have indeed shown that experience plays an important role in visual compensation. Numerous neuroimaging studies have shown that the visual cortices of the blind are recruited by other functional brain areas and can become responsive to tactile or auditory input instead. These cross-modal plastic changes are more pronounced in the early blind compared to late blind individuals. The functional consequences of cross-modal plasticity on visual compensation in the blind are debated, as are the influences of various etiologies of vision loss (i.e., blindness acquired early or late in life). Distinguishing between the influences of experience and visual deprivation on compensation is especially relevant for rehabilitation of the blind with sensory substitution devices. The BrainPort artificial vision device and The vOICe are assistive devices for the blind that redirect visual information to another intact sensory system. Establishing how experience and different etiologies of vision loss affect the performance of these devices may help to improve existing rehabilitation strategies, formulate effective selection criteria and develop prognostic measures. In this review we will discuss studies that investigated the influence of training and visual deprivation on the performance of various sensory substitution approaches. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. The lawful imprecision of human surface tilt estimation in natural scenes

    PubMed Central

    2018-01-01

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo images having ground-truth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477

  14. The lawful imprecision of human surface tilt estimation in natural scenes.

    PubMed

    Kim, Seha; Burge, Johannes

    2018-01-31

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo images having ground-truth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.
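    The "optimal bias" described in the abstract can be illustrated with a one-dimensional Gaussian sketch: when measurements are noisy, a Bayes-optimal estimator pulls its estimate toward the prior, producing a systematic but lawful bias. The prior mean and noise values below are illustrative assumptions, not figures from the paper.

```python
def bayes_estimate(measurement, sigma_m, prior_mean, sigma_p):
    """Posterior mean for a Gaussian prior x ~ N(prior_mean, sigma_p^2)
    and a Gaussian measurement m = x + noise, noise ~ N(0, sigma_m^2).
    The estimate is a reliability-weighted average, pulled toward the
    prior when the measurement is unreliable -- an optimal 'bias'."""
    w = sigma_p**2 / (sigma_p**2 + sigma_m**2)   # measurement reliability weight
    return w * measurement + (1 - w) * prior_mean

# noisy measurement of a 60-degree tilt with a prior centred on 90 degrees
# (hypothetical values); equal variances split the difference: 75.0
est = bayes_estimate(60.0, sigma_m=10.0, prior_mean=90.0, sigma_p=10.0)
```

    With equal prior and measurement variances the weight is 0.5, so the estimate lands midway between the data and the prior; as measurement noise grows, the estimate slides further toward the prior mean.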

  15. Visual-conformal display format for helicopter guidance

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Schmerwitz, Sven; Lueken, Thomas

    2014-06-01

    Helicopter guidance in situations where natural vision is reduced remains a challenging task. Besides newly available sensors that can "see" through darkness, fog and dust, display technology remains one of the key issues for pilot assistance systems. As long as we have pilots in aircraft cockpits, we have to keep them informed about the outside situation. Human situational awareness is driven mainly by the visual channel. Therefore, display systems that can cross-fade seamlessly between natural vision and artificial computer vision are of greatest interest in this context. Helmet-mounted displays (HMDs) have this property when they employ a head tracker to measure the pilot's head orientation relative to the aircraft reference frame. Together with the aircraft's position and orientation relative to the world reference frame, the on-board graphics computer can generate images that are perfectly aligned with the outside world. We call image elements that match the outside world "visual-conformal". Published display formats for helicopter guidance in degraded visual environments apply mostly 2D symbologies, which fall far short of what is possible. We propose a perspective 3D symbology for a head-tracked HMD that shows as many visual-conformal elements as possible. We implemented and tested our proposal in our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots provide first evaluation results for our proposal.
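    The rendering chain described above (aircraft pose, head-tracker pose, perspective projection) can be sketched as follows. The yaw-only rotations and the intrinsic matrix K are simplifying assumptions for illustration; a real HMD pipeline composes full three-axis attitudes and display-specific optics.

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis (illustrative; a full implementation
    would compose yaw, pitch and roll)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def project_conformal(p_world, ac_pos, ac_yaw, head_yaw, K):
    """Project a world-frame point into the helmet display:
    world -> aircraft frame (aircraft pose), aircraft -> head frame
    (head tracker), then a pinhole projection with intrinsics K."""
    p_ac = rot_z(ac_yaw).T @ (p_world - ac_pos)   # world -> aircraft
    p_head = rot_z(head_yaw).T @ p_ac             # aircraft -> head/display
    u = K @ p_head
    return u[:2] / u[2]                           # perspective divide

# hypothetical intrinsics: 800 px focal length, 1280x1024 display
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 512.0],
              [0.0,   0.0,   1.0]])

# a point 100 m straight ahead lands on the display's principal point
px = project_conformal(np.array([0.0, 0.0, 100.0]),
                       np.zeros(3), 0.0, 0.0, K)
```

    The key property of a visual-conformal element follows directly: because the head-tracker rotation is applied before projection, the rendered symbol stays locked to the outside-world point as the pilot turns their head.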

  16. Neural architectures for robot intelligence.

    PubMed

    Ritter, H; Steil, J J; Nölker, C; Röthling, F; McGuire, P

    2003-01-01

    We argue that direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been motivated by that view and that is centered around the study of various aspects of hand actions since these are intimately linked with many higher cognitive abilities. As examples, we report on the development of a modular system for the recognition of continuous hand postures based on neural nets, the use of vision and tactile sensing for guiding prehensile movements of a multifingered hand, and the recognition and use of hand gestures for robot teaching. Regarding the issue of learning, we propose to view real-world learning from the perspective of data-mining and to focus more strongly on the imitation of observed actions instead of purely reinforcement-based exploration. As a concrete example of such an effort we report on the status of an ongoing project in our laboratory in which a robot equipped with an attention system with a neurally inspired architecture is taught actions by using hand gestures in conjunction with speech commands. We point out some of the lessons learnt from this system, and discuss how systems of this kind can contribute to the study of issues at the junction between natural and artificial cognitive systems.

  17. Innovative intelligent technology of distance learning for visually impaired people

    NASA Astrophysics Data System (ADS)

    Samigulina, Galina; Shayakhmetova, Assem; Nuysuppov, Adlet

    2017-12-01

    The aim of the study is to develop innovative intelligent technology and information systems for distance education of people with impaired vision (PIV). To solve this problem, a comprehensive approach is proposed that combines artificial intelligence methods with statistical analysis. Creating an accessible learning environment and identifying the intellectual, physiological, and psychophysiological characteristics of perception and information awareness in this category of people is based on a cognitive approach. On the basis of fuzzy logic, an individually oriented learning path for PIV is constructed, with the aim of providing high-quality engineering education on modern equipment in joint-use laboratories.

  18. Semi-autonomous unmanned ground vehicle control system

    NASA Astrophysics Data System (ADS)

    Anderson, Jonathan; Lee, Dah-Jye; Schoenberger, Robert; Wei, Zhaoyi; Archibald, James

    2006-05-01

    Unmanned Ground Vehicles (UGVs) have advantages over people in a number of applications, including sentry duty, scouting hazardous areas, convoying goods and supplies over long distances, and exploring caves and tunnels. Despite recent advances in electronics, vision, artificial intelligence, and control technologies, fully autonomous UGVs are still far from reality. Currently, most UGVs are fielded using tele-operation, with a human in the control loop: a user controls the UGV from the relative safety and comfort of a control station and sends commands to it remotely. It is difficult for the user to issue higher-level commands such as "patrol this corridor" or "move to this position while avoiding obstacles". As computer vision algorithms are implemented in hardware, the UGV can readily become partially autonomous. As Field Programmable Gate Arrays (FPGAs) become larger and more powerful, vision algorithms can run at frame rate. With the rapid development of CMOS imagers for consumer electronics, the frame rate can reach as high as 200 frames per second for a small region of interest. This increase in vision-processing speed allows UGVs to become more autonomous: they can recognize and avoid obstacles in their path, track targets, or move to a recognized area. The user can then focus on giving broad supervisory commands and goals, controlling multiple UGVs at once while still enjoying the convenience of working from a central base station. In this paper, we describe a novel control system for semi-autonomous UGVs. It combines a user interface similar to a simple tele-operation station with a control package that includes the FPGA and multiple cameras. The control package interfaces with the UGV and provides the control needed to guide it.

  19. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring production status and conditions by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Three-dimensional (3-D) sensors are now economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on recent progress in optical 3-D sensors. The review first gives an overview of the different optical 3-D vision techniques, based on their underlying principles; afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560
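    The depth dimension delivered by stereo-based 3-D sensors rests on the standard pinhole triangulation relation Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). A minimal sketch, with illustrative focal-length and baseline values:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: depth Z = f * B / d.
    f_px: focal length in pixels; baseline_m: camera separation in
    metres; disparity_px: horizontal pixel shift between the views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# e.g. a 700 px focal length and 12 cm baseline: a 20 px disparity
# corresponds to a depth of 4.2 m
z = depth_from_disparity(700.0, 0.12, 20.0)
```

    The inverse relation between depth and disparity also explains a practical limit of stereo sensors in the field: depth resolution degrades quadratically with distance, which matters for long-range crop-row or obstacle detection.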

  20. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Committee 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...

  1. 78 FR 5557 - Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...

  2. 77 FR 56254 - Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...

  3. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Robers, James L.; Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    Only recently have engineers begun making use of Artificial Intelligence (AI) tools in conceptual design. To help fill this void in the design process, a prototype knowledge-based system called STRUTEX has been developed to initially configure a structure to support point loads in two dimensions. The prototype was developed to test the application of AI tools to conceptual design, rather than to serve as a testbed for new methods of improving structural analysis and optimization. The system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user. How the system is constructed to interact with the user is described. Of special interest is the information flow between the knowledge base and the database under the control of the algorithmic main program. Examples of computed and refined structures are presented during the explanation of the system.

  4. Using artificial intelligence to bring evidence-based medicine a step closer to making the individual difference.

    PubMed

    Sissons, B; Gray, W A; Bater, A; Morrey, D

    2007-03-01

    The vision of evidence-based medicine is that of experienced clinicians systematically using the best research evidence to meet the individual patient's needs. This vision remains distant from clinical reality, as no complete methodology exists to apply objective, population-based research evidence to the needs of an individual real-world patient. We describe an approach, based on techniques from machine learning, to bridge this gap between evidence and individual patients in oncology. We examine existing proposals for tackling this gap and the relative benefits and challenges of our proposed k-nearest-neighbour-based approach.
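    A minimal sketch of the k-nearest-neighbour idea the abstract refers to: the outcomes of the k most similar past patients vote on the prediction for a new patient. The feature vectors and outcome labels below are hypothetical, not data from the paper.

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Majority outcome among the k nearest neighbours of x_new
    (Euclidean distance in feature space) -- the core of a
    case-matching approach to individualizing population evidence."""
    d = np.linalg.norm(X_train - x_new, axis=1)   # distance to each past case
    nearest = np.argsort(d)[:k]                   # indices of the k closest
    values, counts = np.unique(y_train[nearest], return_counts=True)
    return values[np.argmax(counts)]              # majority vote

# hypothetical records: (age, tumour size in cm) with outcome labels
X = np.array([[50.0, 2.1], [62.0, 3.5], [48.0, 1.9], [70.0, 4.0]])
y = np.array([1, 0, 1, 0])                        # 1 = responded to treatment
pred = knn_predict(X, y, np.array([52.0, 2.0]), k=3)
```

    In practice the hard problems are choosing clinically meaningful features and a distance metric, which is where the "relative benefits and challenges" the authors discuss arise.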

  5. Lighting in Schools.

    ERIC Educational Resources Information Center

    Department of Education and Science, London (England).

    The application of good lighting principles to school design is discussed. Part 1 of the study is concerned with the general principles of light and vision as they affect lighting in schools. Parts 2 and 3 deal with the application of these principles to daylighting and artificial lighting. Part 4 discusses the circumstances in which the…

  6. Lethal RPAs: Ethical Implications of Future Airpower Technology

    DTIC Science & Technology

    2013-04-01

    enhancements include cochlear implants, artificial vision, and bionic body parts. Significantly, this is also one of the Air Force goals as stated in...need courage and the warrior ethos in order to lead others into battle. Holding this capability in high esteem and ensuring more cross flow between

  7. Early vision and focal attention

    NASA Astrophysics Data System (ADS)

    Julesz, Bela

    1991-07-01

    At the thirty-year anniversary of the introduction of the technique of computer-generated random-dot stereograms and random-dot cinematograms into psychology, the impact of the technique on brain research and on the study of artificial intelligence is reviewed. The main finding, that stereoscopic depth perception (stereopsis), motion perception, and preattentive texture discrimination are basically bottom-up processes occurring without the help of the top-down processes of cognition and semantic memory, greatly simplifies the study of these processes of early vision and permits the linking of human perception with monkey neurophysiology. Particularly interesting are the unexpected findings that stereopsis (assumed to be local) is a global process, while texture discrimination (assumed to be a global process governed by statistics) is local, based on some conspicuous local features (textons). It is shown that the top-down process of "shape (depth) from shading" does not affect stereopsis, and some of the models of machine vision are evaluated. The asymmetry effect of human texture discrimination is discussed, together with recent nonlinear spatial filter models and a novel extension of the texton theory that can cope with the asymmetry problem. This didactic review attempts to introduce the physicist to the field of psychobiology and its problems, including metascientific problems of brain research, problems of scientific creativity, the state of artificial intelligence research (including connectionist neural networks) aimed at modeling brain activity, and the fundamental role of focal attention in mental events.

  8. Milestones on the road to independence for the blind

    NASA Astrophysics Data System (ADS)

    Reed, Kenneth

    1997-02-01

    Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the field of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. An example is the providing of portable bar code scanners so that a blind person knows what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into midi files to drive a synthesizer? Can computer vision help the blind cross a road including predictions of where oncoming traffic will be located? Can computer vision technology provide spoken description of scenes so a blind person can figure out where doors and entrances are located, and what the signage on the building says? He asks 'can computer vision help me flip a pancake?' His challenge to those in the computer vision field is 'where can we go from here?'

  9. Beyond reductionism: metabolic circularity as a guiding vision for a real biology of systems.

    PubMed

    Cornish-Bowden, Athel; Cárdenas, María Luz; Letelier, Juan-Carlos; Soto-Andrade, Jorge

    2007-03-01

    The definition of life has excited little interest among molecular biologists during the past half-century, and the enormous development in biology during that time has been largely based on an analytical approach in which all biological entities are studied in terms of their components, the process being extended to greater and greater detail without limit. The benefits of this reductionism are so obvious that they need no discussion, but there have been costs as well, and future advances, for example, for creating artificial life or for taking biotechnology beyond the level of tinkering, will need more serious attention to be given to the question of what makes a living organism living. According to Robert Rosen's theory of metabolism-replacement systems, the central idea missing from molecular biology is that of metabolic circularity, most evident from the obvious but commonly ignored fact that proteins are not given from outside but are products of metabolism, and thus metabolites. Among other consequences, this implies that the usual distinction between proteome and metabolome is conceptually artificial (however useful it may be in practice), as the proteome is part of the metabolome.

  10. 78 FR 16756 - Twenty-Second Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  11. 78 FR 55774 - Twenty Fourth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-11

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October...

  12. 75 FR 17202 - Eighth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  13. 75 FR 44306 - Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ... Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The...

  14. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY...: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will...

  15. Satellite markers: a simple method for ground truth car pose on stereo video

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Predicting the future location of other cars is a must for advanced safety systems. Remote estimation of a car's pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this context is referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task that typically combines different kinds of sensors. The novelty of this paper is a method for generating ground-truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic parameters of each camera at frame level, which are key to quantifying the performance of a moving stereo vision system, since such a system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique that employs a common camera calibration tool for 3D ground-truth generation. In our case study, we focus on accurate heading-angle estimation of a moving car under realistic imagery. Our satellite-marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.
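    Once two markers on the car have been reconstructed in 3D, the heading angle follows from simple planar geometry. A sketch under the assumption (not stated in the abstract) that the two markers lie along the car's longitudinal axis; the coordinates are hypothetical:

```python
import math

def heading_from_markers(front, rear):
    """Heading (yaw) of the car in the world x-y plane, computed from
    the 3-D positions of two markers placed along its longitudinal
    axis. Returns the angle in degrees, measured from the x-axis."""
    dx = front[0] - rear[0]
    dy = front[1] - rear[1]
    return math.degrees(math.atan2(dy, dx))

# marker positions reconstructed by a stereo rig (hypothetical, metres):
# the axis points equally in x and y, so the heading is 45 degrees
yaw = heading_from_markers((5.0, 3.0, 1.4), (3.0, 1.0, 1.4))
```

    Because the angle depends only on the marker baseline, errors in absolute depth largely cancel, which is one reason marker-pair schemes are attractive for ground-truthing heading.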

  16. Artificial Gravity as a Multi-System Countermeasure for Exploration Class Space Flight Missions

    NASA Technical Reports Server (NTRS)

    Paloski, William H.; Dawson, David L. (Technical Monitor)

    2000-01-01

    NASA's vision for space exploration includes missions of unprecedented distance and duration. However, during 30 years of human space flight experience, including numerous long-duration missions, research has not produced any single countermeasure or combination of countermeasures that is completely effective. Current countermeasures do not fully protect crews in low-Earth orbit, and certainly will not be appropriate for crews journeying to Mars and back over a three-year period. The urgency for exploration-class countermeasures is compounded by continued technical and scientific successes that make exploration class missions increasingly attractive. The critical and possibly fatal problems of bone loss, cardiovascular deconditioning, muscle weakening, neurovestibular disturbance, space anemia, and immune compromise may be alleviated by the appropriate application of artificial gravity (AG). However, despite a manifest need for new countermeasure approaches, concepts for applying AG as a countermeasure have not developed apace. To explore the utility of AG as a multi-system countermeasure during long-duration, exploration-class space flight, eighty-three members of the international space life science and space flight community met earlier this year. They concluded unanimously that the potential of AG as a multi-system countermeasure is indeed worth pursuing, and that the requisite AG research needs to be supported more systematically by NASA. This presentation will review the issues discussed and recommendations made.

  17. EDITORIAL: The Eye and The Chip 2008 The Eye and The Chip 2008

    NASA Astrophysics Data System (ADS)

    Rizzo, Joseph F.; O'Malley, Edward R.; Hessburg, Philip C.

    2009-06-01

    Over the course of the past decade, The Eye and The Chip world congress on visual neuro-prosthetic devices has become a premier meeting for those who believe that 'artificial' vision will one day be used to improve the quality of life of visually impaired patients. Although substantial progress has been made, there are numerous unresolved issues, such as the preferred methods for wireless communication, device placement, and materials and design, among others. The Eye and The Chip meeting of 2008, held in Detroit on 12-14 June 2008, provided important new information about these and other important topics, and thus served to advance this field of scientific research. From a research seedling a decade ago to the crowd of superb presentations in Detroit last June, a very real sense of justifiable optimism has developed. The prospects of artificial vision are no longer remote. Many of the researchers expressed confidence that implantable devices will provide the hoped-for level of vision to justify their widespread use in the future. The often dramatic successes of cochlear implants continue to provide credence that artificial stimulation of nerve tissue is a plausible strategy to restore vision. The Eye and The Chip 2008 attracted researchers from four continents (North America, Europe, Asia and Australia). The meeting also benefited from the attendance and presentations of representatives of the FDA, who have been present for all The Eye and The Chip meetings. The 2008 meeting was also enhanced by the inclusion of a new and related scientific field that shares the goal of restoring vision to the blind: the field of molecular restoration of retinal function by insertion of channelrhodopsin. Just as the field of ophthalmology went from Ridley's primitive intraocular lens replacement to implants useful in virtually every cataract patient in one surgeon's clinical lifetime, the field of retinal prostheses seems to be following a very similar trajectory.
Likewise, the field of visual prosthetics continues to amass evidence that suggests that its long-term future is promising. We are grateful to the scientists who made the congress a success, to the Journal of Neural Engineering for organizing this special issue, to the financial supporters who made the congress possible and to the Detroit Institute of Ophthalmology staff who worked tirelessly and without complaint to bring home a superb congress. We invite you to attend the next The Eye and The Chip meeting, which will be held in 2011.

  18. Biologically inspired robotic inspectors: the engineering reality and future outlook (Keynote address)

    NASA Astrophysics Data System (ADS)

    Bar-Cohen, Yoseph

    2005-04-01

    Human errors have long been recognized as a major factor in the reliability of nondestructive evaluation results. To minimize such errors, there is increasing reliance on automatic inspection tools that allow faster and more consistent tests. Crawlers and various manipulation devices are commonly used to perform a variety of inspection procedures, including C-scans with contour-following capability to rapidly inspect complex structures. The emergence of robots has been the result of the need to deal with parts that are too complex to handle with a simple automatic system. Economic factors continue to hamper the wide use of robotics for inspection applications; however, technological advances are increasingly changing this paradigm. Autonomous robots, which may look humanlike, can potentially address the need to inspect structures whose configurations are not predetermined. Such robots, which mimic biology, may operate in harsh or hazardous environments that are too dangerous for human presence. Biomimetic technologies such as artificial intelligence, artificial muscles, artificial vision and numerous others are increasingly becoming common engineering tools. Inspired by science fiction, making biomimetic robots is increasingly becoming an engineering reality; in this paper the state of the art is reviewed and the outlook for the future is discussed.

  19. 76 FR 11847 - Thirteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-03

    ... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  20. 76 FR 20437 - Fourteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-12

    ... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  1. Artificial intelligence and signal processing for infrastructure assessment

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Shanableh, Tamer; Yehia, Sherif

    2015-04-01

    Ground Penetrating Radar (GPR) is recognized as an effective nondestructive evaluation technique for improving the inspection process. However, data interpretation and the complexity of the results impose some limitations on the practicality of the technique, mainly because a trained, experienced person is needed to interpret the images obtained by the GPR system. In this paper, an algorithm that classifies and assesses the condition of infrastructure using image processing and pattern recognition techniques is discussed. Features extracted from a dataset of images of defective and healthy slabs are used to train a computer-vision-based system, while another dataset is used to evaluate the proposed algorithm. Initial results show that the proposed algorithm is able to detect the existence of defects with about a 77% success rate.
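    As one hedged illustration of the pattern-recognition step, a nearest-centroid classifier can separate "healthy" from "defective" feature vectors; the abstract does not specify the classifier used, and the 2-D features below are hypothetical stand-ins for image descriptors.

```python
import numpy as np

def fit_centroids(X, y):
    """One mean feature vector (centroid) per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def classify(x, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# hypothetical 2-D features (e.g. mean intensity, edge density) per GPR image
X = np.array([[0.20, 0.10], [0.25, 0.15],   # healthy slabs
              [0.80, 0.70], [0.75, 0.65]])  # defective slabs
y = np.array([0, 0, 1, 1])                  # 0 = healthy, 1 = defective
model = fit_centroids(X, y)
label = classify(np.array([0.70, 0.60]), model)   # near the defective cluster
```

    Evaluating on a held-out dataset, as the paper does, is what produces an honest success rate; testing on the training images would overstate performance.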

  2. An integrated dexterous robotic testbed for space applications

    NASA Technical Reports Server (NTRS)

    Li, Larry C.; Nguyen, Hai; Sauer, Edward

    1992-01-01

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide the capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.

  3. Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that heretofore have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: the translation stage moves the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
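
    The focus-scoring step described above can be sketched in a few lines. The following is a hypothetical illustration, not CMIS code: stage positions are simulated as different blur widths, each candidate scan line is scored by the high-frequency energy of its spectrum (a naive DFT stands in for the FFT on this small input), and the position with the most detail wins.

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform (an FFT stand-in for small inputs)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def high_freq_energy(signal, cutoff):
    """Energy in spatial frequencies above `cutoff` (more detail => more energy)."""
    spectrum = dft(signal)
    half = len(signal) // 2  # bins above n/2 mirror the lower ones
    return sum(abs(spectrum[k]) ** 2 for k in range(cutoff, half))

def blur(signal, width):
    """Circular moving average, simulating defocus at one stage position."""
    if width <= 1:
        return list(signal)
    n = len(signal)
    return [sum(signal[(i + j) % n] for j in range(width)) / width for i in range(n)]

# A sharp, edge-rich scan line; each simulated stage position gets a blur width.
line = [1.0 if (i // 4) % 2 else 0.0 for i in range(32)]
positions = {pos: blur(line, width) for pos, width in enumerate([7, 5, 1, 5, 7])}
best = max(positions, key=lambda p: high_freq_energy(positions[p], cutoff=4))
print(best)  # position 2 (no blur) shows the most high-frequency detail
```

In a real system the same score would be computed on full 2-D FFTs at each coarse, medium, and fine stage step.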

  4. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms.

    PubMed

    Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan

    2017-06-06

    Bus Rapid Transit (BRT) has become an increasing focus of concern in the public transportation of modern cities. Traditional contact-based sensing techniques for health monitoring of BRT viaducts cannot overcome the deficiency that the normal free flow of traffic would be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with the Oriented FAST and Rotated BRIEF (ORB) keypoint detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structural vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.
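
    The scaling-factor step, which converts pixel displacements tracked in the image into physical displacements, reduces to a simple ratio. A minimal sketch with invented numbers (the 100 mm target and 400 px span are assumptions, not values from the paper):

```python
def scaling_factor(physical_size_mm, pixel_size):
    """mm per pixel, from an image feature of known physical size."""
    return physical_size_mm / pixel_size

def to_physical(pixel_displacements, factor):
    """Convert a tracked pixel-displacement trace to physical units."""
    return [d * factor for d in pixel_displacements]

# Hypothetical calibration: a 100 mm target edge spans 400 px in the frame.
factor = scaling_factor(100.0, 400)            # 0.25 mm per pixel
trace = to_physical([0.0, 1.2, -0.8, 2.0], factor)
print(trace)  # [0.0, 0.3, -0.2, 0.5]
```

The paper's contribution lies in choosing this factor more rationally; the conversion itself is always this multiplication.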

  5. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms

    PubMed Central

    Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan

    2017-01-01

    Bus Rapid Transit (BRT) has become an increasing focus of concern in the public transportation of modern cities. Traditional contact-based sensing techniques for health monitoring of BRT viaducts cannot overcome the deficiency that the normal free flow of traffic would be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with the Oriented FAST and Rotated BRIEF (ORB) keypoint detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structural vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable. PMID:28587275

  6. New Visions for the Developmental Assessment of Infants and Young Children.

    ERIC Educational Resources Information Center

    Rocco, Susan

    This paper describes the process of developmental assessment of infants and young children from a parent's perspective. The artificial and arbitrary nature of most assessments is bemoaned as failing to adequately recognize children's abilities that may be readily evident to parents in more natural settings. Assessments emphasizing children's…

  7. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are fundamental clues for many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but these instruments are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method that uses only a single underwater image, with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is quite a different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is used as an aided input to help our model make better decisions.

  8. Stereoacuity versus fixation disparity as indicators for vergence accuracy under prismatic stress.

    PubMed

    Kromeier, Miriam; Schmitt, Christina; Bach, Michael; Kommerell, Guntram

    2003-01-01

    Fixation disparity has been widely used as an indicator for vergence accuracy under prismatic stress. However, the targets used for measuring fixation disparity contain artificial features in that the fusional contours are thinned out. We considered that stereoacuity might be a preferable indicator of vergence accuracy, as stereo targets represent natural viewing conditions. We measured fixation disparity with a computer adaptation of Ogle's test and stereoacuity with the automatic Freiburg Stereoacuity Test. Eight subjects were examined under increasing base-in and base-out prisms. The response of fixation disparity to prismatic stress revealed the curve types described by Ogle and Crone. All eight subjects reached a stereoscopic threshold below 10 arcsec. In seven subjects the stereoscopic threshold increased before double vision occurred. Our data suggest that stereoacuity is suitable for assessing the range of binocular vision under prismatic stress. As stereoacuity has the advantage over fixation disparity that it can be measured without introducing artificial viewing conditions, we suggest exploring whether stereoacuity under prismatic stress would be more meaningful than fixation disparity in the work-up of asthenopic patients.

  9. Artificial Intelligence and the 'Good Society': the US, EU, and UK approach.

    PubMed

    Cath, Corinne; Wachter, Sandra; Mittelstadt, Brent; Taddeo, Mariarosaria; Floridi, Luciano

    2018-04-01

    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address various ethical, social, and economic topics adequately, but fall short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. To help fill this gap, we conclude by suggesting a two-pronged approach.

  10. Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch

    PubMed Central

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed

    2012-01-01

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFBs. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Color features were then extracted from those images and used as inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFBs was investigated using two methods: training the ANN with full features and training the ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with the ANN trained with full features, the ANN trained with reduced features improves the classification accuracy by 1.66% and is more effective for developing an automated ripeness classifier for oil palm FFBs. The developed ripeness classifier can act as a sensor for determining the correct oil palm FFB ripeness category. PMID:23202043
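
    The PCA-based feature reduction used here can be illustrated on toy data. A hypothetical two-feature sketch (not the paper's color features): it finds the leading principal axis via the closed-form eigendecomposition of a 2x2 covariance matrix and projects the samples onto it, turning two correlated features into one.

```python
import math

def pca_first_component(xs, ys):
    """Leading principal axis of 2-D data via the closed-form
    eigendecomposition of the 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    lam = 0.5 * (cxx + cyy + math.hypot(cxx - cyy, 2 * cxy))  # largest eigenvalue
    vx, vy = lam - cyy, cxy  # (unnormalized) eigenvector for lam
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm, lam

def project(xs, ys, axis):
    """Reduced 1-D feature: centered data projected onto the principal axis."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return [(x - mx) * axis[0] + (y - my) * axis[1] for x, y in zip(xs, ys)]

# Toy two-feature samples lying close to the line y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
ax_x, ax_y, lam = pca_first_component(xs, ys)
scores = project(xs, ys, (ax_x, ax_y))
```

The recovered axis points close to (1, 2), and the 1-D scores would be the reduced inputs fed to the ANN.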

  11. Intelligent color vision system for ripeness classification of oil palm fresh fruit bunch.

    PubMed

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Abdul Halim, Zaini; Ibrahim, Haidi; Syed Ali, Syed Salim

    2012-10-22

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFBs. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Color features were then extracted from those images and used as inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFBs was investigated using two methods: training the ANN with full features and training the ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with the ANN trained with full features, the ANN trained with reduced features improves the classification accuracy by 1.66% and is more effective for developing an automated ripeness classifier for oil palm FFBs. The developed ripeness classifier can act as a sensor for determining the correct oil palm FFB ripeness category.

  12. The Flight Telerobotic Servicer (FTS) - A focus for automation and robotics on the Space Station

    NASA Technical Reports Server (NTRS)

    Hinkal, Sanford W.; Andary, James F.; Watzin, James G.; Provost, David E.

    1987-01-01

    The concept, fundamental design principles, and capabilities of the FTS, a multipurpose telerobotic system for use on the Space Station and Space Shuttle, are discussed. The FTS is intended to assist the crew in the performance of extravehicular tasks; the telerobot will also be used on the Orbital Maneuvering Vehicle to service free-flyer spacecraft. The FTS will be capable of both teleoperation and autonomous operation; eventually it may also utilize ground control. By careful selection of the functional architecture and a modular approach to the hardware and software design, the FTS can accept developments in artificial intelligence and newer, more advanced sensors, such as machine vision and collision avoidance.

  13. Geometric Bioinspired Networks for Recognition of 2-D and 3-D Low-Level Structures and Transformations.

    PubMed

    Bayro-Corrochano, Eduardo; Vazquez-Santacruz, Eduardo; Moya-Sanchez, Eduardo; Castillo-Munis, Efrain

    2016-10-01

    This paper presents the design of radial basis function (RBF) geometric bioinspired networks and their applications. Until now, the design of neural networks has been inspired by biological models but has relied mostly on vector calculus and linear algebra, and these designs have never shown the role of geometric computing. The question is how biological neural networks handle complex geometric representations involving Lie group operations such as rotations. Even though current artificial neural networks are biologically inspired, they are just models which cannot reproduce a plausible biological process; nor have researchers shown how such models can be incorporated into geometric computing. Here, for the first time in the artificial neural networks domain, we address this issue by designing a kind of geometric RBF network using the geometric algebra framework. As a result, we show how geometric computing can be carried out by artificial neural networks. Such geometric neural networks have great potential in robot vision. The most important aspect of this contribution is to propose artificial geometric neural networks for challenging tasks in perception and action. In our experimental analysis, we show the applicability of our geometric designs and present interesting experiments using 2-D data from real images and 3-D screw axis data. In general, our models should be used to process different types of inputs, such as visual cues, touch (texture, elasticity, temperature), taste, and sound. One important task of a perception-action system is to fuse a variety of cues coming from the environment and relate them via a sensor-motor manifold with motor modules to carry out diverse reasoned actions.
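
    Setting the geometric-algebra machinery aside, the RBF idea itself is compact. A minimal, conventional (non-geometric) RBF sketch with hypothetical prototype centers: each unit responds with a Gaussian of the distance to its center, and the strongest response classifies the input.

```python
import math

def rbf(x, center, sigma):
    """Gaussian radial basis activation for a 2-D input."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-d2 / (2 * sigma ** 2))

# Hypothetical prototype centers for two classes in a 2-D feature space.
centers = {"class_a": (0.0, 0.0), "class_b": (3.0, 3.0)}

def classify(x, sigma=1.0):
    # The winner is the unit whose receptive field responds most strongly.
    return max(centers, key=lambda c: rbf(x, centers[c], sigma))

print(classify((0.5, -0.2)))  # class_a
print(classify((2.6, 3.4)))   # class_b
```

The paper's networks replace these Euclidean distances with geometric-algebra operations so that rotations and other Lie group actions are handled natively.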

  14. Improving Pattern Recognition and Neural Network Algorithms with Applications to Solar Panel Energy Optimization

    NASA Astrophysics Data System (ADS)

    Zamora Ramos, Ernesto

    Artificial intelligence is a big part of automation, and with today's technological advances it has taken great strides toward positioning itself as the technology of the future to control, enhance and perfect automation. Computer vision, which includes pattern recognition, classification and machine learning, is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we expose novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images based on modifications to the standard histogram equalization for an image. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many others. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot for capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, identifies the optimal moment at which to perform maintenance on marked solar panels so as to minimize the energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates.
We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons and convolutional neural networks. Our research with neural networks encountered a great deal of difficulty regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameter (weight) initialization, and so on, are chosen via a trial-and-error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate which of a group of candidate strategies would make the network converge to the highest classification accuracy faster with high probability. Our method provides a quick, objective measure for comparing initialization strategies so that the best among them can be selected beforehand, without having to complete multiple training sessions for each candidate strategy and compare final results.
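
    The idea of comparing initialization strategies quantitatively can be caricatured with a single neuron: train each candidate under the same small budget and prefer the one with the lower final loss. Everything below (the data, the budget, the two candidate strategies) is an invented illustration, not the method from the dissertation.

```python
import math

DATA = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]  # separable toy set

def final_loss(w, b, epochs=5, lr=0.5):
    """Logistic loss of one neuron after a fixed training budget,
    starting from the given weight initialization (w, b)."""
    for _ in range(epochs):
        for x, y in DATA:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    total = 0.0
    for x, y in DATA:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        total -= y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return total / len(DATA)

# Two hypothetical initialization strategies under the same budget.
candidates = {"near_zero": (0.1, 0.1), "saturated": (-15.0, -15.0)}
scores = {name: final_loss(w, b) for name, (w, b) in candidates.items()}
best = min(scores, key=scores.get)  # lower fixed-budget loss is preferred
print(best)  # near_zero
```

The saturated start wastes the whole budget escaping flat gradients, which a fixed-budget score exposes without full training runs.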

  15. Banknote recognition: investigating processing and cognition framework using competitive neural network.

    PubMed

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-02-01

    Humans are adept at recognizing patterns and discovering even abstract features that are sometimes embedded therein. Our ability to use the banknotes in circulation for business transactions lies in the effortlessness with which we can recognize the different banknote denominations after seeing them over a period of time. More significantly, we can usually recognize these banknote denominations irrespective of which parts of the banknotes are exposed to us visually. Furthermore, our recognition ability is largely unaffected even when these banknotes are partially occluded. By a similar analogy, the robustness of intelligent systems performing the task of banknote recognition should not collapse under some minimum level of partial occlusion. Artificial neural networks are intelligent systems which from inception have taken many important cues related to structure and learning rules from the human nervous/cognition processing system. Likewise, it has been shown that advances in artificial neural network simulations can help us understand the human nervous/cognition system even further. In this paper, we investigate three hypothetical cognition frameworks for vision-based recognition of banknote denominations using competitive neural networks. In order to make the task more challenging and to stress-test the investigated hypotheses, we also consider the recognition of occluded banknotes. The implemented hypothetical systems are tasked with fast recognition of banknotes with up to 75% occlusion. The investigated hypothetical systems are trained on Nigeria's Naira banknotes, and several experiments are performed to demonstrate the findings presented in this work.
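
    The competitive-network ingredient can be sketched without the banknote data. A toy winner-take-all sketch (clusters, learning rate, and seed are invented): only the unit closest to each input updates, so the units settle onto the input clusters and act as denomination prototypes.

```python
import random

def train_competitive(inputs, n_units=2, lr=0.3, epochs=20, seed=0):
    """Unsupervised winner-take-all learning: the unit nearest each input
    moves toward it; losing units stay put."""
    rng = random.Random(seed)
    units = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in inputs:
            winner = min(units, key=lambda u: (u[0] - x[0]) ** 2 + (u[1] - x[1]) ** 2)
            winner[0] += lr * (x[0] - winner[0])
            winner[1] += lr * (x[1] - winner[1])
    return units

# Two synthetic feature clusters standing in for two denominations.
cluster_a = [(0.1, 0.1), (0.2, 0.0), (0.0, 0.2)]
cluster_b = [(5.0, 5.1), (5.1, 4.9), (4.9, 5.0)]
units = train_competitive(cluster_a + cluster_b)
```

After training, classifying a (possibly occluded) feature vector reduces to asking which prototype wins.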

  16. Light localization with low-contrast targets in a patient implanted with a suprachoroidal-transretinal stimulation retinal prosthesis.

    PubMed

    Endo, Takao; Fujikado, Takashi; Hirota, Masakazu; Kanda, Hiroyuki; Morimoto, Takeshi; Nishida, Kohji

    2018-04-20

    To evaluate the improvement in targeted reaching movements toward targets of various contrasts in a patient implanted with a suprachoroidal-transretinal stimulation (STS) retinal prosthesis. An STS retinal prosthesis was implanted in the right eye of a 42-year-old man with advanced Stargardt disease (visual acuity: right eye, light perception; left eye, hand motion). In localization tests during the 1-year follow-up period, the patient attempted to touch the center of a white square target (visual angle, 10°; contrast, 96, 85, or 74%) displayed at a random position on a monitor. The distance between the touched point and the center of the target (the absolute deviation) was averaged over 20 trials with the STS system on or off. With the left eye occluded, the absolute deviation was not consistently lower with the system on than off for high-contrast (96%) targets, but was consistently lower with the system on for low-contrast (74%) targets. With both eyes open, the absolute deviation was consistently lower with the system on than off for 85%-contrast targets. With the system on and 96%-contrast targets, we detected a shorter response time while covering the right eye, in which the STS had been implanted, than while covering the left eye (2.41 ± 2.52 vs 8.45 ± 3.78 s, p < 0.01). Performance of reaching movements improved in a patient with an STS retinal prosthesis implanted in an eye with residual natural vision. Patients with a retinal prosthesis may be able to improve their visual performance by using both artificial vision and their residual natural vision. Beginning date of the trial: Feb. 20, 2014; date of registration: Jan. 4, 2014; trial registration number: UMIN000012754; registration site: UMIN Clinical Trials Registry (UMIN-CTR), http://www.umin.ac.jp/ctr/index.htm.

  17. Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm

    PubMed Central

    Kukona, Anuenue; Tabor, Whitney

    2011-01-01

    The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355
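
    The attractor-landscape intuition behind Impulse Processing can be sketched with a one-dimensional double well standing in for two competing eye-movement targets; the potential, impulse strength, and step counts below are invented for illustration, not parameters from the paper.

```python
def step(x, impulse, dt=0.01):
    """One Euler step of gradient descent on the double-well potential
    V(x) = (x^2 - 1)^2 / 4, plus an external impulse.
    The attractors (eye-movement behaviors) sit at x = -1 and x = +1."""
    dVdx = x * (x * x - 1)
    return x + dt * (-dVdx + impulse)

# The state starts near the left attractor; a run of small "linguistic"
# impulses pushes it over the barrier toward the right attractor.
x = -0.9
for t in range(4000):
    x = step(x, impulse=0.5 if t < 1500 else 0.0)
print(round(x, 2))  # settles near +1
```

Once the impulses stop, the landscape alone pulls the state the rest of the way, which is the sense in which minute impulses commit the system to one behavior.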

  18. The Dept. of Energy Artificial Retina project

    ScienceCinema

    None

    2017-12-09

    LLNL has assisted in the development of the first long-term retinal prosthesis - called an artificial retina - that can function for years inside the harsh biological environment of the eye. This work has been done in collaboration with four national laboratories (Argonne, Los Alamos, Oak Ridge and Sandia), four universities (the California Institute of Technology, the Doheny Eye Institute at USC, North Carolina State University and the University of California, Santa Cruz), an industrial partner (Second Sight® Medical Products Inc. of Sylmar, Calif.) and the U.S. Department of Energy. With this device, application-specific integrated circuits transform digital images from a camera into electric signals in the eye that the brain uses to create a visual image. In clinical trials, patients with vision loss were able to successfully identify objects, increase mobility and detect movement using the artificial retina.

  19. A learning-based approach to artificial sensory feedback leads to optimal integration

    PubMed Central

    Dadarlat, Maria C.; O’Doherty, Joseph E.; Sabes, Philip N.

    2014-01-01

    Proprioception—the sense of the body’s position in space—plays an important role in natural movement planning and execution and will likewise be necessary for successful motor prostheses and Brain-Machine Interfaces (BMIs). Here, we demonstrated that monkeys could learn to use an initially unfamiliar multi-channel intracortical microstimulation (ICMS) signal, which provided continuous information about hand position relative to an unseen target, to complete accurate reaches. Furthermore, monkeys combined this artificial signal with vision to form an optimal, minimum-variance estimate of relative hand position. These results demonstrate that a learning-based approach can be used to provide a rich artificial sensory feedback signal, suggesting a new strategy for restoring proprioception to patients using BMIs as well as a powerful new tool for studying the adaptive mechanisms of sensory integration. PMID:25420067
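
    The minimum-variance combination reported here is the standard inverse-variance weighting rule: each cue is weighted by the reciprocal of its variance, and the fused variance is lower than either cue's alone. The numbers below are invented, not data from the study.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance (inverse-variance weighted) combination of two cues."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return est, var

# Vision reports the hand at 10.0 cm (variance 1.0); the artificial ICMS-like
# channel reports 12.0 cm but is noisier (variance 4.0).
est, var = fuse(10.0, 1.0, 12.0, 4.0)
print(est, var)  # 10.4 0.8
```

The fused estimate leans toward the more reliable cue, and its variance (0.8) beats both inputs, which is the behavioral signature of optimal integration.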

  20. Computer vision-based method for classification of wheat grains using artificial neural network.

    PubMed

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum wheat is presented. The images of 100 bread and 100 durum wheat grains are taken with a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are derived from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are used as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN models are compared in terms of accuracy rate. The best result, a mean absolute error (MAE) of 9.8 × 10⁻⁶, is achieved by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.

  1. Artificial Retina Project: Final Report for CRADA ORNL 01-0625

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenbaum, E; Little, J

    The U.S. Department of Energy’s Artificial Retina Project is a collaborative, multi-institutional effort to develop an implantable microelectronic retinal prosthesis that restores useful vision to people blinded by retinal diseases. The ultimate goal of the project is to restore reading ability, facial recognition, and unaided mobility in people with retinitis pigmentosa and age-related macular degeneration. The project taps into the unique research technologies and resources developed at DOE national laboratories to surmount the many technical challenges involved with developing a safe, effective, and durable product. The research team includes six DOE national laboratories, four universities, and private industry.

  2. A neural computational model for animal's time-to-collision estimation.

    PubMed

    Wang, Ling; Yao, Dezhong

    2013-04-17

    The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and is an important factor in artificial intelligence systems that must judge and avoid potential dangers. The theoretical formula for TTC is 1/τ≈θ'/sin θ, where θ and θ' are the visual angle and its rate of change, respectively, and the widely used approximate computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new, simpler computational model: 1/τ≈Mθ-P/(θ+Q)+N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can be implemented exactly by a widely accepted biological neuronal model. WSVAM has additional merits, including naturally minimal consumption and simplicity. Thus, it yields a precise, neuronally implementable estimate of TTC, which provides a simple and convenient implementation for artificial vision and represents a potential visual brain mechanism.
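
    The two classical formulas can be checked numerically. For an object of size S at distance D closing at speed v, the visual angle is θ = 2·atan(S/2D); θ'/sin θ recovers exactly v/D (the true inverse TTC), while θ'/θ is the small-angle approximation and overestimates the TTC slightly. All numbers below are invented for the check.

```python
import math

def visual_angle(size, dist):
    """Visual angle subtended by an object of a given size at a given distance."""
    return 2 * math.atan(size / (2 * dist))

# Hypothetical scene: a 0.5 m object, 20 m away, approaching at 10 m/s.
size, dist, speed = 0.5, 20.0, 10.0
true_ttc = dist / speed  # 2.0 s

theta = visual_angle(size, dist)
dt = 1e-4  # finite-difference step for the angular rate theta'
theta_dot = (visual_angle(size, dist - speed * dt) - theta) / dt

inv_tau_exact = theta_dot / math.sin(theta)   # 1/tau = theta'/sin(theta)
inv_tau_approx = theta_dot / theta            # widely used approximation
print(round(1 / inv_tau_exact, 3), round(1 / inv_tau_approx, 3))
```

The exact form returns the true 2.0 s; the approximation comes out a fraction of a millisecond longer at this small angle, which is why the paper treats θ'/θ as an acceptable but inexact substitute.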

  3. Una vision holistica de la educacion superior en contextos posmodernos (The Integrative Conception of Postmodern Higher Education).

    ERIC Educational Resources Information Center

    Zoreda, Margaret Lee; Zoreda-Lozano, Juan Jose

    This paper argues that current "postmodern" conditions demand a rethinking of higher education, especially in countries like Mexico. This would involve the rapprochement of science, technology, and the humanities, based on the belief that the chasm separating them, apart from being artificial, is no longer socially viable. Through a…

  4. Methods for Identifying Object Class, Type, and Orientation in the Presence of Uncertainty

    DTIC Science & Technology

    1990-08-01

    on Range Finding Techniques for Computer Vision," IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-5 (2), pp 129-139 March 1983. 15. Yang... Artificial Intelligence Applications, pp 199-205, December 1984. 16. Flynn, P.J. and Jain, A.K., "On Reliable Curvature Estimation," Proceedings of the

  5. Collective Computation of Neural Network

    DTIC Science & Technology

    1990-03-15

    Sciences, Beijing ABSTRACT Computational neuroscience is a new branch of neuroscience originating from current research on the theory of computer...scientists working in artificial intelligence engineering and neuroscience . The paper introduces the collective computational properties of model neural...vision research. On this basis, the authors analyzed the significance of the Hopfield model. Key phrases: Computational Neuroscience , Neural Network, Model

  6. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation..., Enhanced Flight Vision/ Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...

  7. 75 FR 38863 - Tenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-06

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight [[Page 38864

  8. New frontiers for intelligent content-based retrieval

    NASA Astrophysics Data System (ADS)

    Benitez, Ana B.; Smith, John R.

    2001-01-01

    In this paper, we examine emerging frontiers in the evolution of content-based retrieval systems that rely on an intelligent infrastructure. Here, we refer to intelligence as the capability of a system to build and maintain situational or world models, utilize dynamic knowledge representation, exploit context, and leverage advanced reasoning and learning capabilities. We argue that these elements are essential to producing effective systems for retrieving audio-visual content at semantic levels matching those of human perception and cognition. We review relevant research on the understanding of human intelligence and the construction of intelligent systems in the fields of cognitive psychology, artificial intelligence, semiotics, and computer vision. We also discuss how some of the principal ideas from these fields lead to new opportunities and capabilities for content-based retrieval systems. Finally, we describe some of our efforts in these directions. In particular, we present MediaNet, a multimedia knowledge representation framework, and some MPEG-7 description tools that facilitate and enable intelligent content-based retrieval.

  9. New frontiers for intelligent content-based retrieval

    NASA Astrophysics Data System (ADS)

    Benitez, Ana B.; Smith, John R.

    2000-12-01

    In this paper, we examine emerging frontiers in the evolution of content-based retrieval systems that rely on an intelligent infrastructure. Here, we refer to intelligence as the capability of a system to build and maintain situational or world models, utilize dynamic knowledge representation, exploit context, and leverage advanced reasoning and learning capabilities. We argue that these elements are essential to producing effective systems for retrieving audio-visual content at semantic levels matching those of human perception and cognition. We review relevant research on the understanding of human intelligence and the construction of intelligent systems in the fields of cognitive psychology, artificial intelligence, semiotics, and computer vision. We also discuss how some of the principal ideas from these fields lead to new opportunities and capabilities for content-based retrieval systems. Finally, we describe some of our efforts in these directions. In particular, we present MediaNet, a multimedia knowledge representation framework, and some MPEG-7 description tools that facilitate and enable intelligent content-based retrieval.

  10. Multi-Stage System for Automatic Target Recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Lu, Thomas T.; Ye, David; Edens, Weston; Johnson, Oliver

    2010-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feedforward back-propagation neural network (NN) is then trained to classify each feature vector and to remove false positives. A system parameter optimization process has been developed to adapt to various targets and datasets. The objective was to design an efficient computer vision system that can learn to detect multiple targets in large images with unknown backgrounds. Because the target size is small relative to the image size in this problem, there are many regions of the image that could potentially contain the target. A cursory analysis of every region can be computationally efficient, but may yield too many false positives. On the other hand, a detailed analysis of every region can yield better results, but may be computationally inefficient. The multi-stage ATR system was designed to achieve an optimal balance between accuracy and computational efficiency by incorporating both models. The detection stage first identifies potential ROIs where the target may be present by performing a fast Fourier-domain OT-MACH filter-based correlation. Because the threshold for this stage is chosen with the goal of detecting all true positives, a number of false positives are also detected as ROIs. The verification stage then transforms the regions of interest into feature space, and eliminates false positives using an artificial neural network classifier. The multi-stage design allows the detection sensitivity and the identification specificity to be tuned individually in each stage, making it easier to optimize ATR operation for a specific goal. The test results show that the system was successful in substantially reducing the false positive rate when tested on sonar and video image datasets.
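    The detection stage's Fourier-domain correlation can be sketched with a plain matched filter standing in for the OT-MACH filter (which in practice is synthesized from training imagery). The scene, template, and target location below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 scene: weak noise plus a bright 5x5 "target" patch.
image = 0.1 * rng.standard_normal((64, 64))
image[30:35, 40:45] += 1.0            # target's top-left corner at (30, 40)

# Detection stage: correlation computed in the Fourier domain. A plain
# all-ones matched filter stands in for the OT-MACH filter here.
template = np.zeros_like(image)
template[:5, :5] = 1.0
corr = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))))

# A low threshold would keep many candidate ROIs for the verification stage;
# here we simply report the strongest correlation peak.
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # (30, 40): the embedded target's location
```

    In the full system, windows whose correlation exceeds a deliberately low threshold would all be passed as ROIs to the neural-network verification stage.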

  11. Identification of cataract and post-cataract surgery optical images using artificial intelligence techniques.

    PubMed

    Acharya, Rajendra Udyavara; Yu, Wenwei; Zhu, Kuanyi; Nayak, Jagadish; Lim, Teik-Cheng; Chan, Joey Yiptong

    2010-08-01

    The human eye is a highly sophisticated organ, with interrelated subsystems such as the retina, pupil, iris, cornea, lens, and optic nerve. Eye disorders such as cataract are a major health problem in old age. A cataract is a clouding of the lens that is painless and develops slowly over a long period, gradually diminishing vision and eventually leading to blindness. It is most common around the age of 65, and about one third of people of this age worldwide have a cataract in one or both eyes. A system is proposed, based on artificial intelligence techniques, for detecting cataract and for testing the efficacy of post-cataract surgery using optical images. Image processing and the fuzzy K-means clustering algorithm are applied to the raw optical images to extract features specific to the three classes to be classified, and the backpropagation algorithm (BPA) is then used for classification. In this work, we used 140 optical images belonging to the three classes. The ANN classifier showed an average accuracy of 93.3% in detecting normal, cataract, and post-cataract optical images. The proposed system exhibited 98% sensitivity and 100% specificity, which indicates that the results are clinically significant. The system can also be used to test the efficacy of cataract surgery by evaluating post-operative optical images.
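    The fuzzy K-means (fuzzy c-means) clustering step can be sketched on toy one-dimensional features; the three groups below are invented stand-ins for the normal, cataract, and post-surgery classes, and the update rules follow the standard fuzzy c-means formulation rather than the study's exact implementation:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and soft memberships."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]   # init from data points
    for _ in range(iters):
        # Soft membership of each sample to each center, u ~ d^(-2/(m-1)).
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        # Centers as membership-weighted means.
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U

# Toy 1-D "optical image features" in three well-separated groups (invented).
rng = np.random.default_rng(1)
X = np.concatenate([np.full(20, 0.0), np.full(20, 5.0), np.full(20, 10.0)])[:, None]
X = X + 0.1 * rng.standard_normal(X.shape)
centers, U = fuzzy_cmeans(X, c=3)
print(sorted(round(float(v), 1) for v in centers.ravel()))
```

    The resulting membership vectors (or distances to the learned centers) would then serve as inputs to the backpropagation classifier.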

  12. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problem areas. An integrated vision system (IVS) uses vision algorithms from all levels of processing to support a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  13. Application of hybrid artificial fish swarm algorithm based on similar fragments in VRP

    NASA Astrophysics Data System (ADS)

    Che, Jinnuo; Zhou, Kang; Zhang, Xueyu; Tong, Xin; Hou, Lingyun; Jia, Shiyu; Zhen, Yiting

    2018-03-01

    The traditional Artificial Fish Swarm Algorithm (AFSA) suffers from declining convergence speed and computational precision toward the end of the optimization process, and from instability of its results; to address this, a hybrid AFSA based on similar fragments is proposed. AFSA enjoys clear advantages in solving complex optimization problems such as the Vehicle Routing Problem (VRP), but it has several limitations, namely low convergence speed, low precision, and unstable results. In this paper, two improvements are introduced. First, the definition of distance between artificial fish is changed and their field of vision is enlarged, which improves speed and precision when solving VRP. Second, the artificial bee colony algorithm (ABC) is mixed into AFSA: the population of artificial fish is initialized by ABC, which addresses the instability of results to some extent. The experimental results demonstrate that the optimal solution found by the hybrid AFSA approaches the optimal solutions of the standard benchmark database more closely than those of the other two algorithms. In conclusion, the hybrid algorithm can effectively mitigate the instability of results and the late-stage decline in convergence speed and computational precision.
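    The "prey" behaviour at the heart of AFSA, and the role of the vision-field parameter the authors enlarge, can be sketched for a toy one-dimensional minimization. The function, swarm size, and parameter values are all invented; a VRP variant would operate on routes rather than real numbers:

```python
import numpy as np

def afsa_minimize(f, n_fish=20, visual=1.0, step=0.5, tries=5, iters=200, seed=0):
    """Bare-bones AFSA 'prey' behaviour: each fish samples candidate points
    within its visual range and steps toward any better one it finds."""
    rng = np.random.default_rng(seed)
    fish = rng.uniform(-10.0, 10.0, n_fish)          # random initial swarm
    for _ in range(iters):
        for i in range(n_fish):
            for _ in range(tries):
                cand = fish[i] + visual * rng.uniform(-1.0, 1.0)
                if f(cand) < f(fish[i]):             # improvement within view
                    fish[i] += step * np.sign(cand - fish[i]) * rng.random()
                    break
    return fish[np.argmin([f(x) for x in fish])]     # best fish in the swarm

best = afsa_minimize(lambda x: (x - 3.0) ** 2)       # minimum at x = 3
print(best)
```

    The hybrid algorithm in the paper additionally seeds the initial `fish` population with solutions produced by ABC instead of drawing them uniformly at random.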

  14. Engineering a Light-Attenuating Artificial Iris

    PubMed Central

    Shareef, Farah J.; Sun, Shan; Kotecha, Mrignayani; Kassem, Iris; Azar, Dimitri; Cho, Michael

    2016-01-01

    Purpose Discomfort from light exposure leads to photophobia, glare, and poor vision in patients with congenital or trauma-induced iris damage. Commercial artificial iris lenses are static in nature to provide aesthetics without restoring the natural iris's dynamic response to light. A new photo-responsive artificial iris was therefore developed using a photochromic material with self-adaptive light transmission properties and encased in a transparent biocompatible polymer matrix. Methods The implantable artificial iris was designed and engineered using Photopia, a class of photo-responsive materials (termed naphthopyrans) embedded in polyethylene. Photopia was reshaped into annular disks that were spin-coated with polydimethylsiloxane (PDMS) to form our artificial iris lens of controlled thickness. Results Activated by UV and blue light in approximately 5 seconds with complete reversal in less than 1 minute, the artificial iris demonstrates graded attenuation of up to 40% of visible and 60% of UV light. These optical characteristics are suitable to reversibly regulate the incident light intensity. In vitro cell culture experiments showed up to 60% cell death within 10 days of exposure to Photopia, but no significant cell death was observed when cells were cultured with the artificial iris with protective encapsulation. Nuclear magnetic resonance spectroscopy confirmed these results, as there was no apparent leakage of potentially toxic photochromic material from the ophthalmic device. Conclusions Our artificial iris lens mimics the functionality of the natural iris by attenuating light intensity entering the eye with its rapid reversible change in opacity, and thus potentially provides an improved treatment option for patients with iris damage. PMID:27116547

  15. Penguin colony attendance under artificial lights for ecotourism.

    PubMed

    Rodríguez, Airam; Holmberg, Ross; Dann, Peter; Chiaradia, André

    2018-03-30

    Wildlife watching is an emerging ecotourism activity around the world. In Australia and New Zealand, night viewing of little penguins attracts hundreds of thousands of visitors per year. As penguins start coming ashore after sunset, artificial lighting is essential to allow visitors to view them in the dark. This alteration of the nightscape warrants investigation for any potential effects of artificial lighting on penguin behavior. We experimentally tested how penguins respond to different light wavelengths (colors) and intensities to examine effects on the colony attendance behavior at two sites on Phillip Island, Australia. At one site, nocturnal artificial illumination has been used for penguin viewing for decades, whereas at the other site, the only light is from the natural night sky. Light intensity did not affect colony attendance behaviors of penguins at the artificially lit site, probably due to penguin habituation to lights. At the not previously lit site, penguins preferred lit paths over dark paths to reach their nests. Thus, artificial light might enhance penguin vision at night and consequently it might reduce predation risk and energetic costs of locomotion through obstacle and path detection. Although penguins are faithful to their path, they can be drawn to artificial lights at small spatial scale, so light pollution could attract penguins to undesirable lit areas. When artificial lighting is required, we recommend keeping lighting as dim and time-restricted as possible to mitigate any negative effects on the behavior of penguins and their natural habitat. © 2018 Wiley Periodicals, Inc.

  16. Engineering a Light-Attenuating Artificial Iris.

    PubMed

    Shareef, Farah J; Sun, Shan; Kotecha, Mrignayani; Kassem, Iris; Azar, Dimitri; Cho, Michael

    2016-04-01

    Discomfort from light exposure leads to photophobia, glare, and poor vision in patients with congenital or trauma-induced iris damage. Commercial artificial iris lenses are static in nature to provide aesthetics without restoring the natural iris's dynamic response to light. A new photo-responsive artificial iris was therefore developed using a photochromic material with self-adaptive light transmission properties and encased in a transparent biocompatible polymer matrix. The implantable artificial iris was designed and engineered using Photopia, a class of photo-responsive materials (termed naphthopyrans) embedded in polyethylene. Photopia was reshaped into annular disks that were spin-coated with polydimethylsiloxane (PDMS) to form our artificial iris lens of controlled thickness. Activated by UV and blue light in approximately 5 seconds with complete reversal in less than 1 minute, the artificial iris demonstrates graded attenuation of up to 40% of visible and 60% of UV light. These optical characteristics are suitable to reversibly regulate the incident light intensity. In vitro cell culture experiments showed up to 60% cell death within 10 days of exposure to Photopia, but no significant cell death was observed when cells were cultured with the artificial iris with protective encapsulation. Nuclear magnetic resonance spectroscopy confirmed these results, as there was no apparent leakage of potentially toxic photochromic material from the ophthalmic device. Our artificial iris lens mimics the functionality of the natural iris by attenuating light intensity entering the eye with its rapid reversible change in opacity, and thus potentially provides an improved treatment option for patients with iris damage.

  17. Object knowledge modulates colour appearance.

    PubMed

    Witzel, Christoph; Valkova, Hanna; Hansen, Thorsten; Gegenfurtner, Karl R

    2011-01-01

    We investigated the memory colour effect for colour diagnostic artificial objects. Since knowledge about these objects and their colours has been learned in everyday life, these stimuli allow the investigation of the influence of acquired object knowledge on colour appearance. These investigations are relevant for questions about how object and colour information in high-level vision interact as well as for research about the influence of learning and experience on perception in general. In order to identify suitable artificial objects, we developed a reaction time paradigm that measures (subjective) colour diagnosticity. In the main experiment, participants adjusted sixteen such objects to their typical colour as well as to grey. If the achromatic object appears in its typical colour, then participants should adjust it to the opponent colour in order to subjectively perceive it as grey. We found that knowledge about the typical colour influences the colour appearance of artificial objects. This effect was particularly strong along the daylight axis.

  18. Humanlike robots: the upcoming revolution in robotics

    NASA Astrophysics Data System (ADS)

    Bar-Cohen, Yoseph

    2009-08-01

    Humans have always sought to imitate the human appearance, functions, and intelligence. Human-like robots, for many years the stuff of science fiction, are increasingly becoming an engineering reality thanks to many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots without human shape, such as the Roomba vacuum cleaner and robotic lawnmowers, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as it advances, including the need to prevent accidents, deliberate harm, or use in crime. This paper reviews the state of the art toward the ultimate goal of biomimetics, the development of humanlike robots, together with its potential and its challenges.

  19. General visual robot controller networks via artificial evolution

    NASA Astrophysics Data System (ADS)

    Cliff, David; Harvey, Inman; Husbands, Philip

    1993-08-01

    We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
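    The evolutionary loop described above can be sketched as a toy genetic algorithm evolving the two weights of a single linear "neuron" that must turn a simulated vehicle toward the brighter of two photoreceptors. The sensor values, fitness function, and GA settings are invented for illustration; the paper's controllers are recurrent dynamical networks evaluated on real robot behaviour:

```python
import numpy as np

# Fixed batch of (left, right) light-sensor readings and the desired turn
# direction (+1 = turn left, -1 = turn right); values are invented.
sensors = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
targets = np.sign(sensors[:, 0] - sensors[:, 1])

def fitness(w):
    """Fraction of trials on which the 2-weight 'neuron' turns correctly."""
    return float(np.mean(np.sign(sensors @ w) == targets))

rng = np.random.default_rng(1)
pop = rng.standard_normal((30, 2))                    # random weight genomes
for _ in range(40):                                   # generations
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)][-10:]           # truncation selection
    kids = parents[rng.integers(0, 10, 20)] + 0.1 * rng.standard_normal((20, 2))
    pop = np.vstack([parents, kids])                  # elitism + mutation

best = pop[np.argmax([fitness(w) for w in pop])]
print(fitness(best))
```

    Note that nothing in the fitness function says "use vision"; selection simply favours genomes that happen to exploit the sensor signal, which is the opportunistic effect the authors report.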

  20. Humanlike Robots - The Upcoming Revolution in Robotics

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph

    2009-01-01

    Humans have always sought to imitate the human appearance, functions, and intelligence. Human-like robots, for many years the stuff of science fiction, are increasingly becoming an engineering reality thanks to many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots without human shape, such as the Roomba vacuum cleaner and robotic lawnmowers, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as it advances, including the need to prevent accidents, deliberate harm, or use in crime. This paper reviews the state of the art toward the ultimate goal of biomimetics, the development of humanlike robots, together with its potential and its challenges.

  1. Object knowledge modulates colour appearance

    PubMed Central

    Witzel, Christoph; Valkova, Hanna; Hansen, Thorsten; Gegenfurtner, Karl R

    2011-01-01

    We investigated the memory colour effect for colour diagnostic artificial objects. Since knowledge about these objects and their colours has been learned in everyday life, these stimuli allow the investigation of the influence of acquired object knowledge on colour appearance. These investigations are relevant for questions about how object and colour information in high-level vision interact as well as for research about the influence of learning and experience on perception in general. In order to identify suitable artificial objects, we developed a reaction time paradigm that measures (subjective) colour diagnosticity. In the main experiment, participants adjusted sixteen such objects to their typical colour as well as to grey. If the achromatic object appears in its typical colour, then participants should adjust it to the opponent colour in order to subjectively perceive it as grey. We found that knowledge about the typical colour influences the colour appearance of artificial objects. This effect was particularly strong along the daylight axis. PMID:23145224

  2. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, the new architectures must be capable of running the massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore’s law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”

  3. Cross-Modal Correspondence Among Vision, Audition, and Touch in Natural Objects: An Investigation of the Perceptual Properties of Wood.

    PubMed

    Kanaya, Shoko; Kariya, Kenji; Fujisaki, Waka

    2016-10-01

    Certain systematic relationships are often assumed between information conveyed from multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned in daily experience of the natural environment. If so, we would observe cross-modal correspondences not only in the perception of artificial stimuli but also in the perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perceptions of the material properties of wood using vision, audition, and touch. We compared participant evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of the wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition-touch comparison, and for two of the three properties in the vision-touch comparison. By contrast, no properties exhibited significant positive correlations in the vision-audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning is apparently dependent on the particular combination of sensory modalities involved. © The Author(s) 2016.
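    The reanalysis amounts to correlating per-block property ratings across modality pairs. A sketch with invented ratings for one property (n = 8 toy wood blocks; the real study used its own stimuli and statistics):

```python
import numpy as np

# Hypothetical ratings of the same blocks on one property (e.g. smoothness),
# judged separately via touch, audition, and vision. All numbers invented.
touch    = np.array([1.2, 2.5, 3.1, 4.0, 2.2, 3.6, 1.8, 4.4])
audition = np.array([1.0, 2.8, 2.9, 4.2, 2.0, 3.9, 1.5, 4.1])
vision   = np.array([3.3, 1.1, 2.0, 2.9, 4.0, 1.7, 3.5, 2.4])

def pearson(a, b):
    """Pearson correlation between two rating vectors."""
    return float(np.corrcoef(a, b)[0, 1])

print(pearson(touch, audition))   # strongly positive in this toy data
print(pearson(vision, audition))  # negative in this toy data
```

    In the study, a significant positive correlation for a modality pair was taken as evidence of a learned cross-modal correspondence for that property.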

  4. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision

    DTIC Science & Technology

    1991-06-04

    1972). 2. M. Wertheimer, "Experimentelle Studien über das Sehen von Bewegung," Z. für Psych. 61, 161-265 (1912). Translated in part in Classics in...REFERENCES (Continued) 17. J.P.H. van Santen and G. Sperling, "Elaborated Reichardt detectors," J. Opt. Soc. America A2, 300-321 (1985). 18. A.M. Waxman. J

  5. Testing meta tagger

    DTIC Science & Technology

    2017-12-21

    rank , and computer vision. Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on...Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.[1] Arthur Samuel...an American pioneer in the field of computer gaming and artificial intelligence, coined the term "Machine Learning " in 1959 while at IBM[2]. Evolved

  6. vitisFlower®: Development and Testing of a Novel Android-Smartphone Application for Assessing the Number of Grapevine Flowers per Inflorescence Using Artificial Vision Techniques.

    PubMed

    Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier

    2015-08-28

    Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the number of flowers in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower(®), first guides the user to appropriately take an inflorescence photo using the smartphone's camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower(®) has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of the flowers in the captures were found, with a precision exceeding 94%. Additionally, the application's efficiency on four different devices covering a wide range of the market's spectrum was also studied. The results of this benchmarking study showed significant differences among devices, while indicating that the application is efficiently usable even with low-range devices. vitisFlower(®) is one of the first viticulture applications currently freely available on Google Play.
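    Once an inflorescence photo has been thresholded to a binary flower mask, the counting step reduces to connected-component labeling. A dependency-free sketch (the real application uses OpenCV; the toy "image" and blob layout below are invented):

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected foreground regions in a binary image, a stand-in
    for counting detected flowers after thresholding."""
    mask = mask.astype(bool).copy()
    h, w = mask.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                count += 1
                stack = [(y, x)]            # flood-fill this blob
                mask[y, x] = False
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                            mask[ny, nx] = False
                            stack.append((ny, nx))
    return count

# Toy "threshold result": three bright blobs on a dark background.
img = np.zeros((10, 10))
img[1:3, 1:3] = 1
img[5:7, 4:7] = 1
img[8, 8] = 1
print(count_blobs(img > 0.5))  # → 3
```

    In OpenCV the same step would typically be a call to `cv2.connectedComponents` on the thresholded mask, with size filtering to reject non-flower blobs.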

  7. A high-throughput screening approach to discovering good forms of biologically inspired visual representation.

    PubMed

    Pinto, Nicolas; Doukhan, David; DiCarlo, James J; Cox, David D

    2009-11-01

    While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphics cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
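    The screening loop itself is simple: enumerate (or randomly sample) model instantiations, score each, and keep the most promising for further analysis. The parameter names and toy scoring function below are invented stand-ins for a full train-and-evaluate run of a vision model:

```python
import itertools

# Hypothetical search space over model "hyperparameters" (names invented).
space = {
    "units": [32, 64, 128],     # units per layer
    "pool": [2, 3, 5],          # pooling kernel size
    "norm_exp": [1.0, 2.0],     # normalization exponent
}

def evaluate(cfg):
    """Stand-in for training and testing one model instantiation; a real
    screen would run the full vision model here. Deterministic toy score."""
    return cfg["units"] / 128 - abs(cfg["pool"] - 3) * 0.1 + cfg["norm_exp"] * 0.05

# Screen every instantiation (random subsampling scales this to huge spaces),
# then rank by score and keep the top candidates.
configs = [dict(zip(space, vals)) for vals in itertools.product(*space.values())]
ranked = sorted(configs, key=evaluate, reverse=True)
print(ranked[0])
```

    The paper's contribution is making `evaluate` cheap enough, via GPU/Cell stream processing, that thousands of such instantiations can be screened.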

  8. A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation

    PubMed Central

    Pinto, Nicolas; Doukhan, David; DiCarlo, James J.; Cox, David D.

    2009-01-01

    While many models of biological object recognition share a common set of “broad-stroke” properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model—e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct “parts” have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphics cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision. PMID:19956750

  9. Lameness scoring system for dairy cows using force plates and artificial intelligence.

    PubMed

    Ghotoorlar, S Mokaram; Ghamsari, S Mehdi; Nowrouzian, I; Ghotoorlar, S Mokaram; Ghidary, S Shiry

    2012-02-04

    Lameness scoring is a routine procedure in the dairy industry to screen herds for new cases of lameness. Subjective lameness scoring, the most popular lameness detection and screening method in dairy herds, has several limitations. These include low intra-observer and inter-observer agreement and the discrete nature of the scores, which limits their use in monitoring lameness. The aim of this study was to develop an automated lameness scoring system, comparable with conventional subjective lameness scoring, by means of artificial neural networks. The system is composed of four balanced force plates installed in a hoof-trimming box. A group of 105 dairy cows was used for the study. Twenty-three features extracted from ground reaction force (GRF) data were used in a training process performed on 60 per cent of the data; the remaining 40 per cent were used to test the trained system. Repeatability of the lameness scoring system was determined from GRF samples of 25 cows, captured at two different times from the same animals. The mean sd was 0.31 and the mean coefficient of variation was 14.55 per cent, which represents high repeatability in comparison with subjective vision-based scoring methods. Although the highest sensitivity and specificity values were seen in locomotion score groups 1 and 4, the automatic lameness system was both sensitive and specific in all groups. The sensitivity and specificity were higher than 72 per cent in locomotion score groups 1 to 4, and the system was 100 per cent specific and 50 per cent sensitive for group 5.
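    A sketch of the kind of features that can be read off force-plate traces. The traces and the asymmetry measure below are invented illustrations, not the study's actual 23-feature set:

```python
import numpy as np

def grf_features(forces):
    """Simple per-limb features from ground-reaction-force traces
    (hypothetical stand-ins for the study's feature set)."""
    forces = np.asarray(forces, dtype=float)
    peak = forces.max(axis=1)          # peak vertical force per limb
    mean = forces.mean(axis=1)         # mean force per limb
    share = peak / peak.sum()          # weight-bearing share per limb
    asym = abs(share[2] - share[3])    # hind-limb load asymmetry
    return peak, mean, share, asym

# Toy traces for 4 limbs over one stance phase; the right-hind limb
# (row 3) is unloaded, as expected for a cow avoiding a painful claw.
t = np.linspace(0.0, 1.0, 100)
base = np.sin(np.pi * t)
forces = np.vstack([2000 * base, 2000 * base, 2200 * base, 800 * base])
peak, mean, share, asym = grf_features(forces)
print(round(float(asym), 2))
```

    Feature vectors of this kind, one per cow, are what the artificial neural network maps to the five-point locomotion score.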

  10. Clinical Tests of Ultra-Low Vision Used to Evaluate Rudimentary Visual Perceptions Enabled by the BrainPort Vision Device.

    PubMed

    Nau, Amy; Bach, Michael; Fisher, Christopher

    2013-01-01

    We evaluated whether existing ultra-low vision tests are suitable for measuring outcomes using sensory substitution. The BrainPort is a vision assist device coupling a live video feed with an electrotactile tongue display, allowing a user to gain information about their surroundings. We enrolled 30 adult subjects (age range 22-74) divided into two groups. Our blind group included 24 subjects ( n = 16 males and n = 8 females, average age 50) with light perception or worse vision. Our control group consisted of six subjects ( n = 3 males, n = 3 females, average age 43) with healthy ocular status. All subjects performed 11 computer-based psychophysical tests from three programs: Basic Assessment of Light Motion, Basic Assessment of Grating Acuity, and the Freiburg Vision Test as well as a modified Tangent Screen. Assessments were performed at baseline and again using the BrainPort after 15 hours of training. Most tests could be used with the BrainPort. Mean success scores increased for all of our tests except contrast sensitivity. Increases were statistically significant for tests of light perception (8.27 ± 3.95 SE), time resolution (61.4% ± 3.14 SE), light localization (44.57% ± 3.58 SE), grating orientation (70.27% ± 4.64 SE), and white Tumbling E on a black background (2.49 logMAR ± 0.39 SE). Motion tests were limited by BrainPort resolution. Tactile-based sensory substitution devices are amenable to psychophysical assessments of vision, even though traditional visual pathways are circumvented. This study is one of many that will need to be undertaken to achieve a common outcomes infrastructure for the field of artificial vision.

  11. Looking back, looking forward and aiming higher: the next generation's visions of the next 50 years in space

    NASA Astrophysics Data System (ADS)

    Thakore, B.; Frierson, T.; Coderre, K.; Lukaszczyk, A.; Karl, A.

    2009-04-01

    This paper outlines the response of students and young space professionals on the occasion of the 50th Anniversary of the first artificial satellite and the 40th anniversary of the Outer Space Treaty. The contribution has been coordinated by the Space Generation Advisory Council (SGAC) in support of the United Nations Programme on Space Applications. It follows consultation of the SGAC community through a series of meetings, online discussions and online surveys. The first two online surveys collected over 750 different visions from the international community, contributed by approximately 276 young people from over 28 countries, and build on previous SGAC policy contributions. A summary of these results was presented as the top 10 visions of today's youth as an invited input to world space leaders gathered at the Symposium on "The future of space exploration: Solutions to earthly problems" held in Boston, USA, April 12-14, 2007, and at the United Nations Committee on the Peaceful Uses of Outer Space in May 2007. These key visions called for extending humanity's reach beyond this planet, both physically and intellectually. They were themed into three main categories: • Improvement of Human Survival Probability - sustained exploration to become a multi-planet species, humans to Mars, new treaty structures to ensure a secure space environment, etc • Improvement of Human Quality of Life and the Environment - new political systems or astrocracy, benefits of tele-medicine, tele-education, and commercialization of space, new energy and resources: space solar power, etc. • Improvement of Human Knowledge and Understanding - complete survey of extinct and extant life forms, use of space data for advanced environmental monitoring, etc. 
This paper summarizes the outcomes of a further online survey and presents key recommendations given by international youth advocates on further steps that space agencies and organizations could take to make the top 10 visions a reality. The online discussions used to engage the youth audience are recorded and help reflect the confidence of the younger generation in these visions. The categories listed above are also examined further from technological, policy, and ethical perspectives. Recent activities under development to further demonstrate how space technology can help solve global challenges are discussed.

  12. Deep Learning: A Primer for Radiologists.

    PubMed

    Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An

    2017-01-01

    Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
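The core training mechanism described here, iteratively adjusting weighted connections by back-propagating a corrective error signal, can be sketched with a toy fully connected network in NumPy. The XOR task, layer sizes, and learning rate are illustrative, not from the article.

```python
import numpy as np

# Tiny fully connected network trained on XOR with plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # target outputs

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(3000):
    # Forward pass: raw inputs to predicted outputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the corrective error signal layer by layer.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # Iteratively adjust the weighted connections between nodes.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)
```

The recorded `losses` fall over training, which is the whole point of the weight-update rule; a CNN replaces the dense layers with convolution and pooling but trains the same way.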

  13. How virtual reality works: illusions of vision in "real" and virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.

    1995-04-01

    Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting through an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.

  14. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  15. A Practical Solution Using A New Approach To Robot Vision

    NASA Astrophysics Data System (ADS)

    Hudson, David L.

    1984-01-01

    Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS have lower development costs than those built on conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens robot vision to industrial applications that were previously impractical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. 
The user then had to assemble the pieces, and in most instances write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure its success. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.

  16. Field Geology/Processes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert

    1996-01-01

    The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content; surface vision; mobility; and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples in contact with analytical instruments. To support these requirements, several advanced capabilities for future development are recommended. Capabilities include near-infrared reflectance spectroscopy, hyper-spectral imaging, multispectral microscopy, artificial intelligence in support of imaging, x ray diffraction, x ray fluorescence, and rock chipping.

  17. Feature recognition and detection for ancient architecture based on machine vision

    NASA Astrophysics Data System (ADS)

    Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng

    2018-03-01

    Ancient architecture has very high historical and artistic value. Ancient buildings carry a wide variety of textures and decorative paintings, which hold a great deal of historical meaning, so the study and cataloguing of these compositional and decorative features plays an important role in subsequent research. Until recently, however, such components have been catalogued mainly by hand, which is inefficient and consumes a great deal of labor and time. Today, supported by big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of surface-texture images of ancient building components are classified manually to form a sample set. Then a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.

  18. Visual Search Performance in Patients with Vision Impairment: A Systematic Review.

    PubMed

    Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva

    2017-11-01

    Patients with visual impairment constantly face challenges to achieving an independent and productive life, which depends on both good visual discrimination and search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using the electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy-six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages and both sexes, with sample sizes ranging from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, either artificially induced or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.

  19. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified to implement them in vision systems. The subject far exceeds the limits of a journal paper, so only the most important aspects are discussed, along with an overview of the major areas of application for colorimetric vision systems. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
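One basic colorimetric principle such systems must respect is that raw camera RGB is device-dependent and nonlinear: values should be linearized and transformed into a device-independent space such as CIE XYZ before colors are compared. A sketch of the standard sRGB-to-XYZ (D65) conversion, offered as background rather than as this paper's design:

```python
def srgb_to_xyz(r, g, b):
    """Convert 8-bit sRGB values to CIE XYZ (D65 white point)."""
    def linearize(c):
        # Undo the sRGB gamma encoding (IEC 61966-2-1).
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Standard sRGB-to-XYZ matrix for the D65 white point.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z
```

For reference white, `srgb_to_xyz(255, 255, 255)`, the luminance component Y comes out at approximately 1.0, as expected for the white point.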

  20. The influence of artificial scotomas on eye movements during visual search.

    PubMed

    Cornelissen, Frans W; Bruin, Klaas J; Kooijman, Aart C

    2005-01-01

    Fixation durations are normally adapted to the difficulty of the foveal analysis task. We examine to what extent artificial central and peripheral visual field defects interfere with this adaptation process. Subjects performed a visual search task while their eye movements were registered. The latter were used to drive a real-time gaze-dependent display that was used to create artificial central and peripheral visual field defects. Recorded eye movements were used to determine saccadic amplitude, number of fixations, fixation durations, return saccades, and changes in saccade direction. For central defects, although fixation duration increased with the size of the absolute central scotoma, this increase was too small to keep recognition performance optimal, evident from an associated increase in the rate of return saccades. Providing a relatively small amount of visual information in the central scotoma did substantially reduce subjects' search times but not their fixation durations. Surprisingly, reducing the size of the tunnel also prolonged fixation duration for peripheral defects. This manipulation also decreased the rate of return saccades, suggesting that the fixations were prolonged beyond the duration required by the foveal task. Although we find that adaptation of fixation duration to task difficulty clearly occurs in the presence of artificial scotomas, we also find that such field defects may render the adaptation suboptimal for the task at hand. Thus, visual field defects may not only hinder vision by limiting what the subject sees of the environment but also by limiting the visual system's ability to program efficient eye movements. We speculate this is because of how visual field defects bias the balance between saccade generation and fixation stabilization.
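Metrics such as fixation duration are typically derived from gaze recordings by segmenting samples into fixations and saccades. A minimal velocity-threshold sketch follows; the threshold, the 250 Hz sampling interval, and the gaze trace are all invented for illustration, and the study's actual event-detection method may differ.

```python
def fixation_durations(gaze, speed_threshold=10.0, dt=0.004):
    """Label samples as fixation (speed below threshold) or saccade,
    returning the duration in seconds of each fixation run."""
    durations, run = [], 0
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        if speed < speed_threshold:
            run += 1                      # still fixating
        elif run:
            durations.append(run * dt)    # saccade ends the fixation
            run = 0
    if run:
        durations.append(run * dt)
    return durations

# Hypothetical gaze trace: fixation, a 10-sample saccade, second fixation.
gaze = ([(0.0, 0.0)] * 50
        + [(i * 0.1, 0.0) for i in range(1, 11)]
        + [(1.0, 0.0)] * 50)
durs = fixation_durations(gaze)
```

From `durs`, mean fixation duration, fixation counts, and related measures such as return-saccade rates can then be tabulated per condition.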

  1. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  2. Evaluation of endothelial mucin layer thickness after phacoemulsification with next generation ophthalmic irrigating solution.

    PubMed

    Ghate, Deepta A; Holley, Glenn; Dollinger, Harli; Bullock, Joseph P; Markwardt, Kerry; Edelhauser, Henry F

    2008-10-01

    To evaluate human corneal endothelial mucin layer thickness and ultrastructure after phacoemulsification and irrigation-aspiration with either next generation ophthalmic irrigating solution (NGOIS) or BSS PLUS. Paired human corneas were mounted in an artificial anterior chamber, exposed to 3 minutes of continuous ultrasound (US) at 80% power using the Alcon SERIES 20000 LEGACY surgical system (n = 9) or to 2 minutes of pulsed US at 50% power, 50% of the time at 20 pps using the Alcon INFINITI Vision System (n = 5), and irrigated with 250 mL of either NGOIS or BSS PLUS. A control group of paired corneas did not undergo phacoemulsification or irrigation-aspiration (n = 5). Corneas were divided and fixed for mucin staining or transmission electron microscopy. Mucin layer thickness was measured on the transmission electron microscopy prints. The mucin layer thickness in the continuous phaco group was 0.77 ± 0.02 μm (mean ± SE) with NGOIS and 0.51 ± 0.01 μm with BSS PLUS (t test, P < 0.001). The mucin layer thickness in the pulsed phaco group was 0.79 ± 0.02 μm with NGOIS and 0.54 ± 0.01 μm with BSS PLUS (P < 0.001). The mucin layer thickness in the untreated control group was 0.72 ± 0.02 μm. The endothelial ultrastructure was normal in all corneas. In this in vitro corneal model, NGOIS, due to its lower surface tension and higher viscosity, preserved endothelial mucin layer thickness better than BSS PLUS with both the INFINITI Vision System (pulsed US) and the LEGACY surgical system (continuous US).

  3. Online aptitude automatic surface quality inspection system for hot rolled strips steel

    NASA Astrophysics Data System (ADS)

    Lin, Jin; Xie, Zhi-jiang; Wang, Xue; Sun, Nan-Nan

    2005-12-01

    Defects on the surface of hot-rolled steel strips are a main factor in evaluating strip quality. An improved image-recognition algorithm is used to extract features of surface defects. Based on machine vision and artificial neural networks, a defect-recognition method is established to identify defects on the strip surface. On this basis, a surface-inspection system with advanced image-processing algorithms for hot-rolled strips was developed. Two different lighting arrangements were prepared, and CCD line-scan cameras were used to image the strip surface on line. An on-line capability to diagnose and grade the strip surface was developed, analyzing and classifying defects such as iron-oxide scale, damage, and stamp marks on both the top and bottom surfaces of the strip. The false-accept and false-reject rates do not exceed 5%. The system has been applied at several large domestic steel enterprises. Experiments proved that this method is feasible and effective.
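The quoted acceptance criterion, error rates not exceeding 5%, corresponds to the standard false-accept and false-reject rates of a binary defect detector. A small sketch with hypothetical inspection labels:

```python
def error_rates(predicted, actual):
    """False-accept rate (defect flagged on a clean surface) and
    false-reject rate (real defect missed) for a defect detector."""
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    negatives = sum(not a for a in actual)
    positives = sum(1 for a in actual if a)
    return fp / negatives, fn / positives

# Hypothetical inspection run: True = surface defect present.
actual    = [True, False, False, True, False, False, True, False]
predicted = [True, False, True,  True, False, False, False, False]
fpr, fnr = error_rates(predicted, actual)
```

A deployed system would compute these rates over a large labeled validation set and compare them against the 5% ceiling before going on line.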

  4. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors, without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied to solving real-world machine vision problems.

  5. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

    Color image processing in machine vision systems has not gained general acceptance. Most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require a constant color temperature lighting for reliable image analysis.

  6. Artificial Intelligence in Surgery: Promises and Perils.

    PubMed

    Hashimoto, Daniel A; Rosman, Guy; Rus, Daniela; Meireles, Ozanan R

    2018-07-01

    The aim of this review was to summarize major topics in artificial intelligence (AI), including their applications and limitations in surgery. This paper reviews the key capabilities of AI to help surgeons understand and critically evaluate new AI applications and to contribute to new developments. AI is composed of various subfields that each provide potential solutions to clinical problems. Each of the core subfields of AI reviewed in this piece has also been used in other industries such as the autonomous car, social networks, and deep learning computers. A review of AI papers across computer science, statistics, and medical sources was conducted to identify key concepts and techniques within AI that are driving innovation across industries, including surgery. Limitations and challenges of working with AI were also reviewed. Four main subfields of AI were defined: (1) machine learning, (2) artificial neural networks, (3) natural language processing, and (4) computer vision. Their current and future applications to surgical practice were introduced, including big data analytics and clinical decision support systems. The implications of AI for surgeons and the role of surgeons in advancing the technology to optimize clinical effectiveness were discussed. Surgeons are well positioned to help integrate AI into modern practice. Surgeons should partner with data scientists to capture data across phases of care and to provide clinical context, for AI has the potential to revolutionize the way surgery is taught and practiced with the promise of a future optimized for the highest quality patient care.

  7. A Comparative Study : Microprogrammed Vs Risc Architectures For Symbolic Processing

    NASA Astrophysics Data System (ADS)

    Heudin, J. C.; Metivier, C.; Demigny, D.; Maurin, T.; Zavidovique, B.; Devos, F.

    1987-05-01

    It is often claimed that conventional computers are not well suited for human-like tasks: Vision (Image Processing), Intelligence (Symbolic Processing), etc. In the particular case of Artificial Intelligence, dynamic type-checking is one example of a basic task that must be improved. The solution implemented in most Lisp workstations consists of a microprogrammed architecture with a tagged memory. Another way to gain efficiency is to design an instruction set well suited to symbolic processing, which reduces the semantic gap between the high-level language and the machine code. In this framework, the RISC concept provides a convenient approach to studying new architectures for symbolic processing. This paper compares both approaches and describes our project of designing a compact symbolic processor for Artificial Intelligence applications.
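The tagged-memory approach can be illustrated in software: a few tag bits packed into each word let a dynamic type check compile down to a single mask-and-compare. The tag layout below is invented for illustration and is not any actual Lisp machine's format.

```python
TAG_BITS = 3                       # low bits of each word hold the type tag
TAG_MASK = (1 << TAG_BITS) - 1
TAG_INT, TAG_CONS, TAG_SYMBOL = 0, 1, 2

def box(value, tag):
    """Pack a type tag into the low bits of a word, as tagged
    architectures do in hardware."""
    return (value << TAG_BITS) | tag

def has_tag(word, tag):
    """Dynamic type check: one mask and one compare."""
    return (word & TAG_MASK) == tag

def unbox(word):
    """Strip the tag to recover the payload."""
    return word >> TAG_BITS
```

On a microprogrammed tagged machine this check happens in parallel with the operation itself; on a RISC it becomes an explicit but cheap instruction pair, which is the trade-off the paper examines.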

  8. Technology transfer from the science of medicine to the real world: the potential role played by artificial adaptive systems.

    PubMed

    Grossi, Enzo

    2007-01-01

    The author describes a refiguration of medical thought that originates from nonlinear dynamics and chaos theory. The coupling of computer science and these new theoretical bases coming from complex systems mathematics allows the creation of "intelligent" agents capable of adapting themselves dynamically to problems of high complexity: the artificial neural networks (ANNs). ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on an individual basis and not as average trends. These tools can allow a more efficient technology transfer from the science of medicine to the real world, overcoming many obstacles responsible for the present translational failure. They also contribute to a new holistic vision of the human subject person, contrasting the statistical reductionism that tends to squeeze or even delete the single subject, sacrificing him to his group of belongingness. A remarkable contribution to this individual approach comes from fuzzy logic, according to which there are no sharp limits between opposite things, such as health and disease. This approach allows one to partially escape from the probability theory trap in situations where it is fundamental to express a judgement based on a single case and favor a novel humanism directed to the management of the patient as an individual subject person.
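The fuzzy-logic idea of graded rather than sharp boundaries can be shown with a simple membership function; the wellness score and its thresholds below are hypothetical, chosen only to make the ramp concrete.

```python
def membership_healthy(score, low=60.0, high=100.0):
    """Fuzzy membership in 'healthy' for a hypothetical 0-100 wellness
    score: a graded ramp instead of a sharp diagnostic cutoff."""
    if score <= low:
        return 0.0          # fully outside the set
    if score >= high:
        return 1.0          # fully inside the set
    return (score - low) / (high - low)   # partial membership in between
```

A patient scoring 80 is thus 0.5 "healthy" and 0.5 "not healthy" at once, which is exactly the kind of single-case, non-binary judgement the author argues classical probability handles poorly.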

  9. Visual Reliance for Balance Control in Older Adults Persists When Visual Information Is Disrupted by Artificial Feedback Delays

    PubMed Central

    Balasubramaniam, Ramesh

    2014-01-01

    Sensory information from our eyes, skin and muscles helps guide and correct balance. Less appreciated, however, is that delays in the transmission of sensory information between our eyes, limbs and central nervous system can exceed several tens of milliseconds. Investigating how these time-delayed sensory signals influence balance control is central to understanding the postural system. Here, we investigate how delayed visual feedback and cognitive performance influence postural control in healthy young and older adults. The task required that participants position their center of pressure (COP) in a fixed target as accurately as possible without visual feedback about their COP location (eyes-open balance), or with artificial time delays imposed on visual COP feedback. On selected trials, the participants also performed a silent arithmetic task (cognitive dual task). We separated COP time series into distinct frequency components using low and high-pass filtering routines. Visual feedback delays affected low frequency postural corrections in young and older adults, with larger increases in postural sway noted for the group of older adults. In comparison, cognitive performance reduced the variability of rapid center of pressure displacements in young adults, but did not alter postural sway in the group of older adults. Our results demonstrate that older adults prioritize vision to control posture. This visual reliance persists even when feedback about the task is delayed by several hundreds of milliseconds. PMID:24614576
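The separation of the COP time series into slow and fast components can be sketched with a simple moving-average low-pass split. The study used dedicated low- and high-pass filtering routines; the window length and synthetic signal here are arbitrary stand-ins.

```python
import numpy as np

def split_frequencies(cop, window=25):
    """Split a center-of-pressure trace into a slow (low-frequency) and a
    fast (high-frequency) component with a moving-average low-pass filter."""
    kernel = np.ones(window) / window
    low = np.convolve(cop, kernel, mode="same")   # slow postural drift
    high = cop - low                              # rapid corrective displacements
    return low, high

# Synthetic COP trace: a slow drift plus small rapid corrections.
t = np.linspace(0.0, 10.0, 1000)
cop = np.sin(0.5 * t) + 0.1 * np.sin(40.0 * t)
low, high = split_frequencies(cop)
```

By construction the two components sum back to the original trace, so sway variability can be attributed separately to the slow and fast processes, mirroring the decomposition used in the study.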

  10. Infusion of innovative technologies for mission operations

    NASA Astrophysics Data System (ADS)

    Donati, Alessandro

    2010-11-01

    The Advanced Mission Concepts and Technologies Office (Mission Technologies Office, MTO for short) at the European Space Operations Centre (ESOC) of ESA is entrusted with research and development of innovative mission operations concepts and systems, and provides operations support to special projects. Visions of future missions and requests for improvements from currently flying missions are the two major sources of inspiration for conceptualizing innovative or improved mission operations processes. These include monitoring and diagnostics, planning and scheduling, and resource management and optimization. The newly identified operations concepts are then proved by means of prototypes, built with embedded, enabling technology and deployed as shadow applications in mission operations for an extended validation phase. The technology exploited so far includes informatics, artificial intelligence and operational research branches. Recent outstanding results include artificial intelligence planning and scheduling applications for Mars Express, an advanced integrated space weather monitoring system for the Integral space telescope and a suite of growing client applications for MUST (Mission Utilities Support Tools). The research, development and validation activities at the Mission Technologies Office are performed together with a network of research institutes across Europe. The objective is to narrow the gap between enabling and innovative technology and space mission operations. The paper first addresses samples of technology infusion cases with their lessons learnt. The second part is focused on the process and the methodology used at the Mission Technologies Office to fulfill its objectives.

  11. A Model for Comparing Game Theory and Artificial Intelligence Decision Making Processes

    DTIC Science & Technology

    1989-12-01

    IV. Revisions to PALANTIR Program... 4.1 Rewritten in the C Programming Language... The PALANTIR, created by Capt Robert Palmer, had two uses: to train personnel on the use of satellites and to provide insight into the movements... zero-sum game theory approach to reach decisions (11:23). This approach is discussed further in Chapter 2. Although PALANTIR used game theory players

  12. Real-Time Mapping Using Stereoscopic Vision Optimization

    DTIC Science & Technology

    2005-03-01

    pinhole geometry... 2.8. Artificially textured scenes... 3.1. Bilbo the robot... geometry. 2.2.1 The Fundamental Matrix. The fundamental matrix (F) describes the relationship between a pair of 2D pictures of a 3D scene. This is... eight CCD cameras to compute a mesh model of the environment from a large number of overlapped 3D images. In [1,17], a range scanner is combined with a

  13. The relationship between the retinal image quality and the refractive index of defects arising in IOL: numerical analysis

    NASA Astrophysics Data System (ADS)

    Geniusz, Malwina

    2017-09-01

    The best treatment for cataract patients, which restores clear vision, is implantation of an artificial intraocular lens (IOL). The image quality of the lens has a significant impact on the quality of the patient's vision. After long exposure of the implant to an aqueous environment, defects appear in artificial lenses. The defects generated in the IOL have different refractive indices. For example, the glistening phenomenon is based on light scattering by oval microvacuoles filled with aqueous humor, whose refractive index is about 1.34. Calcium deposits are another example of lens defects and can be characterized by a refractive index of 1.63. In the presented studies, it was calculated how the difference between the refractive index of the defect and that of the lens material affects image quality. The OpticStudio Professional program (from Radiant Zemax, LLC) was used to construct a numerical model of the eye with an IOL and to calculate the characteristics of the retinal image. Retinal image quality was described by characteristics such as the Point Spread Function (PSF) and the Optical Transfer Function with amplitude and phase. The results show a strong correlation between the refractive index difference and retinal image quality.

  14. vitisFlower®: Development and Testing of a Novel Android-Smartphone Application for Assessing the Number of Grapevine Flowers per Inflorescence Using Artificial Vision Techniques

    PubMed Central

    Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier

    2015-01-01

    Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the number of flowers in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower®, first guides the user to appropriately take an inflorescence photo using the smartphone’s camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower® has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of the flowers in the captures were found, with a precision exceeding 94%. Additionally, the application’s efficiency on four different devices covering a wide range of the market was also studied. The results of this benchmarking study showed significant differences among devices, while indicating that the application is efficiently usable even with low-range devices. vitisFlower® is one of the first applications for viticulture that is currently freely available on Google Play. PMID:26343664
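    The detect-and-count step can be illustrated with a toy connected-components pass over a thresholded image. vitisFlower® itself uses OpenCV and a more elaborate pipeline; the threshold and minimum blob area below are invented for this sketch, which uses SciPy for portability:

```python
import numpy as np
from scipy import ndimage

def count_bright_blobs(img, thresh=0.5, min_area=4):
    """Count bright, roughly flower-sized blobs in a grayscale image.

    A toy stand-in for the app's detection step; `thresh` and
    `min_area` are illustrative, not the app's actual parameters.
    """
    mask = img > thresh
    labels, n = ndimage.label(mask)                     # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))  # pixels per component
    return int(np.sum(sizes >= min_area))               # drop tiny noise blobs

# synthetic "inflorescence": three bright 3x3 patches on a dark background
img = np.zeros((32, 32))
for r, c in [(5, 5), (15, 20), (25, 10)]:
    img[r:r + 3, c:c + 3] = 1.0
print(count_bright_blobs(img))  # → 3
```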

  15. Indoor Spatial Updating with Reduced Visual Information

    PubMed Central

    Legge, Gordon E.; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M.

    2016-01-01

    Purpose: Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods: Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results: With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion: If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment. PMID:26943674

  16. Indoor Spatial Updating with Reduced Visual Information.

    PubMed

    Legge, Gordon E; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M

    2016-01-01

    Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.
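    The Weber fractions reported for the room-size estimates can be computed as a mean unsigned relative error. The paper's exact error definition is not reproduced here, so this common formulation is an assumption:

```python
def weber_fraction(estimates, true_size):
    """Mean unsigned relative error of size estimates.

    A common Weber-fraction formulation; the study's exact
    definition may differ.
    """
    return sum(abs(e - true_size) / true_size for e in estimates) / len(estimates)

# hypothetical estimates of a 5 m room dimension
print(round(weber_fraction([4.2, 5.5, 6.0], 5.0), 3))  # → 0.153
```

    A value near 0.20, as reported for unrestricted vision, would mean estimates off by about 20% of the true dimension on average.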

  17. Wearable Improved Vision System for Color Vision Deficiency Correction

    PubMed Central

    Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria

    2017-01-01

    Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in a subject with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the vision color test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827

  18. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform

    PubMed Central

    Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B.

    2016-01-01

    Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing the activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is, however, a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on the time lag in the activation of nearby retinal neurons. Mimicking ganglion cells, our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Here we describe the architectural aspects, discuss the latency, scalability, and robustness properties, and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how the precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks. PMID:26909015

  19. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform.

    PubMed

    Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B

    2016-01-01

    Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing the activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is, however, a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on the time lag in the activation of nearby retinal neurons. Mimicking ganglion cells, our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Here we describe the architectural aspects, discuss the latency, scalability, and robustness properties, and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how the precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks.
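    The time-lag principle behind such detectors can be sketched in a few lines: motion direction follows from which of two neighboring sensors fires first, and apparent speed from the lag over their spacing. This is a toy digital read-out, not the analog integrate-and-fire circuit of the paper; all numbers are illustrative:

```python
def preferred_direction(t_left, t_right):
    """Barlow-Levick style read-out from the spike times of two
    neighboring photoreceptors: motion runs from the sensor that
    fired first toward the one that fired later.
    """
    if t_left < t_right:
        return "rightward"
    if t_right < t_left:
        return "leftward"
    return "ambiguous"

def speed_estimate(t_left, t_right, spacing_m):
    """Apparent speed from the activation lag of sensors spacing_m apart."""
    dt = abs(t_right - t_left)
    return spacing_m / dt if dt > 0 else float("inf")

# left sensor fires at 10 ms, right at 15 ms, sensors 1 mm apart
print(preferred_direction(0.010, 0.015))        # → rightward
print(speed_estimate(0.010, 0.015, 0.001))      # 1 mm / 5 ms → 0.2 m/s
```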

  20. Effects of realistic force feedback in a robotic assisted minimally invasive surgery system.

    PubMed

    Moradi Dalvand, Mohsen; Shirinzadeh, Bijan; Nahavandi, Saeid; Smith, Julian

    2014-06-01

    Robotic assisted minimally invasive surgery systems not only have the advantages of traditional laparoscopic procedures but also restore the surgeon's hand-eye coordination and improve the surgeon's precision by filtering hand tremors. Unfortunately, these benefits have come at the expense of the surgeon's ability to feel. Several research efforts have already attempted to restore this feature and study the effects of force feedback in robotic systems. The proposed methods and studies have some shortcomings. The main focus of this research is to overcome some of these limitations and to study the effects of force feedback in palpation in a more realistic fashion. A parallel robot assisted minimally invasive surgery system (PRAMiSS) with force feedback capabilities was employed to study the effects of realistic force feedback in palpation of artificial tissue samples. PRAMiSS is capable of actually measuring the tip/tissue interaction forces directly from the surgery site. Four sets of experiments using only vision feedback, only force feedback, simultaneous force and vision feedback and direct manipulation were conducted to evaluate the role of sensory feedback from sideways tip/tissue interaction forces with a scale factor of 100% in characterising tissues of varying stiffness. Twenty human subjects were involved in the experiments for at least 1440 trials. Friedman and Wilcoxon signed-rank tests were employed to statistically analyse the experimental results. Providing realistic force feedback in robotic assisted surgery systems improves the quality of tissue characterization procedures. Force feedback capability also increases the certainty of characterizing soft tissues compared with direct palpation using the lateral sides of index fingers. The force feedback capability can improve the quality of palpation and characterization of soft tissues of varying stiffness by restoring sense of touch in robotic assisted minimally invasive surgery operations.

  1. The infection algorithm: an artificial epidemic approach for dense stereo correspondence.

    PubMed

    Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne

    2006-01-01

    We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint that is different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
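    The epidemic-style propagation can be illustrated with a minimal breadth-first spread of disparity guesses from a few seed pixels. The real infection algorithm adds stochastic infection rules and photometric matching costs; this sketch shows only the propagation idea:

```python
from collections import deque

def propagate_disparities(shape, seeds):
    """Spread disparity guesses from seed pixels to 4-neighbors,
    epidemic style. A toy sketch of the propagation idea only; the
    actual infection algorithm uses stochastic rules and matching
    costs when deciding whether a pixel gets 'infected'.
    """
    h, w = shape
    disp = dict(seeds)          # pixel -> disparity, seeds start infected
    q = deque(seeds)
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in disp:
                disp[(nr, nc)] = disp[(r, c)]   # infect the neighbor
                q.append((nr, nc))
    return disp

# two seed matches on a tiny 4x4 image grid
disp = propagate_disparities((4, 4), {(0, 0): 2, (3, 3): 5})
print(len(disp))  # → 16 (every pixel ends up infected)
```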

  2. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, in terms of both accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
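    A nine-electrode rendering can be pictured as block-averaging an image onto a 3×3 grid. This is a generic SPV illustration of how coarse such vision is; the study itself encoded object location rather than raw brightness:

```python
import numpy as np

def phosphene_map(img, grid=(3, 3)):
    """Reduce an image to a coarse electrode grid (3x3 = nine
    'electrodes') by block-averaging brightness. Illustrative only.
    """
    h, w = img.shape
    gh, gw = grid
    crop = img[: h - h % gh, : w - w % gw]            # trim to divisible size
    return crop.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

img = np.zeros((30, 30))
img[0:10, 20:30] = 1.0                 # bright object in the upper-right
m = phosphene_map(img)
print(np.unravel_index(np.argmax(m), m.shape))  # → (0, 2): upper-right cell
```

    Even this crude map preserves the object's rough location, which is the kind of information the localization-based device feeds to the user.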

  3. Crowding during restricted and free viewing

    PubMed Central

    Wallace, Julian M.; Chiu, Michael K.; Nandy, Anirvan S.; Tjan, Bosco S.

    2013-01-01

    Crowding impairs the perception of form in peripheral vision. It is likely to be a key limiting factor of form vision in patients without central vision. Crowding has been extensively studied in normally sighted individuals, typically with a stimulus duration of a few hundred milliseconds to avoid eye movements. These restricted testing conditions do not reflect the natural behavior of a patient with central field loss. Could unlimited stimulus duration and unrestricted eye movements change the properties of crowding in any fundamental way? We studied letter identification in the peripheral vision of normally sighted observers in three conditions: (i) a fixation condition with a brief stimulus presentation of 250 ms, (ii) another fixation condition but with an unlimited viewing time, and (iii) an unrestricted eye movement condition with an artificial central scotoma and an unlimited viewing time. In all conditions, contrast thresholds were measured as a function of target-to-flanker spacing, from which we estimated the spatial extent of crowding in terms of critical spacing. We found that presentation duration beyond 250 ms had little effect on critical spacing with stable gaze. With unrestricted eye movements and a simulated central scotoma, we found a large variability in critical spacing across observers, but more importantly, the variability in critical spacing was well correlated with the variability in target eccentricity. Our results assure that the large body of findings on crowding made with briefly presented stimuli remains relevant to conditions where viewing time is unconstrained. Our results further suggest that impaired oculomotor control associated with central vision loss can confound peripheral form vision beyond the limits imposed by crowding. PMID:23563172
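    As context for the critical-spacing measurements above, a classic rule of thumb (Bouma's law) holds that critical spacing grows roughly linearly with eccentricity, with a proportionality constant near 0.4-0.5. The sketch below encodes that textbook approximation, not a value fitted by this study:

```python
def bouma_critical_spacing(eccentricity_deg, k=0.5):
    """Bouma's rule of thumb: crowding's critical spacing is roughly
    a fixed fraction k of target eccentricity (k ~ 0.4-0.5). A classic
    approximation, not a parameter from this paper.
    """
    return k * eccentricity_deg

# a target 10 degrees into the periphery
print(bouma_critical_spacing(10))  # → 5.0 degrees
```

    This linear scaling is why the observed correlation between critical spacing and target eccentricity under free viewing is expected.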

  4. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  5. Vehicle autonomous localization in local area of coal mine tunnel based on vision sensors and ultrasonic sensors

    PubMed Central

    Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-il

    2017-01-01

    This paper presents a vehicle autonomous localization method in local area of coal mine tunnel based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on UPC-A code. The global coordinates of the upper left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel’s global coordinate system can be calculated. Experiments on a straight corridor and an underground tunnel have shown that the proposed vehicle autonomous localization method is not only able to quickly recognize the barcode tags affixed to the tunnel walls, but also has relatively small average localization errors in the vehicle center point’s plane and vertical coordinates to meet autonomous unmanned vehicle positioning requirements in local area of coal mine tunnel. PMID:28141829

  6. Vehicle autonomous localization in local area of coal mine tunnel based on vision sensors and ultrasonic sensors.

    PubMed

    Xu, Zirui; Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-Il

    2017-01-01

    This paper presents a vehicle autonomous localization method in local area of coal mine tunnel based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on UPC-A code. The global coordinates of the upper left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel's global coordinate system can be calculated. Experiments on a straight corridor and an underground tunnel have shown that the proposed vehicle autonomous localization method is not only able to quickly recognize the barcode tags affixed to the tunnel walls, but also has relatively small average localization errors in the vehicle center point's plane and vertical coordinates to meet autonomous unmanned vehicle positioning requirements in local area of coal mine tunnel.
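    The "visual distance projection model" is, at its core, pinhole geometry: a landmark of known physical size appears smaller in the image the farther away it is. The sketch below is that generic relation; the paper's calibrated model may differ in detail, and all numbers are illustrative:

```python
def distance_from_tag(focal_px, tag_height_m, tag_height_px):
    """Pinhole range estimate from a landmark of known height:
    distance = focal_length * real_height / image_height.
    A generic projection model, not the paper's calibrated one.
    """
    return focal_px * tag_height_m / tag_height_px

# assumed 800 px focal length; a 0.2 m tag imaged at 40 px is 4 m away
print(distance_from_tag(800, 0.2, 40))  # → 4.0
```

    With such a range to a tag of known global coordinates on each wall, plus the ultrasonic range to the left wall, the vehicle center's 3D position can be fixed by simple triangulation.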

  7. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  8. Shade matching performance of normal and color vision-deficient dental professionals with standard daylight and tungsten illuminants.

    PubMed

    Gokce, Hasan Suat; Piskin, Bulent; Ceyhan, Dogan; Gokce, Sila Mermut; Arisan, Volkan

    2010-03-01

    The lighting conditions of the environment and visual deficiencies such as red-green color vision deficiency affect the clinical shade matching performance of dental professionals. The purpose of this study was to evaluate the shade matching performance of normal and color vision-deficient dental professionals with standard daylight and tungsten illuminants. Two sets of porcelain disc replicas of 16 shade guide tabs (VITA Lumin) were manufactured to exact L*a*b* values by using a colorimeter. Then these twin porcelain discs (13 mm x 2.4 mm) were mixed up and placed into a color-matching cabinet that standardized the lighting conditions for the observation tests. Normal and red-green color vision-deficient dental professionals were asked to match the 32 porcelain discs using standard artificial daylight D65 (high color temperature) and tungsten filament lamp light (T) (low color temperature) illuminants. The results were analyzed by repeated-measures ANOVA and paired and independent samples t tests for the differences between dental professionals and differences between the illuminants (alpha=.05). Regarding the sum of the correct shade match scores of all observations with both illuminants, the difference between normal vision and red-green color vision-deficient dental professional groups was not statistically significant (F=4.132; P=.054). However, the correct shade match scores of each group were significantly different for each illuminant (P<.005). The correct shade matching scores of normal color vision dental professionals were significantly higher with D65 illuminant (t=7.004; P<.001). Color matching scores of red-green color vision-deficient dental professionals (approximately 5.7 more pairs than with D65) were significantly higher with T illuminant (t=5.977; P<.001). 
Conclusions: Within the limitations of this study, the shade matching performance of dental professionals was affected by color vision deficiency and the color temperature of the illuminant. The color vision-deficient group was notably unsuccessful with the D65 illuminant in shade matching. In contrast, there was a significant increase in the shade matching performance of the color vision-deficient group with the T illuminant. The lower color temperature illuminant dramatically decreased the normal color vision group's correct shade matching score. (c) 2010 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  9. Human olfaction: a constant state of change-blindness

    PubMed Central

    Sela, Lee

    2010-01-01

    Paradoxically, although humans have a superb sense of smell, they don’t trust their nose. Furthermore, although human odorant detection thresholds are very low, only unusually high odorant concentrations spontaneously shift our attention to olfaction. Here we suggest that this lack of olfactory awareness reflects the nature of olfactory attention that is shaped by the spatial and temporal envelopes of olfaction. Regarding the spatial envelope, selective attention is allocated in space. Humans direct an attentional spotlight within spatial coordinates in both vision and audition. Human olfactory spatial abilities are minimal. Thus, with no olfactory space, there is no arena for olfactory selective attention. Regarding the temporal envelope, whereas vision and audition consist of nearly continuous input, olfactory input is discrete, made of sniffs widely separated in time. If similar temporal breaks are artificially introduced to vision and audition, they induce “change blindness”, a loss of attentional capture that results in a lack of awareness to change. Whereas “change blindness” is an aberration of vision and audition, the long inter-sniff-interval renders “change anosmia” the norm in human olfaction. Therefore, attentional capture in olfaction is minimal, as is human olfactory awareness. All this, however, does not diminish the role of olfaction through sub-attentive mechanisms allowing subliminal smells a profound influence on human behavior and perception. PMID:20603708

  10. Local statistics of retinal optic flow for self-motion through natural sceneries.

    PubMed

    Calow, Dirk; Lappe, Markus

    2007-12-01

    Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems that imitate human behavior, and may be useful for analyzing properties of natural vision systems.
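    The mutual-information analysis described above can be sketched with a simple histogram estimator. The data below are toy stand-ins for retinal speed, flow direction, and inverse scene depth; the variable names and parameter values are illustrative, not from the study.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of I(X;Y) in bits from paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of X
    py = pxy.sum(axis=0, keepdims=True)    # marginal of Y
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy stand-ins: retinal speed tightly coupled to inverse depth,
# flow direction statistically unrelated to depth.
rng = np.random.default_rng(0)
inv_depth = rng.random(100_000)
speed = inv_depth + 0.05 * rng.standard_normal(100_000)
direction = rng.random(100_000)

mi_speed = mutual_information(speed, inv_depth)        # large: strong dependency
mi_direction = mutual_information(direction, inv_depth)  # near zero
```

    Note that the plug-in histogram estimator is biased upward for finite samples, so the independent pair reports a small positive value rather than exactly zero.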

  11. Future Vision - Emerging Technologies and Their Transformational Potential on the Energy Industry

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2015-01-01

    Where will Digital Energy be in ten years? To look that far ahead, we need to broadly consider how artificial intelligence, robotics, big data, nanotechnology, internet-of-things and other rapidly evolving and interrelated technologies will shape mankind's future. A panel of innovative visionary leaders from inside and outside the energy industry will discuss the emerging technologies that will shape the future of industrial operations over the next decade.

  12. Anthro-Centric Multisensory Interface for Vision Augmentation/Substitution

    DTIC Science & Technology

    2011-02-01

    Alternatively, we have also implemented a touch screen mechanism that allows the user to feel the pixels under his or her fingertip via the tongue while...recognition with the optic nerve visual prosthesis . Artificial Organs, 27, 996 – 1004. Walcott, E. C., & Langdon, R. B. (2001). Short-term plasticity of...National Academy of Sciences, 102(4), 1181-1186. Weiland, J. D., Liu, W., & Humayun, M. S. (2005). Retinal prosthesis . Annual Review of Biomedical

  13. Look Sharp While Seeing Sharp

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The two scientists James B. Stephens and Dr. Charles G. Miller were tasked with studying the harmful properties of light in space, as well as the artificial radiation produced during laser and welding work, for the purpose of creating an enhanced means of eye protection in industrial welding applications. While working to apply their space research to these terrestrial applications, Stephens and Miller became engrossed with previously discovered research showing evidence that the eyes of hawks, eagles, and other birds of prey contain unique oil droplets that actually protect them from intensely radiated light rays (blue, violet, ultraviolet) while allowing vision-enhancing light rays (red, orange, green) to pass through. These oil droplets absorb short wavelength light rays which, in turn, reduce glare and provide heightened color contrast and definition for optimal visual acuity. Accordingly, birds of prey possess the ability to distinguish their targeted prey in natural surroundings and from great distances. Pairing the findings from their initial studies with what they learned from the bird studies, the scientists devised a methodology to incorporate the light-filtering/vision-enhancing dual-action benefits into a filtering system, using light-filtering dyes and tiny particles of zinc oxide. (Zinc oxide, which absorbs ultraviolet light, is also found in sunscreen lotions that protect the skin from sunburn.)

  14. A new binocular approach to the treatment of amblyopia in adults well beyond the critical period of visual development.

    PubMed

    Hess, R F; Mansouri, B; Thompson, B

    2010-01-01

    The present treatments for amblyopia are predominantly monocular, aiming to improve vision in the amblyopic eye through either patching of the fellow fixing eye or visual training of the amblyopic eye. This approach is problematic, not least because it rarely results in the establishment of binocular function. Recently it has been shown that amblyopes possess binocular cortical mechanisms for both threshold and suprathreshold stimuli. We outline a novel procedure for measuring the extent to which the fixing eye suppresses the fellow amblyopic eye, rendering what is a structurally binocular system functionally monocular. Here we show that prolonged periods of viewing (under the artificial conditions of stimuli of different contrast in each eye) during which information from the two eyes is combined lead to a strengthening of binocular vision in strabismic amblyopes and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Furthermore, in a majority of patients tested, stereoscopic function is established. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  15. Going Below Minimums: The Efficacy of Display Enhanced/Synthetic Vision Fusion for Go-Around Decisions during Non-Normal Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.

    2007-01-01

    The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.

  16. Implantation of ArtificialIris, a CustomFlex irisprosthesis, in a trauma patient with an Artisan lens

    PubMed Central

    Doroodgar, Farideh; Jabbarvand, Mahmoud; Niazi, Feizollah; Niazi, Sana; Sanginabadi, Azad

    2017-01-01

    Abstract Purpose: To evaluate probable complications of ArtificialIris implantation with an iris-fixated intraocular lens. Method: Photophobia, glare, and psychological strain during face-to-face communication in a 23-year-old man with a widespread traumatic iris defect led to the decision to implant an ArtificialIris (Humanoptics, Erlangen, Germany) under the remnant iris without removing the patient's existing Artisan lens. Results: Without any intraoperative or postoperative complications, the patient's visual acuity increased by 1 line, the endothelial cell loss was comparable with the cell loss associated with standard cataract surgery, and the anterior-chamber depth and anterior-chamber anatomy did not change. At the final follow-up examination, the mean intraocular pressure did not differ from baseline, and we achieved a high level of patient satisfaction and subjective vision improvement. We discuss the particular importance of considering the patient's expectations, the appropriate measurements, ways to perfect color evaluation, and the types of ArtificialIris products. Conclusion: The implantation of the ArtificialIris in patients with aphakic iris-supported lenses (ie, pre-existing Artisan lenses) is a feasible approach and a useful option for patients with thin irises and iris hypoplasia who are at risk of subluxation or dislocation of the posterior-chamber intraocular lens (PCIOL), and also for those with sclerally fixated PCIOLs. PMID:29137026

  17. A Haptic Guided Robotic System for Endoscope Positioning and Holding.

    PubMed

    Cabuk, Burak; Ceylan, Savas; Anik, Ihsan; Tugasaygi, Mehtap; Kizir, Selcuk

    2015-01-01

    To determine the feasibility, advantages, and disadvantages of using a robot for holding and maneuvering the endoscope in transnasal transsphenoidal surgery. The system used in this study was a Stewart Platform based robotic system that was developed by the Kocaeli University Department of Mechatronics Engineering for positioning and holding of the endoscope. After the first use on an artificial head model, the system was used on six fresh postmortem bodies that were provided by the Morgue Specialization Department of the Forensic Medicine Institute (Istanbul, Turkey). The setup required for the robotic system was easy; the registration procedure and setup of the robot took 15 minutes. Resistance was felt on the haptic arm in case of contact or friction with adjacent tissues. The adaptation process was shorter when the mouse was used to manipulate the endoscope. The endoscopic transsphenoidal approach was achieved with the robotic system, and the endoscope was guided to the sphenoid ostium with the help of the robotic arm. This robotic system can be used in endoscopic transsphenoidal surgery as an endoscope positioner and holder: the robot is able to change position easily with the help of an assistant, prevents tremor, and provides a better field of vision for work.

  18. VLSI chips for vision-based vehicle guidance

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1994-02-01

    Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.

  19. Chlorophyll derivatives enhance invertebrate red-light and ultraviolet phototaxis.

    PubMed

    Degl'Innocenti, Andrea; Rossi, Leonardo; Salvetti, Alessandra; Marino, Attilio; Meloni, Gabriella; Mazzolai, Barbara; Ciofani, Gianni

    2017-06-13

    Chlorophyll derivatives are known to enhance vision in vertebrates. They are thought to bind visual pigments (i.e., opsin apoproteins bound to retinal chromophores) directly within the retina. Consistent with previous findings in vertebrates, here we show that chlorin e6 - a chlorophyll derivative - enhances photophobicity in a flatworm (Dugesia japonica), specifically when exposed to UV radiation (λ = 405 nm) or red light (λ = 660 nm). This is the first report of chlorophyll derivatives acting as modulators of invertebrate phototaxis, and in general the first account demonstrating that they can artificially alter animal response to light at a behavioral level. Our findings show that the interaction between chlorophyll derivatives and opsins virtually concerns the vast majority of bilaterian animals, and also occurs in visual systems based on rhabdomeric (rather than ciliary) opsins.

  20. [Artificial sight: recent developments].

    PubMed

    Zeitz, O; Keserü, M; Hornig, R; Richard, G

    2009-03-01

    The implantation of electronic retina stimulators appears to be a future possibility to restore vision, at least partially, in patients with retinal degeneration. The idea of such visual prostheses is not new, but due to general technical progress it has become more likely that a functioning implant will be on the market soon. Visual prostheses may be integrated into the visual system at various sites. Thus there are subretinal and epiretinal implants, as well as implants that are connected directly to the optic nerve or the visual cortex. The epiretinal approach is the most promising at the moment, but the problem of appropriate modulation of the image information remains unsolved. This will be necessary to provide interpretable visual information to the brain. The present article summarises these concepts and includes some of the latest information from recent conferences.

  1. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more important, also contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in acquisition that is optimally sparse in space and time, pixel-individual and precisely timed only if new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potentials of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times as reported in the retina offers considerable advantages for neuro-inspired visual computations.
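    The core manipulation in such a study, degrading spike timing toward conventional frame rates, can be illustrated by quantizing event timestamps to a frame period. The timestamps below are toy values, not data from the letter.

```python
import numpy as np

def quantize(timestamps_us, frame_period_us):
    """Collapse precise event times onto a frame clock, discarding
    sub-frame timing (the frame-based acquisition regime)."""
    return (np.asarray(timestamps_us) // frame_period_us) * frame_period_us

# Two event streams that differ only at the millisecond scale.
a = np.array([1_000, 2_500, 7_200, 9_900])   # microseconds
b = np.array([1_400, 3_100, 6_800, 9_100])

distinct_native = not np.array_equal(a, b)            # streams differ at native resolution
merged_at_30hz = np.array_equal(quantize(a, 33_333),
                                quantize(b, 33_333))  # identical on a ~30 Hz clock
```

    At a 33 ms frame period the two streams become indistinguishable: the pattern information carried by precise spike timing is exactly what the quantization throws away.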

  2. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After we will have discussed adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems will be raised. Experiences from psychologically evaluated human-machine interactions will be reported and the promising potential of psychologically-based usability experiments will be stressed.

  3. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  4. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
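    The pixel-by-pixel matching step can be illustrated with a minimal per-scanline disparity search. This is a winner-take-all simplification, not the paper's dynamic-programming fusion of laser and intensity information; the scanline data are invented for illustration.

```python
import numpy as np

def scanline_match(left, right, max_disp):
    """For each left pixel on a rectified scanline, pick the disparity
    with the smallest absolute intensity difference (no smoothness term)."""
    disp = np.zeros(len(left), dtype=int)
    for x in range(len(left)):
        best_cost, best_d = np.inf, 0
        for d in range(min(max_disp, x) + 1):
            cost = abs(int(left[x]) - int(right[x - d]))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# Toy scanline: a textured patch shifted by 3 pixels between the views.
left = np.array([10, 10, 10, 200, 210, 220, 10, 10, 10, 10])
right = np.roll(left, -3)   # right view sees the patch 3 pixels earlier
print(scanline_match(left, right, max_disp=5)[3:6])   # [3 3 3] on the patch
```

    On the textureless flat regions the match is ambiguous, which is exactly why the paper fuses active laser patterns with passive intensity: the projected pattern supplies texture where the scene has none.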

  5. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  6. V-Man Generation for 3-D Real Time Animation. Chapter 5

    NASA Technical Reports Server (NTRS)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control, and interact in real time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms coming from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon their unique set of skills manufactured during character creation. The key to the system is the automated creation of realistic V-Men without requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.

  7. Photoreceptor spectral sensitivity of the compound eyes of black soldier fly (Hermetia illucens) informing the design of LED-based illumination to enhance indoor reproduction.

    PubMed

    Oonincx, D G A B; Volk, N; Diehl, J J E; van Loon, J J A; Belušič, G

    2016-12-01

    Mating in the black soldier fly (BSF) is a visually mediated behaviour that under natural conditions occurs in full sunlight. Artificial light conditions promoting mating by BSF were designed based on the spectral characteristics of the compound eye retina. Electrophysiological measurements revealed that BSF ommatidia contain UV-, blue- and green-sensitive photoreceptor cells, allowing trichromatic vision. An illumination system for indoor breeding based on UV, blue and green LEDs was designed, and its efficiency was compared with illumination by fluorescent tubes, which have been used successfully to sustain a BSF colony for five years. Illumination by LEDs and by the fluorescent tubes yielded equal numbers of egg clutches; however, the LED illumination resulted in significantly more larvae. The possibilities to optimize the current LED illumination system to better approximate the skylight illuminant and potentially improve the larval yield are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  9. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
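    At its simplest, the temporal cue such a spiking network exploits is coincidence: events triggered on the two sensors by the same moving edge arrive nearly simultaneously, while unrelated events do not. The sketch below is a brute-force stand-in for that principle, not the paper's network; the event tuples and window value are illustrative.

```python
def match_events(left_events, right_events, window_us=500):
    """Pair left/right events whose timestamps coincide within a window;
    the column difference of a matched pair gives its disparity."""
    matches = []
    for t_l, x_l in left_events:
        for t_r, x_r in right_events:
            if abs(t_l - t_r) <= window_us:
                matches.append((x_l, x_r, x_l - x_r))
    return matches

# (timestamp_us, pixel_column) events from two hypothetical sensors.
left = [(1_000, 40), (5_000, 12)]
right = [(1_100, 35), (9_000, 80)]
print(match_events(left, right))   # [(40, 35, 5)]: one coincident pair, disparity 5
```

    In the spiking-network formulation this pairwise comparison is replaced by coincidence-detecting neurons with cooperative and competitive connections, which also resolve the ambiguities a bare temporal window leaves open.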

  10. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  11. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  12. Eyesight quality and Computer Vision Syndrome.

    PubMed

    Bogdănici, Camelia Margareta; Săndulache, Diana Elena; Nechita, Corina Andreea

    2017-01-01

    The aim of the study was to analyze the effects that gadgets have on eyesight quality. A prospective observational study was conducted from January to July 2016 on 60 people who were divided into two groups: Group 1 - 30 middle school pupils with a mean age of 11.9 ± 1.86 years and Group 2 - 30 patients evaluated in the Ophthalmology Clinic, "Sf. Spiridon" Hospital, Iași, with a mean age of 21.36 ± 7.16 years. The clinical parameters observed were the following: visual acuity (VA), objective refraction, binocular vision (BV), fusional amplitude (FA), and Schirmer's test. A questionnaire was also distributed, which contained 8 questions that highlighted the gadgets' impact on eyesight. The use of different gadgets, such as computers, laptops, mobile phones, or other displays, has become part of our everyday life, and people experience a variety of ocular symptoms or vision problems related to them. Computer Vision Syndrome (CVS) represents a group of visual and extraocular symptoms associated with sustained use of visual display terminals. Headache, blurred vision, and ocular congestion are the most frequent manifestations caused by long-term use of gadgets. Mobile phones and laptops are the most frequently used gadgets. People who use gadgets for a long time sustain an increased accommodative effort. A small amount of refractive error (especially a myopic shift) was objectively recorded by various studies on near work. Dry eye syndrome could also be identified, and an improvement of visual comfort could be observed after the instillation of artificial tear drops. Computer Vision Syndrome is still under-diagnosed, and people should be made aware of the harmful effects that prolonged use of gadgets has on eyesight.

  13. Eyesight quality and Computer Vision Syndrome

    PubMed Central

    Bogdănici, Camelia Margareta; Săndulache, Diana Elena; Nechita, Corina Andreea

    2017-01-01

    The aim of the study was to analyze the effects that gadgets have on eyesight quality. A prospective observational study was conducted from January to July 2016 on 60 people who were divided into two groups: Group 1 – 30 middle school pupils with a mean age of 11.9 ± 1.86 years and Group 2 – 30 patients evaluated in the Ophthalmology Clinic, “Sf. Spiridon” Hospital, Iași, with a mean age of 21.36 ± 7.16 years. The clinical parameters observed were the following: visual acuity (VA), objective refraction, binocular vision (BV), fusional amplitude (FA), and Schirmer’s test. A questionnaire was also distributed, which contained 8 questions that highlighted the gadgets’ impact on eyesight. The use of different gadgets, such as computers, laptops, mobile phones, or other displays, has become part of our everyday life, and people experience a variety of ocular symptoms or vision problems related to them. Computer Vision Syndrome (CVS) represents a group of visual and extraocular symptoms associated with sustained use of visual display terminals. Headache, blurred vision, and ocular congestion are the most frequent manifestations caused by long-term use of gadgets. Mobile phones and laptops are the most frequently used gadgets. People who use gadgets for a long time sustain an increased accommodative effort. A small amount of refractive error (especially a myopic shift) was objectively recorded by various studies on near work. Dry eye syndrome could also be identified, and an improvement of visual comfort could be observed after the instillation of artificial tear drops. Computer Vision Syndrome is still under-diagnosed, and people should be made aware of the harmful effects that prolonged use of gadgets has on eyesight. PMID:29450383

  14. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We present a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Building on the conventional binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and produces accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance; the cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. Because the linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, this work is of significance for measuring the 3-D morphology of moving objects.
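    The abstract does not give the reconstruction equations. As a hedged sketch of the standard stereo geometry such a system builds on (using a conventional rectified area-camera approximation, not the paper's linear-array model), the following recovers a 3-D point from a matched pixel pair; the focal length, baseline, and principal point are illustrative values, not the paper's calibration:

    ```python
    # Minimal sketch: 3-D point recovery from a rectified stereo pair.
    # The focal length, baseline, and principal point below are illustrative
    # values, not parameters from the paper.

    def triangulate(u_left, u_right, v, f=1000.0, baseline=0.1, cx=320.0, cy=240.0):
        """Return (X, Y, Z) in metres for a matched pixel pair.

        u_left, u_right: horizontal pixel coordinates in the two images
        v: shared vertical pixel coordinate (rectified cameras)
        f: focal length in pixels; baseline: camera separation in metres
        """
        disparity = u_left - u_right
        if disparity <= 0:
            raise ValueError("disparity must be positive for a point in front of the rig")
        Z = f * baseline / disparity          # depth from similar triangles
        X = (u_left - cx) * Z / f             # back-project through the left camera
        Y = (v - cy) * Z / f
        return X, Y, Z

    # A point seen 20 px apart in the two views lies at depth f*B/d, about 5 m here.
    print(triangulate(340.0, 320.0, 240.0))
    ```

    A real linear-array system would replace this per-pixel model with a line-scan projection model, but the disparity-to-depth relation is the same similar-triangles idea.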

  15. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  16. Artificial Intelligence (AI) Center of Excellence at the University of Pennsylvania

    DTIC Science & Technology

    1995-07-01

    that controls impact forces. Robust Location Estimation for MLR and Non-MLR Distributions (Dissertation Proposal) Gerda L. Kamberova MS-CIS-92-28...Bayesian Approach To Computer Vision Problems Gerda L. Kamberova MS-CIS-92-29 GRASP LAB 310 The object of our study is the Bayesian approach in...Estimation for MLR and Non-MLR Distributions (Dissertation) Gerda L. Kamberova MS-CIS-92-93 GRASP LAB 340 We study the problem of estimating an unknown

  17. Velocity and Structure Estimation of a Moving Object Using a Moving Monocular Camera

    DTIC Science & Technology

    2006-01-01

    map the Euclidean position of static landmarks or visual features in the environment. Recent applications of this technique include aerial...From Motion in a Piecewise Planar Environment,” International Journal of Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3, pp. 485-508...1988. [9] J. M. Ferryman, S. J. Maybank, and A. D. Worrall, “Visual Surveillance for Moving Vehicles,” Intl. Journal of Computer Vision, Vol. 37, No

  18. The role of visuohaptic experience in visually perceived depth.

    PubMed

    Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S

    2009-06-01

    Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects then underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.

  19. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cueing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  20. Evaluation of 5 different labeled polymer immunohistochemical detection systems.

    PubMed

    Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A

    2010-01-01

    Immunohistochemical staining is important for diagnosis and therapeutic decision making but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA) were tested using 12 different, widely used mouse and rabbit primary antibodies, detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde fixed paraffin embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies, but showed low sensitivity for rabbit antibodies. Due to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained false negatively. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time in the immunohistochemical detection systems currently in use.

  1. Real Time Target Tracking Using Dedicated Vision Hardware

    NASA Astrophysics Data System (ADS)

    Kambies, Keith; Walsh, Peter

    1988-03-01

    This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general-purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated-arm industrial robot, using a camera and dedicated vision processor as the input sensor, so that the robot can locate and track a moving target. Because the vision system is inside the loop closure of the robot tracking system, tight throughput and latency constraints are imposed on it that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rate were used with a real-time, multi-tasking operating system to achieve the required performance. This paper describes the high-speed vision-based tracking task, the system throughput requirements, the dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.

  2. Knowledge Discovery, Integration and Communication for Extreme Weather and Flood Resilience Using Artificial Intelligence: Flood AI Alpha

    NASA Astrophysics Data System (ADS)

    Demir, I.; Sermet, M. Y.

    2016-12-01

    Nobody is immune from extreme events or natural hazards that can lead to large-scale consequences for the nation and public. One of the solutions to reduce the impacts of extreme events is to invest in improving resilience, the ability to better prepare for, plan for, recover from, and adapt to disasters. A National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, identifies gaps and knowledge challenges that need to be addressed, and suggests that every individual have access to risk and vulnerability information to make their communities more resilient. This abstract presents our project on developing a resilience framework for flooding to improve societal preparedness, with the following objectives: (a) develop a generalized ontology for extreme events with a primary focus on flooding; (b) develop a knowledge engine with voice recognition, artificial intelligence, natural language processing, and an inference engine, which will utilize the flood ontology and concepts to connect user input to relevant knowledge discovery outputs on flooding; (c) develop a data acquisition and processing framework drawing on existing environmental observations, forecast models, and social networks, utilizing the framework, capabilities, and user base of the Iowa Flood Information System (IFIS) to populate and test the system; and (d) develop a communication framework to support user interaction and delivery of information to users. The interaction and delivery channels will include voice and text input via a web-based system (e.g., IFIS), agent-based bots (e.g., Microsoft Skype, Facebook Messenger), smartphone and augmented reality applications (e.g., smart assistant), and automated web workflows (e.g., IFTTT, CloudWork) to open knowledge discovery for flooding to thousands of community-extensible web workflows.

  3. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.

  4. Computer retina that models the primate retina

    NASA Astrophysics Data System (ADS)

    Shah, Samir; Levine, Martin D.

    1994-06-01

    At the retinal level, the strategies utilized by biological visual systems allow them to outperform machine vision systems, serving to motivate the design of electronic or `smart' sensors based on similar principles. Design of such sensors in silicon first requires a model of retinal information processing which captures the essential features exhibited by biological retinas. In this paper, a simple retinal model is presented which qualitatively accounts for the achromatic information processing in the primate cone system. The model exhibits many of the properties found in biological retinas, such as data reduction through nonuniform sampling, adaptation to a large dynamic range of illumination levels, variation of visual acuity with illumination level, and enhancement of spatio-temporal contrast information. The model is validated by replicating experiments commonly performed by electrophysiologists on biological retinas and comparing the response of the computer retina to data from experiments in monkeys. In addition, the response of the model to synthetic images is shown. The experiments demonstrate that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an `artificial retina.'
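    The spatial-contrast enhancement such retinal models exhibit is classically captured by a center-surround (difference-of-Gaussians) receptive field. The sketch below is not the paper's model; it is a generic 1-D operator with made-up sigmas, shown only to illustrate the kind of contrast enhancement described:

    ```python
    import math

    # Illustrative only: a 1-D difference-of-Gaussians receptive field, the
    # classic center-surround operator used to enhance spatial contrast the
    # way retinal ganglion cells do. Sigmas and radius are invented values.

    def gaussian_kernel(sigma, radius):
        k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
        s = sum(k)
        return [v / s for v in k]

    def convolve(signal, kernel):
        r = len(kernel) // 2
        out = []
        for i in range(len(signal)):
            acc = 0.0
            for j, w in enumerate(kernel):
                idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp at borders
                acc += w * signal[idx]
            out.append(acc)
        return out

    def center_surround(signal, sigma_c=1.0, sigma_s=3.0, radius=9):
        center = convolve(signal, gaussian_kernel(sigma_c, radius))
        surround = convolve(signal, gaussian_kernel(sigma_s, radius))
        return [c - s for c, s in zip(center, surround)]

    # A step edge: the response is near zero in uniform regions and peaks at the edge.
    step = [0.0] * 20 + [1.0] * 20
    response = center_surround(step)
    print(max(response), min(response))
    ```

    Uniform regions cancel (center and surround see the same mean), so the operator responds only where intensity changes, which is the "enhancement of spatio-temporal contrast information" the abstract mentions, restricted here to the spatial dimension.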

  5. [Health data].

    PubMed

    Polton, Dominique

    2018-05-01

    Healthcare is considered one of the most promising areas where big data can be applied to foster innovation for the benefit of patients and of the whole system. Healthcare analytics have the potential to accelerate R&D, increase knowledge of diseases and risk factors, improve treatments, develop personalised medicine, and help physicians with decision support systems. The access to data is also a driving force for patients' empowerment and for the democratic debate. However, there are also concerns about the societal, economic, and ethical impacts of this wave of digitization and of the growing use of data, algorithms, and artificial intelligence. Given the issues at stake, collecting and analysing data generated by health care systems is a strategic challenge in all countries; in that respect, the French National System of Health Data (a national data warehouse linking data from several sources and giving a vision of the care pathways for the entire population, with a ten-year history) is an asset, but it has to be completed and enriched with data from electronic health records. © 2018 médecine/sciences – Inserm.

  6. Biologically inspired intelligent robots

    NASA Astrophysics Data System (ADS)

    Bar-Cohen, Yoseph; Breazeal, Cynthia

    2003-07-01

    Humans throughout history have always sought to mimic the appearance, mobility, functionality, intelligent operation, and thinking process of biological creatures. This field of biologically inspired technology, which carries the moniker biomimetics, has evolved from making static copies of humans and animals in the form of statues to the emergence of robots that operate with realistic behavior. Imagine a person walking towards you when suddenly you notice something weird about him: he is not real but rather a robot. Your reaction would probably be "I can't believe it, but this robot looks very real," just as you would react to an artificial flower that is a good imitation. You may even proceed to touch the robot to check if your assessment is correct but, as opposed to the flower case, the robot may be programmed to respond physically and verbally. This science fiction scenario could become a reality as the current trend continues in developing biologically inspired technologies. Technology evolution has led to such fields as artificial muscles, artificial intelligence, and artificial vision, as well as biomimetic capabilities in materials science, mechanics, electronics, computing science, information technology, and many others. This paper will review the state of the art and challenges of biologically inspired technologies and the role that EAP is expected to play as the technology evolves.

  7. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALVs) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.

  8. The effects of simulated vision impairments on the cone of gaze.

    PubMed

    Hecht, Heiko; Hörichs, Jenny; Sheldon, Sarah; Quint, Jessilin; Bowers, Alex

    2015-10-01

    Detecting the gaze direction of others is critical for many social interactions. We explored factors that may make the perception of mutual gaze more difficult, including the degradation of the stimulus and simulated vision impairment. To what extent do these factors affect the complex assessment of mutual gaze? Using an interactive virtual head whose eye direction could be manipulated by the subject, we conducted two experiments to assess the effects of simulated vision impairments on mutual gaze. Healthy subjects had to demarcate the center and the edges of the cone of gaze, that is, the range of gaze directions that are accepted for mutual gaze. When vision was impaired by adding a semitransparent white contrast reduction mask to the display (Exp. 1), judgments became more variable and more influenced by the head direction (indicative of a compensation strategy). When refractive blur was added (Exp. 1), the gaze cone shrank from 12.9° (no blur) to 11.3° (3-diopter lens), which cannot be explained by a low-level process but might reflect a tightening of the criterion for mutual gaze as a response to the increased uncertainty. However, the overall effects of the impairments were relatively modest. Elderly subjects (Exp. 2) produced more variability but did not differ qualitatively from the younger subjects. In the face of artificial vision impairments, compensation mechanisms and criterion changes allow us to perform better in mutual gaze perception than would be predicted by a simple extrapolation from the losses in basic visual acuity and contrast sensitivity.

  9. Monovision techniques for telerobots

    NASA Technical Reports Server (NTRS)

    Goode, P. W.; Carnils, K.

    1987-01-01

    The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.

  10. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for the development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform. It can then be subjected to a wide variety of simulated Rover motions and can thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as the structure, driving linkages, and motors and transmissions.

  11. Personal mobility and manipulation using robotics, artificial intelligence and advanced control.

    PubMed

    Cooper, Rory A; Ding, Dan; Grindle, Garrett G; Wang, Hongwu

    2007-01-01

    Recent advancements of technologies, including computation, robotics, machine learning, communication, and miniaturization technologies, bring us closer to futuristic visions of compassionate intelligent devices. The missing element is a basic understanding of how to relate human functions (physiological, physical, and cognitive) to the design of intelligent devices and systems that aid and interact with people. Our stakeholder and clinician consultants identified a number of mobility barriers that have been intransigent to traditional approaches. The most important physical obstacles are stairs, steps, curbs, doorways (doors), rough/uneven surfaces, weather hazards (snow, ice), crowded/cluttered spaces, and confined spaces. Focus group participants suggested a number of ways to make interaction simpler, including natural language interfaces such as the ability to say "I want a drink", a library of high level commands (open a door, park the wheelchair, ...), and a touchscreen interface with images so the user could point and use other gestures.

  12. Thin-slice vision: inference of confidence measure from perceptual video quality

    NASA Astrophysics Data System (ADS)

    Hameed, Abdul; Balas, Benjamin; Dai, Rui

    2016-11-01

    There has been considerable research on thin-slice judgments, but no study has demonstrated the predictive validity of confidence measures when assessors watch videos acquired from communication systems, in which the perceptual quality of videos could be degraded by limited bandwidth and unreliable network conditions. This paper studies the relationship between high-level thin-slice judgments of human behavior and factors that contribute to perceptual video quality. Based on a large number of subjective test results, it has been found that the confidence of a single individual present in all the videos, called speaker's confidence (SC), could be predicted by a list of features that contribute to perceptual video quality. Two prediction models, one based on artificial neural network and the other based on a decision tree, were built to predict SC. Experimental results have shown that both prediction models can result in high correlation measures.

  13. Reading from Scratch - A Vision-System for Reading Data on Micro-structured Surfaces

    NASA Astrophysics Data System (ADS)

    Dragon, Ralf; Becker, Christian; Rosenhahn, Bodo; Ostermann, Jörn

    Labeling and marking industrially manufactured objects is becoming increasingly important nowadays because of novel material properties and plagiarism. As part of the Collaborative Research Center 653, which investigates micro-structured metallic surfaces for inherent mechanical data storage, we investigate a stable and reliable optical readout of the written data. Since this comprises a qualitative surface reconstruction, we use directed illumination to make the micro structures visible. Then we apply a spectral analysis to obtain an image partitioning and perform signal tracking enhanced by a customized Hidden Markov Model. In this paper, we derive the algorithms used and demonstrate reading data at 1.6 kbit/cm² from a micro-structured groove which varies by only 3 μm in depth (thus a “scratch”). We demonstrate the system's robustness with experiments on real and artificially rendered surfaces.
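    The signal-tracking step rests on standard HMM decoding. As a hedged illustration (the paper's customized model and its parameters are not given here), the following sketch runs Viterbi decoding on an invented two-state groove/land model with made-up probabilities:

    ```python
    # Generic Viterbi decoding sketch. The "groove"/"land" states and all
    # probabilities below are invented for illustration; they are not the
    # paper's customized HMM.

    def viterbi(observations, states, start_p, trans_p, emit_p):
        """Return the most likely hidden-state sequence for the observations."""
        V = [{s: (start_p[s] * emit_p[s][observations[0]], None) for s in states}]
        for obs in observations[1:]:
            row = {}
            for s in states:
                prob, prev = max(
                    (V[-1][p][0] * trans_p[p][s] * emit_p[s][obs], p) for p in states
                )
                row[s] = (prob, prev)
            V.append(row)
        # Backtrack from the best final state.
        best = max(states, key=lambda s: V[-1][s][0])
        path = [best]
        for row in reversed(V[1:]):
            path.append(row[path[-1]][1])
        return list(reversed(path))

    states = ("groove", "land")
    start_p = {"groove": 0.5, "land": 0.5}
    trans_p = {"groove": {"groove": 0.8, "land": 0.2},
               "land": {"groove": 0.2, "land": 0.8}}
    emit_p = {"groove": {"dark": 0.9, "bright": 0.1},
              "land": {"dark": 0.2, "bright": 0.8}}
    print(viterbi(["dark", "dark", "bright", "bright"], states, start_p, trans_p, emit_p))
    ```

    The sticky transition probabilities (0.8 self-transition) are what give the tracking its noise robustness: a single anomalous observation is unlikely to flip the decoded state.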

  14. From Phonomechanocardiography to Computer-Aided Phonocardiography

    NASA Astrophysics Data System (ADS)

    Granados, J.; Tavera, F.; López, G.; Velázquez, J. M.; Hernández, R. T.; López, G. A.

    2017-01-01

    Because many doctors lack the training to identify heart disorders by conventional auscultation, it is necessary to supplement this technique with an objective, methodological analysis. To obtain information on the performance of the heart and diagnose heart disease through a simple, cost-effective procedure, we used a data acquisition system to obtain phonocardiograms (PCG), which are images of the sounds emitted by the heart. A program of acoustic, visual, and artificial vision recognition was developed to interpret them. Based on the results of previous research by cardiologists, a code for interpreting PCGs and their associated diseases was elaborated. A site for experimental sampling of cardiac data was also created within the university campus. Computer-aided phonocardiography is a viable, low-cost procedure that provides additional medical information for diagnosing complex heart diseases. We show some previous results.

  15. A new approach based on Machine Learning for predicting corneal curvature (K1) and astigmatism in patients with keratoconus after intracorneal ring implantation.

    PubMed

    Valdés-Mas, M A; Martín-Guerrero, J D; Rupérez, M J; Pastor, F; Dualde, C; Monserrat, C; Peris-Martínez, C

    2014-08-01

    Keratoconus (KC) is the most common type of corneal ectasia. Corneal transplantation was the treatment of choice until the last decade; however, intracorneal ring implantation has become increasingly common and is now widely used to treat KC, thus avoiding a corneal transplantation. This work proposes a new approach based on Machine Learning to predict the vision gain of KC patients after ring implantation. That vision gain is assessed by means of the corneal curvature and the astigmatism. Different models were proposed; the best results were achieved by an artificial neural network based on the Multilayer Perceptron. The error provided by the best model was 0.97 D for corneal curvature and 0.93 D for astigmatism. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
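    The abstract names a Multilayer Perceptron but gives no architecture or data. As a purely illustrative sketch under those unknowns, the following trains a small one-hidden-layer network by gradient descent to regress two synthetic targets standing in for corneal curvature (K1) and astigmatism; every feature, dimension, and hyperparameter here is invented:

    ```python
    import numpy as np

    # Synthetic stand-in data: 200 "patients", 4 hypothetical pre-op features,
    # 2 targets (K1, astigmatism). None of this reproduces the paper's dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    true_W = rng.normal(size=(4, 2))
    y = X @ true_W + 0.05 * rng.normal(size=(200, 2))

    # One hidden layer: 4 inputs -> 8 tanh units -> 2 linear outputs.
    W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)

    lr, losses = 0.01, []
    for epoch in range(500):
        H = np.tanh(X @ W1 + b1)              # forward pass
        pred = H @ W2 + b2
        err = pred - y
        losses.append(float((err ** 2).mean()))
        # Backpropagate the squared error through both layers.
        g_pred = 2 * err / len(X)
        g_W2 = H.T @ g_pred; g_b2 = g_pred.sum(axis=0)
        g_H = (g_pred @ W2.T) * (1 - H ** 2)  # tanh derivative
        g_W1 = X.T @ g_H; g_b1 = g_H.sum(axis=0)
        W2 -= lr * g_W2; b2 -= lr * g_b2
        W1 -= lr * g_W1; b1 -= lr * g_b1

    print(f"training MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
    ```

    In practice one would use a library regressor with cross-validation rather than hand-rolled backprop; the point is only the shape of the model the abstract names: one nonlinear hidden layer feeding two continuous outputs.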

  16. Effect of ecological viewing conditions on the Ames' distorted room illusion.

    PubMed

    Gehringer, W L; Engel, E

    1986-05-01

    Ecological theory asserts that the Ames' distorted room illusion (DRI) occurs as a result of the artificial restriction of information pickup. According to Gibson (1966, 1979), the illusion is eliminated when binocular vision and/or head movement are allowed. In Experiment 1, to measure the DRI, we used a size-matching technique employing discs placed within an Ames' distorted room. One hundred forty-four subjects viewed the distorted room or a control apparatus under four different viewing conditions (i.e., restricted or unrestricted head movement), using monocular and binocular vision. In Experiment 2, subjects viewed binocularly and were instructed to move freely while making judgments. Overall, the main findings of this study were that the DRI decreased with increases in viewing access and that the DRI persisted under all viewing conditions. The persistence of the illusion was felt to contradict Gibson's position.

  17. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important both to obtaining biologically relevant behavioral data and to enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments that obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  18. TUTORIAL: Development of a cortical visual neuroprosthesis for the blind: the relevance of neuroplasticity

    NASA Astrophysics Data System (ADS)

    Fernández, E.; Pelayo, F.; Romero, S.; Bongard, M.; Marin, C.; Alfaro, A.; Merabet, L.

    2005-12-01

    Clinical applications such as artificial vision require extraordinary, diverse, lengthy, and intimate collaborations among basic scientists, engineers, and clinicians. In this review, we present the state of research on a visual neuroprosthesis designed to interface with the occipital visual cortex as a means through which a limited, but useful, visual sense could be restored in profoundly blind individuals. We review the most important physiological principles regarding this neuroprosthetic approach and emphasize the role of neural plasticity in achieving desired behavioral outcomes. While full restoration of fine detailed vision with current technology is unlikely in the near future, the discrimination of shapes and the localization of objects should be possible, allowing blind subjects to navigate in an unfamiliar environment and perhaps even to read enlarged text. Continued research and development in neuroprosthesis technology will likely result in a substantial improvement in the quality of life of blind and visually impaired individuals.

  19. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

    This report (RX-TY-TR-2011-0096-01) summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.

  20. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures.

    PubMed

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-12-08

    Existing vision-based displacement sensors (VDSs) extract displacement data by tracking the movement of a target identified within the image using natural or artificial structural markers. Here, a target-less vision-based displacement sensor (hereafter called "TVDS") is proposed that extracts displacement data without targets, instead using feature points found in the image of the structure itself. The TVDS extracts and tracks these feature points through image convex hull optimization, which adjusts and optimizes threshold values so that every image frame yields the same convex hull, whose center serves as the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated from the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing it with the displacement of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted from the obtained displacement data and compared with numerical analysis results. The TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. Because the proposed TVDS can extract displacement data even without artificial or natural markers, it has the advantage of measuring displacement at any portion of the structure visible in the image.
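
The pixel-to-physical conversion at the heart of such a scaling factor map can be sketched with a pinhole-camera model. The function names, the single-tilt-angle correction, and all parameter values below are illustrative assumptions, not the paper's implementation:

```python
import math

def scaling_factor(distance_mm, focal_length_mm, pixel_pitch_mm, tilt_deg=0.0):
    # mm of physical motion represented by one pixel of image motion.
    # A tilt between the optical axis and the measured surface stretches the
    # apparent scale by roughly 1/cos(tilt); this simple correction is an assumption.
    return (distance_mm * pixel_pitch_mm) / (
        focal_length_mm * math.cos(math.radians(tilt_deg))
    )

def pixel_to_physical(pixel_disp, distance_mm, focal_length_mm, pixel_pitch_mm, tilt_deg=0.0):
    # Convert a measured displacement in pixels to physical units (mm).
    return pixel_disp * scaling_factor(distance_mm, focal_length_mm, pixel_pitch_mm, tilt_deg)
```

For example, at 10 m range with a 50 mm lens and 5 µm pixels, one pixel of image motion corresponds to 1 mm of structural motion, so a 3-pixel shift reads as 3 mm.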

  1. Vehicle-based vision sensors for intelligent highway systems

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1989-09-01

    This paper describes a vision system, based on ASIC (Application Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
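
For a rectified image pair, distance-by-triangulation of the kind described reduces to Z = f·B/d. A minimal two-camera sketch (parameter names are assumptions; the paper's trinocular arrangement adds a third camera for robustness):

```python
def stereo_distance(focal_length_px, baseline_m, disparity_px):
    # Distance to a point seen in both rectified images: Z = f * B / d,
    # with focal length f in pixels, baseline B in meters, disparity d in pixels.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline_m / disparity_px
```

With f = 700 px, B = 0.5 m, and a 10-pixel disparity, the vehicle ahead is 35 m away, a typical following distance for cruise control.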

  2. Perception-based synthetic cueing for night vision device rotorcraft hover operations

    NASA Astrophysics Data System (ADS)

    Bachelder, Edward N.; McRuer, Duane

    2002-08-01

    Helicopter flight using night-vision devices (NVDs) is difficult to perform, as evidenced by the high accident rate associated with NVD flight compared to day operation. The approach proposed in this paper is to augment the NVD image with synthetic cueing, whereby the cues would emulate position and motion and appear to be actually occurring in the physical space on which they are overlaid. Synthetic cues allow for selective enhancement of perceptual state gains to match the task requirements. A hover cue set was developed based on an analogue of a physical target used in a flight handling qualities tracking task, a perceptual task analysis for hover, and fundamentals of human spatial perception. The display was implemented in a simulation environment constructed using a virtual reality device, an ultrasound head-tracker, and a fixed-base helicopter simulator. Seven highly trained helicopter pilots served as experimental subjects and were tasked to maintain hover in the presence of aircraft positional disturbances while viewing a synthesized NVD environment and the experimental hover cues. Significant performance improvements were observed when using synthetic cue augmentation. This paper demonstrates that artificial magnification of perceptual states through synthetic cueing can be an effective method of improving night-vision helicopter hover operations.

  3. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  4. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  5. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method proved accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data that can enhance estimation of sex from osteological material.
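
The acquisition scheme described, stacking successive height profiles into a surface image, can be sketched as follows. The gradient-based relief map for emphasizing suture and vessel imprints is an illustrative addition, not the authors' method:

```python
import numpy as np

def profiles_to_range_image(profiles):
    # Stack successive 1-D height profiles (one per scan position) into a
    # 2-D range image whose pixel values are surface heights.
    return np.vstack([np.asarray(p, dtype=float) for p in profiles])

def relief_map(range_image):
    # Gradient magnitude of the height field; edges such as suture imprints
    # or vascular grooves show up as high values.
    gy, gx = np.gradient(range_image)
    return np.hypot(gx, gy)
```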

  6. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learned by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.

  7. A Survey on Ambient Intelligence in Health Care

    PubMed Central

    Acampora, Giovanni; Cook, Diane J.; Rashidi, Parisa; Vasilakos, Athanasios V.

    2013-01-01

    Ambient Intelligence (AmI) is a new paradigm in information technology aimed at empowering people’s capabilities by means of digital environments that are sensitive, adaptive, and responsive to human needs, habits, gestures, and emotions. This futuristic vision of the daily environment will enable innovative human-machine interactions characterized by pervasive, unobtrusive and anticipatory communications. Such innovative interaction paradigms make ambient intelligence technology a suitable candidate for developing various real-life solutions, including in the health care domain. This survey will discuss the emergence of ambient intelligence (AmI) techniques in the health care domain, in order to provide the research community with the necessary background. We will examine the infrastructure and technology required for achieving the vision of ambient intelligence, such as smart environments and wearable medical devices. We will summarize the state-of-the-art artificial intelligence methodologies used for developing AmI systems in the health care domain, including various learning techniques (for learning from user interaction), reasoning techniques (for reasoning about users’ goals and intentions) and planning techniques (for planning activities and interactions). We will also discuss how AmI technology might support people affected by various physical or mental disabilities or chronic disease. Finally, we will point to some of the successful case studies in the area and look at the current and future challenges to outline possible future research paths. PMID:24431472

  8. A comparison of algorithms for inference and learning in probabilistic graphical models.

    PubMed

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
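
On a tree-structured model the sum-product algorithm mentioned above computes exact marginals; on loopy graphs the same message updates are iterated as an approximation. A minimal chain-MRF sketch (a single pairwise potential shared by all edges is an assumption made for brevity):

```python
import numpy as np

def chain_marginals(unaries, pairwise):
    # Exact sum-product on a chain MRF.
    # unaries: list of length-K nonnegative arrays, one per node.
    # pairwise: K x K nonnegative matrix shared by every edge,
    #           pairwise[a, b] scoring (left state a, right state b).
    # Returns the normalized marginal distribution of each node.
    n = len(unaries)
    fwd = [None] * n  # message arriving at node i from the left
    bwd = [None] * n  # message arriving at node i from the right
    fwd[0] = np.ones_like(unaries[0])
    for i in range(1, n):
        fwd[i] = pairwise.T @ (fwd[i - 1] * unaries[i - 1])
    bwd[-1] = np.ones_like(unaries[-1])
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise @ (bwd[i + 1] * unaries[i + 1])
    marginals = []
    for i in range(n):
        m = fwd[i] * unaries[i] * bwd[i]
        marginals.append(m / m.sum())
    return marginals
```

With a uniform pairwise potential the nodes decouple and each marginal is just the normalized unary, a quick sanity check on the message passing.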

  9. A Survey on Ambient Intelligence in Health Care.

    PubMed

    Acampora, Giovanni; Cook, Diane J; Rashidi, Parisa; Vasilakos, Athanasios V

    2013-12-01

    Ambient Intelligence (AmI) is a new paradigm in information technology aimed at empowering people's capabilities by means of digital environments that are sensitive, adaptive, and responsive to human needs, habits, gestures, and emotions. This futuristic vision of the daily environment will enable innovative human-machine interactions characterized by pervasive, unobtrusive and anticipatory communications. Such innovative interaction paradigms make ambient intelligence technology a suitable candidate for developing various real-life solutions, including in the health care domain. This survey will discuss the emergence of ambient intelligence (AmI) techniques in the health care domain, in order to provide the research community with the necessary background. We will examine the infrastructure and technology required for achieving the vision of ambient intelligence, such as smart environments and wearable medical devices. We will summarize the state-of-the-art artificial intelligence methodologies used for developing AmI systems in the health care domain, including various learning techniques (for learning from user interaction), reasoning techniques (for reasoning about users' goals and intentions) and planning techniques (for planning activities and interactions). We will also discuss how AmI technology might support people affected by various physical or mental disabilities or chronic disease. Finally, we will point to some of the successful case studies in the area and look at the current and future challenges to outline possible future research paths.

  10. Application of aircraft navigation sensors to enhanced vision systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.

    1993-01-01

    In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.

  11. [Management of post-traumatic aphakia and aniridia: Retrospective study of 17 patients undergoing scleral-sutured artificial iris intraocular lens implantation. Management of aphakia-aniridia with scleral-sutured artificial iris intraocular lenses].

    PubMed

    Villemont, A-S; Kocaba, V; Janin-Manificat, H; Abouaf, L; Poli, M; Marty, A-S; Rabilloud, M; Fleury, J; Burillon, C

    2017-09-01

    To evaluate the long-term outcomes of artificial iris intraocular lenses sutured to the sclera for managing traumatic aphakia and aniridia. All consecutive cases receiving a Morcher® combination implant from June 2008 to February 2016 in Edouard-Herriot Hospital (Lyon, France) were included in this single-center retrospective study. Visual acuity, subjective degree of glare, quality of life and surgical complications were evaluated. Seventeen eyes of 17 patients were included, among which 82% were male. The mean age was 42 years. The injuries consisted of 23.5% contusion and 70.5% open globe injuries, of which 41% were globe ruptures. There was one postoperative case. A penetrating keratoplasty was performed at the same time for eight eyes. The mean follow-up was 32 months. Best-corrected visual acuity improved in 41.2%, remained the same in 17.6% and decreased in 41.2% of our cases. Distance vision averaged 1±0.25 line better and near vision 2.2±0.32 lines better when visual acuity was quantifiable before surgery. Glare improved in 80% of patients and remained stable in 20%, decreasing on average from 3.3/5 [min. 3-max. 4; SD: 0.48] before surgery to 1.9/5 [min. 0-max. 4; SD: 1.197] after surgery. Regarding the esthetic results, 78% of the patients declared themselves reasonably to very satisfied; 57% reported no limitation of activities of daily living, and 43% reported mild limitation. Ocular hypertension and glaucoma, found in 40% of eyes, were the main postoperative complications. Implantation of a prosthetic iris device combined with an intraocular lens appears to be safe and effective in reducing glare disability and improving visual acuity. Close, long-term monitoring is essential for the success of this surgery. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  12. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users

    PubMed Central

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2014-01-01

    Background Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population in order to evaluate the impact of new vision restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional vision ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. Methods The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional vision tasks for observation of performance, and a case narrative summary. Results were analyzed to determine whether the interview questions and functional vision tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, twenty-six subjects were assessed with the FLORA. Seven different evaluators administered the assessment. Results All 14 interview questions were asked. All 35 functional vision tasks were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options -- impossible (33%), difficult (23%), moderate (24%) and easy (19%) -- were used by the evaluators. Evaluators also judged the amount of vision they observed the subjects using to complete the various tasks, with subjects observed to be using vision on 75% of tasks on average with the System ON, and 29% with the System OFF. Conclusion The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as a functional vision and well-being assessment tool. PMID:25675964

  13. Night vision: changing the way we drive

    NASA Astrophysics Data System (ADS)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.

  14. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  15. Defect detection and classification of machined surfaces under multiple illuminant directions

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Weng, Xin; Swonger, C. W.; Ni, Jun

    2010-08-01

    Continuous improvement of product quality is crucial to the successful and competitive automotive manufacturing industry in the 21st century. The presence of surface porosity located on flat machined surfaces such as cylinder heads/blocks and transmission cases may allow leaks of coolant, oil, or combustion gas between critical mating surfaces, thus causing damage to the engine or transmission. Therefore 100% inline inspection plays an important role in improving product quality. Although the techniques of image processing and machine vision have been applied to machined surface inspection and have improved considerably over the past 20 years, in today's automotive industry surface porosity inspection is still done by skilled humans, which is costly, tedious, time consuming, and not capable of reliably detecting small defects. In our study, an automated defect detection and classification system for flat machined surfaces has been designed and constructed. In this paper, the importance of the illuminant direction in a machine vision system is first emphasized, and the surface defect inspection system under multiple directional illuminations is then described. Image processing algorithms were developed to detect and classify five types of 2D or 3D surface defects (pore, 2D blemish, residue dirt, scratch, and gouge). The steps of image processing include: (1) image acquisition and contrast enhancement; (2) defect segmentation and feature extraction; (3) defect classification. An artificial machined surface and an actual automotive part, a cylinder head surface, were tested; as a result, microscopic surface defects can be accurately detected and assigned to a surface defect class. The cycle time of this system is short enough that implementation of 100% inline inspection is feasible. The field of view of this system is 150 mm × 225 mm, and surfaces larger than the field of view can be stitched together in software.
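
The segmentation-then-classification steps of such a pipeline can be sketched in miniature. The fixed threshold, the 4-connected flood fill, and the elongation rule below are illustrative stand-ins for the paper's algorithms:

```python
import numpy as np

def segment_defects(image, thresh):
    # Label dark pixels (candidate defects) with 4-connected components.
    mask = image < thresh
    labels = np.zeros(image.shape, dtype=int)
    count = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already belongs to a labeled blob
        count += 1
        stack = [seed]
        while stack:  # iterative flood fill
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = count
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return labels, count

def classify_blob(labels, blob_id):
    # Toy shape rule: long thin blobs -> "scratch", compact blobs -> "pore".
    rows, cols = np.nonzero(labels == blob_id)
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    elongation = max(h, w) / min(h, w)
    return "scratch" if elongation > 3 else "pore"
```

A real system would replace the fixed threshold with illumination-aware enhancement and the bounding-box rule with richer shape and depth features, but the detect/segment/classify structure is the same.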

  16. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Treesearch

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  17. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. Analysis of rice seed reflectance curves showed that the light-source wavelength for discriminating diseased seeds from normal rice seeds in the monochromic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed, an adjustable color machine vision system was developed. With a 20 mm to 25 mm lens extender, the machine vision system produces close-up images that make it easier to recognize characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system, and the same algorithms yielded better results under the optimized conditions. In particular, the image processing can resolve fine details such as small fissures.

  18. Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects

    NASA Technical Reports Server (NTRS)

    Montes, Leticia; Bowers, David; Lumia, Ron

    1998-01-01

    This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, unmodeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. The system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on information from the vision system, and then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment, which provides remote simulation and control of equipment.
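
The vision-to-grasp handoff described here, where measured object size and location drive the hand configuration, can be sketched from a binary object mask; the function names and the fixed safety margin are assumptions, not the paper's method:

```python
import numpy as np

def object_pose_and_size(mask):
    # Centroid and axis-aligned extent (in pixels) of a single-object mask.
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        raise ValueError("no object pixels in mask")
    centroid = (rows.mean(), cols.mean())
    size = (rows.max() - rows.min() + 1, cols.max() - cols.min() + 1)
    return centroid, size

def grip_aperture(size, pixel_to_mm, margin_mm=10.0):
    # Hand opening needed to envelop the object, plus a safety margin.
    return max(size) * pixel_to_mm + margin_mm
```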

  19. Applications of artificial neural networks (ANNs) in food science.

    PubMed

    Huang, Yiqun; Kangas, Lars J; Rasco, Barbara A

    2007-01-01

    Artificial neural networks (ANNs) have been applied in almost every aspect of food science over the past two decades, although most applications are in the development stage. ANNs are useful tools for food safety and quality analyses, which include modeling of microbial growth and from this predicting food safety, interpreting spectroscopic data, and predicting physical, chemical, functional and sensory properties of various food products during processing and distribution. ANNs hold a great deal of promise for modeling complex tasks in process control and simulation and in applications of machine perception including machine vision and electronic nose for food safety and quality control. This review discusses the basic theory of the ANN technology and its applications in food science, providing food scientists and the research community an overview of the current research and future trend of the applications of ANN technology in the field.
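
A minimal feed-forward ANN of the kind surveyed, here a one-hidden-layer network mapping sensor readings to, say, a spoilage probability, can be sketched as follows. The architecture, names, and sizes are illustrative, and training (e.g. backpropagation) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def init_mlp(n_in, n_hidden, n_out):
    # Random initial weights for a one-hidden-layer network.
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(params, x):
    # tanh hidden layer, sigmoid output in (0, 1).
    h = np.tanh(x @ params["W1"] + params["b1"])
    return 1.0 / (1.0 + np.exp(-(h @ params["W2"] + params["b2"])))
```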

  20. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    PubMed

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  1. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  2. Mixed reality simulation of rasping procedure in artificial cervical disc replacement (ACDR) surgery.

    PubMed

    Halic, Tansel; Kockara, Sinan; Bayrak, Coskun; Rowe, Richard

    2010-10-07

    Until quite recently, spinal disorder problems in the U.S. were treated by fusing cervical vertebrae rather than replacing the cervical disc with an artificial disc. Cervical disc replacement is a recently approved procedure in the U.S. It is one of the most challenging surgical procedures in the medical field due to the deficiencies in available diagnostic tools and the insufficient number of surgical practices. For physicians and surgical instrument developers, it is critical to understand how to successfully deploy the new artificial disc replacement systems. Without proper understanding and practice of the deployment procedure, it is possible to injure the vertebral body. Mixed reality (MR) and virtual reality (VR) surgical simulators are becoming an indispensable part of physicians' training, since they offer a risk-free training environment. In this study, the MR simulation framework and the intricacies involved in the development of an MR simulator for the rasping procedure in artificial cervical disc replacement (ACDR) surgery are investigated. The major components that make up the MR surgical simulator with a motion tracking system are addressed. A mixed reality surgical simulator targeting the rasping procedure in ACDR surgery with a VICON motion tracking system was developed. There were several challenges in the development of the MR surgical simulator. The first was the assembly of the different hardware components for surgical simulation development, which involves knowledge and application of interdisciplinary fields such as signal processing, computer vision and graphics, along with the design and placement of sensors. The second challenge was the creation of a physically correct model of the rasping procedure in order to attain critical forces; this was handled with finite element modeling.
The third challenge was minimizing the error in mapping movements of an actor in the real model to the virtual model, a process called registration. This issue was overcome by a two-way (virtual object to real domain and real domain to virtual object) semi-automatic registration method. The applicability of the VICON MR setting for the ACDR surgical simulator is demonstrated. The main problems encountered in MR surgical simulator development are addressed. First, an effective environment for MR surgical development is constructed. Second, the strain and stress intensities and critical forces are simulated under various rasp instrument loadings, with impacts applied on the intervertebral surfaces of the anterior vertebrae throughout the rasping procedure. Third, two approaches are introduced to solve the registration problem in the MR setting. Results show that our system creates an effective environment for surgical simulation development and solves the tedious and time-consuming registration problems caused by misalignments. Further, the MR ACDR surgery simulator was tested by 5 different physicians, who found the simulator effective for teaching the anatomical details of cervical discs and for conveying the basics of the ACDR surgery and rasping procedure.
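The registration step in this record maps tracked real-world markers onto their virtual counterparts. The abstract does not give the authors' two-way semi-automatic algorithm, but the core least-squares rigid alignment it builds on can be sketched with the standard Kabsch method; the function name and marker data here are illustrative assumptions:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch) of matched 3-D marker sets.

    src, dst : (N, 3) arrays of corresponding points, e.g. tracked
    physical markers and their virtual-model counterparts. Returns a
    rotation R and translation t such that R @ src[i] + t maps onto dst[i].
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - sc).T @ (dst - dc)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc
```

Given noise-free correspondences, the recovered transform reproduces the target markers exactly; with noisy tracking data it gives the least-squares best fit.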

  3. Mixed reality simulation of rasping procedure in artificial cervical disc replacement (ACDR) surgery

    PubMed Central

    2010-01-01

    Background Until quite recently, spinal disorder problems in the U.S. were treated by fusing cervical vertebrae rather than replacing the cervical disc with an artificial disc. Cervical disc replacement is a recently approved procedure in the U.S. It is one of the most challenging surgical procedures in the medical field due to the deficiencies in available diagnostic tools and the insufficient number of surgical practices. For physicians and surgical instrument developers, it is critical to understand how to successfully deploy the new artificial disc replacement systems. Without proper understanding and practice of the deployment procedure, it is possible to injure the vertebral body. Mixed reality (MR) and virtual reality (VR) surgical simulators are becoming an indispensable part of physicians’ training, since they offer a risk-free training environment. In this study, the MR simulation framework and the intricacies involved in the development of an MR simulator for the rasping procedure in artificial cervical disc replacement (ACDR) surgery are investigated. The major components that make up the MR surgical simulator with a motion tracking system are addressed. Findings A mixed reality surgical simulator targeting the rasping procedure in ACDR surgery with a VICON motion tracking system was developed. There were several challenges in the development of the MR surgical simulator. The first was the assembly of the different hardware components for surgical simulation development, which involves knowledge and application of interdisciplinary fields such as signal processing, computer vision and graphics, along with the design and placement of sensors. The second challenge was the creation of a physically correct model of the rasping procedure in order to attain critical forces; this was handled with finite element modeling.
The third challenge was minimizing the error in mapping movements of an actor in the real model to the virtual model, a process called registration. This issue was overcome by a two-way (virtual object to real domain and real domain to virtual object) semi-automatic registration method. Conclusions The applicability of the VICON MR setting for the ACDR surgical simulator is demonstrated. The main problems encountered in MR surgical simulator development are addressed. First, an effective environment for MR surgical development is constructed. Second, the strain and stress intensities and critical forces are simulated under various rasp instrument loadings, with impacts applied on the intervertebral surfaces of the anterior vertebrae throughout the rasping procedure. Third, two approaches are introduced to solve the registration problem in the MR setting. Results show that our system creates an effective environment for surgical simulation development and solves the tedious and time-consuming registration problems caused by misalignments. Further, the MR ACDR surgery simulator was tested by 5 different physicians, who found the simulator effective for teaching the anatomical details of cervical discs and for conveying the basics of the ACDR surgery and rasping procedure. PMID:20946594

  4. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  5. Learning viewpoint invariant object representations using a temporal coherence principle.

    PubMed

    Einhäuser, Wolfgang; Hipp, Jörg; Eggert, Julian; Körner, Edgar; König, Peter

    2005-07-01

    Invariant object recognition is arguably one of the major challenges for contemporary machine vision systems. In contrast, the mammalian visual system performs this task virtually effortlessly. How can we exploit our knowledge of the biological system to improve artificial systems? Our understanding of the mammalian early visual system has been augmented by the discovery that general coding principles could explain many aspects of neuronal response properties. How can such schemes be transferred to system-level performance? In the present study we train cells on a particular variant of the general principle of temporal coherence, the "stability" objective. These cells are trained on unlabeled real-world images without a teaching signal. We show that after training, the cells form a representation that is largely independent of the viewpoint from which the stimulus is viewed. This finding includes generalization to previously unseen viewpoints. The achieved representation is better suited for viewpoint-invariant object classification than the cells' input patterns. This ability to facilitate viewpoint-invariant classification is maintained even if training and classification take place in the presence of a distractor object, which is also unlabeled. In summary, we show that unsupervised learning using a general coding principle facilitates the classification of real-world objects that are not segmented from the background and undergo complex, non-isomorphic transformations.
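A common formulation of the temporal-coherence "stability" objective in this literature penalizes the squared temporal derivative of a cell's output, normalized by the output variance so that the trivial constant response is excluded. The sketch below illustrates that idea only; the paper's exact objective and optimization may differ:

```python
import numpy as np

def stability_objective(responses):
    """Temporal 'stability' score for one cell's response trace.

    Mean squared temporal difference divided by the response variance:
    lower is more stable (slowly varying); a constant trace is excluded
    as degenerate by returning infinity.
    """
    responses = np.asarray(responses, dtype=float)
    dt = np.diff(responses)            # finite-difference temporal derivative
    var = responses.var()
    if var == 0.0:
        return np.inf                  # constant output: trivial solution
    return (dt ** 2).mean() / var

# A slowly varying trace scores better (lower) than white noise.
slow = np.sin(np.linspace(0, np.pi, 100))
noisy = np.random.default_rng(0).standard_normal(100)
```

In training, such an objective would be minimized over the cell's parameters on image sequences, driving responses to vary slowly while the stimulus viewpoint changes.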

  6. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  7. AI And Early Vision - Part II

    NASA Astrophysics Data System (ADS)

    Julesz, Bela

    1989-08-01

    A quarter of a century ago I introduced two paradigms into psychology which in the intervening years have had a direct impact on the psychobiology of early vision and an indirect one on artificial intelligence (AI, or machine vision). The first, the computer-generated random-dot stereogram (RDS) paradigm (Julesz, 1960), at its very inception posed a strategic question both for AI and neurophysiology. The finding that stereoscopic depth perception (stereopsis) is possible without the many enigmatic cues of monocular form recognition, as assumed previously, demonstrated that stereopsis, with its basic problem of finding matches between corresponding random aggregates of dots in the left and right visual fields, was ripe for modeling. Indeed, the binocular matching problem of stereopsis opened up an entire field of study, eventually leading to the computational models of David Marr (1982) and his coworkers. The fusion of RDS had an even greater impact on neurophysiologists, including Hubel and Wiesel (1962), who realized that stereopsis must occur at an early stage and can be studied more easily than form perception. This insight recently culminated in the studies by Gian Poggio (1984), who found binocular-disparity-tuned neurons in the input stage to the visual cortex (layer IVB in V1) of the monkey that were selectively triggered by dynamic RDS. Thus the first paradigm led to a strategic insight: with stereoscopic vision there is no camouflage, so it was advantageous for our primate ancestors to evolve the cortical machinery of stereoscopic vision to capture camouflaged prey (insects) at a standstill. Amazingly, although stereopsis evolved relatively late in primates, it captured the very input stages of the visual cortex. (For a detailed review, see Julesz, 1986a.)

  8. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3-dimension. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  9. Intensity measurement of automotive headlamps using a photometric vision system

    NASA Astrophysics Data System (ADS)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive headlamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  10. Towards photorealistic and immersive virtual-reality environments for simulated prosthetic vision: integrating recent breakthroughs in consumer hardware and software.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J

    2014-01-01

    Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad, or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures of task performance. Kinect motion tracking resulted in lower performance compared with classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity with established input methods.

  11. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision technology using two or more cameras can recover 3D information from the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge pavement conditions within the field of view and measure obstacles on the road. In this paper, stereo vision for obstacle measurement and avoidance on the autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built and the software debugged, and the measurement performance is finally illustrated with measured data. Experiments show that a 3D reconstruction of the field of view can be obtained effectively by stereo vision, providing the basis for pavement condition judgment. Compared with the navigation radar used in unmanned vehicle measuring systems, the stereo vision system has the advantages of low cost, range, and so on, and it has good application prospects.
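The depth recovery underlying such a system follows the classic pinhole stereo relation Z = fB/d: depth is focal length times baseline divided by disparity. A minimal sketch (the function name and example numbers are illustrative, not taken from the paper):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo depth: Z = f * B / d.

    disparity_px : horizontal pixel offset between matched left/right points
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centers, in meters
    """
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):     # zero disparity -> infinite depth
        return focal_px * baseline_m / d
```

Nearer objects produce larger disparities, so depth falls as disparity grows; in practice the disparity map itself comes from a stereo matching step after calibration and rectification.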

  12. Self calibration of the stereo vision system of the Chang'e-3 lunar rover based on the bundle block adjustment

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan

    2017-06-01

    The Chang'e-3 was the first lunar soft-landing probe of China, composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of Mare Imbrium on December 14, 2013. The lunar rover carried out movement, imaging and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism and an inertial measurement unit (IMU). The Navcam system consisted of two cameras with fixed focal lengths. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field could be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system would change after the launch, orbital changes, braking and landing. Therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. The stereo vision system can be self-calibrated with the proposed method in the unknown lunar environment, and all parameters can be estimated simultaneously. The experiment was conducted in a ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The analyzed results proved that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was used in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.

  13. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    PubMed

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirement on visual neuroprosthetic characteristics to restore various functions such as reading, objects and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, vision distance was limited to 3, 6, or 9 m, respectively. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environments were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. 
These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
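The control rendering described in this record resizes the camera image onto the electrode array according to average pixel brightness. That operation amounts to a block-average downsample onto the 15 × 18 grid; a sketch under the simplifying assumption that the image dimensions divide evenly by the grid (function name is ours):

```python
import numpy as np

def control_rendering(image, rows=15, cols=18):
    """Downsample a grayscale image to an electrode grid by mean brightness.

    Each electrode's activation is the mean of the image pixels falling in
    its cell. Assumes image dimensions are multiples of the grid size,
    for brevity; real pipelines would resample arbitrary sizes.
    """
    h, w = image.shape
    bh, bw = h // rows, w // cols
    blocks = image[:bh * rows, :bw * cols].reshape(rows, bh, cols, bw)
    return blocks.mean(axis=(1, 3))
```

The distance-based and wireframe renderings in the study would replace the brightness input here with a depth map or an edge map before the same downsampling to the electrode grid.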

  14. An embedded vision system for an unmanned four-rotor helicopter

    NASA Astrophysics Data System (ADS)

    Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James

    2006-10-01

    In this paper, an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling daughter board (SAIL), attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has given very good results in its performance of a number of real-time robotic vision algorithms.

  15. Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft

    DTIC Science & Technology

    2017-06-01

    International Journal of Computer Science and Network Security 7, no. 3: 112–117. Accessed April 7, 2017. http://www.sciencedirect.com/science/article/pii... the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system. This thesis examines the... integration into an autonomous aircraft control system. 14. SUBJECT TERMS autonomous systems, auto-land, computer vision, image processing

  16. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outdoor test track, with positive results showing that at five mph the vehicle can follow a line while avoiding obstacles.

  17. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems with this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  18. NOAA's operational path forward: Developing the Coyote UASonde

    NASA Astrophysics Data System (ADS)

    Cione, J.; Twining, K.; Silah, M.; Brescia, T.; Kalina, E.; Farber, A.; Troudt, C.; Ghanooni, A.; Baker, B.; Dumas, E. J.; Hock, T. F.; Smith, J.; French, J.; Fairall, C. W.; deBoer, G.; Bland, G.

    2016-12-01

    Since 2009, NOAA has shown an interest in using the air-deployed Coyote Unmanned Aircraft System (UAS) for low-altitude hurricane reconnaissance. In September of 2014, NOAA conducted two successful missions into Hurricane Edouard using this innovative observing tool. Since then, NOAA has continued to invest time and resources into the Coyote platform. These efforts include plans to release up to 7 additional Coyote UAS into tropical cyclones using NOAA's P-3 Hurricane Hunter manned aircraft in 2016. A longer-term goal for this multi-institutional partnership will be to modify the existing UAS design such that the next generation platform will be capable of conducting routine observations in direct support of a wide array of NOAA operations that extend beyond hurricane surveillance. The vision for this potentially transformative platform, dubbed the Coyote UASonde, will be to heavily leverage NOAA's existing capabilities, incorporate significant upgrades to the existing payload and employ an expert navigation and data communication system that utilizes artificial intelligence. A brief summary of Coyote successes to date as well as a future roadmap that leads NOAA towards an operationally-viable Coyote UASonde will be presented.

  19. Study on road sign recognition in LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2016-02-01

    Road and traffic sign identification is a field of study that can be used to aid the development of in-car advisory systems. It uses computer vision and artificial intelligence to extract road signs from outdoor images acquired by a camera in uncontrolled lighting conditions, where they may be occluded by other objects or may suffer from problems such as color fading, disorientation, and variations in shape and size. An automatic means of identifying traffic signs under these conditions can make a significant contribution to the development of Intelligent Transport Systems (ITS) that continuously monitor the driver, the vehicle, and the road. Road and traffic signs are characterized by a number of features which make them recognizable in the environment. Road signs are located in standard positions and have standard shapes, standard colors, and known pictograms. These characteristics make them suitable for image identification. Traffic sign identification covers two problems: traffic sign detection and traffic sign recognition. Traffic sign detection is meant for the accurate localization of traffic signs in the image space, while traffic sign recognition handles the labeling of such detections into specific traffic sign types or subcategories [1].

  20. Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots.

    PubMed

    Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar

    2004-07-01

    In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast, dynamic game that demands an efficient and robust vision system. The vision system is also generally applicable to other robot applications such as mobile transport robots in production and warehouses, attendant robots, fast vision tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes; at the same time, a segmentation algorithm finds the corresponding regions belonging to one of the classes. In the second step, all the regions are examined, and those that are part of an observed object are selected by means of simple logic procedures. The novelty is focused on optimizing the processing time needed to estimate possible object positions. Better results are achieved by implementing camera calibration and a shading correction algorithm: the former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
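The shading correction mentioned in this record can be illustrated with standard flat-field normalization: divide the image by a reference image of a uniformly colored surface captured under the same lighting, so that the illumination gradient across the playing field is flattened. The function name and the flat-field assumption are ours, not the paper's exact algorithm:

```python
import numpy as np

def shading_correction(image, flat_field):
    """Flatten uneven illumination using a flat-field reference image.

    flat_field is an image of a uniform reference surface under the same
    lighting; scaling by flat_field.mean() / flat_field removes the
    spatial illumination gradient while preserving overall brightness.
    """
    flat = np.asarray(flat_field, dtype=float)
    gain = flat.mean() / np.clip(flat, 1e-6, None)   # avoid divide-by-zero
    return np.asarray(image, dtype=float) * gain
```

After correction, fixed color-class thresholds (step one of the pipeline) behave consistently across bright and dim regions of the field.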

  1. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  2. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
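The ray-casting idea in this record (scoring candidate headings by how far a ray travels before hitting an obstacle in the fused camera/laser data) can be sketched on a simple occupancy grid. This is a simplified illustration, not the team's implementation; names and the grid representation are assumptions:

```python
def cast_ray(grid, start, step, max_steps=100):
    """Walk a ray across an occupancy grid until it hits an obstacle.

    grid  : 2-D list of 0 (free) / 1 (obstacle) cells
    start : (row, col) starting cell; step : (drow, dcol) per iteration
    Returns the number of free cells traversed, a crude clearance
    measure for ranking candidate headings.
    """
    r, c = start
    for n in range(max_steps):
        r += step[0]
        c += step[1]
        ri, ci = int(round(r)), int(round(c))
        if not (0 <= ri < len(grid) and 0 <= ci < len(grid[0])):
            return n              # ray left the mapped area: fully clear
        if grid[ri][ci]:
            return n              # hit an obstacle
    return max_steps
```

Casting a fan of such rays from the robot's cell and steering toward the longest clear ray gives a basic path-selection loop; fractional steps let the same routine approximate arbitrary headings.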

  3. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  4. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  5. Audible vision for the blind and visually impaired in indoor open spaces.

    PubMed

    Yu, Xunyi; Ganz, Aura

    2012-01-01

    In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life, ever-changing environments crowded with people.

  6. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).

  7. An improved clustering algorithm based on reverse learning in intelligent transportation

    NASA Astrophysics Data System (ADS)

    Qiu, Guoqing; Kou, Qianqian; Niu, Ting

    2017-05-01

    With the development of artificial intelligence and data mining technology, big data has gradually entered people's field of vision. In the processing of large data, clustering is an important method. By introducing a reverse-learning method into the clustering process of the PAM (Partitioning Around Medoids) algorithm, the limitations of one-pass clustering in unsupervised learning are reduced and the diversity of the resulting clusters is increased, thereby improving clustering quality. Algorithm analysis and experimental results show that the algorithm is feasible.
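    The abstract does not specify the update rule, but the core idea of opposition-based ("reverse") learning layered on a PAM-style clustering can be sketched as follows. This is a minimal illustration under stated assumptions: the 1-D distance, the reflection rule, and all function names are hypothetical, not the authors' implementation.

    ```python
    import random

    def assign(points, medoids):
        # Assign each point to its nearest medoid (1-D data for simplicity).
        clusters = {m: [] for m in medoids}
        for p in points:
            nearest = min(medoids, key=lambda m: abs(p - m))
            clusters[nearest].append(p)
        return clusters

    def cost(clusters):
        # Total within-cluster absolute deviation from the medoids.
        return sum(abs(p - m) for m, pts in clusters.items() for p in pts)

    def reverse_candidates(medoids, lo, hi):
        # "Reverse learning": reflect each medoid across the data range,
        # producing a second, deliberately diverse candidate solution.
        return [lo + hi - m for m in medoids]

    def cluster_with_reverse_learning(points, k, seed=0):
        random.seed(seed)
        lo, hi = min(points), max(points)
        medoids = random.sample(points, k)
        best = assign(points, medoids)
        # Evaluate the reversed medoid set too, and keep the cheaper clustering.
        rev = assign(points, reverse_candidates(medoids, lo, hi))
        if cost(rev) < cost(best):
            best = rev
        return best
    ```

    Evaluating the reflected candidates alongside the random ones is what gives the search its extra diversity; a full PAM implementation would additionally iterate medoid swaps to convergence.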

  8. Science of photobiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, K.C.

    1974-01-01

    Formation of the American Society for Photobiology for meeting the needs of investigators studying the effects of light on man and other organisms is described. The scientific program of the first meeting of the society included discussions of the following topics: detrimental effects of excessive exposure to sunlight and artificial uv radiation; repair of uv-induced damage to cells; the roles of light in the human environment; photochemistry in photobiology; effects of near uv light; photosynthesis; bioluminescence; light perception without eyes; chronobiology; the ultraviolet world of insects; vision; and solar energy conversion. (HLW)

  9. The Next Wave: Humans, Computers, and Redefining Reality

    NASA Technical Reports Server (NTRS)

    Little, William

    2018-01-01

    The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to " exploration into the growing computer fields of Extended Reality and the Natural User Interface (it is) a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current new and future work in Computer Vision, Speech Recognition, and Artificial Intelligence are also outlined.

  10. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 7. Parallel, Structural, and Optimal Techniques in Vision

    DTIC Science & Technology

    1989-03-01

    One test object, built from Tinker Toys, is a model of the dinosaur Tyrannosaurus Rex. This particular test case is characterized by sharply discontinuous depths varying over a wide range. [Figures 7-10: T. Rex scene - left and right images of the Tinker Toy object, and the connected contours extracted from each image.]

  11. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation the control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.

  12. Three-dimensional vision enhances task performance independently of the surgical method.

    PubMed

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25 % to 30 % longer to complete and more complex tasks took 75 % longer with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  13. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

    In the process of miniature parts' assembly, the structural features on the bottom or side of the parts often need to be aligned and positioned. The general assembly equipment integrated with one vertical downward machine vision system cannot satisfy the requirement. A precision automatic assembly equipment was developed with double machine vision systems integrated. In the system, a horizontal vision system is employed to measure the position of the feature structure at the parts' side view, which cannot be seen with the vertical one. The position measured by horizontal camera is converted to the vertical vision system with the calibration information. By careful calibration, the parts' alignment and positioning in the assembly process can be guaranteed. The developed assembly equipment has the characteristics of easy implementation, modularization and high cost performance. The handling of the miniature parts and assembly procedure were briefly introduced. The calibration procedure was given and the assembly error was analyzed for compensation.

  14. Computer vision for foreign body detection and removal in the food industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...

  15. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions: Visual Odometry This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. 
Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
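    The gradient-filter test in step 3 rests on the stated assumption that image intensity varies little along the edge of an artificial object. A minimal sketch of that test follows; the variance threshold, the central-difference gradient, and the function names are illustrative assumptions, not the system's actual algorithm.

    ```python
    def gradient_along_segment(image, pixels):
        # Central-difference gradient magnitude at each interior pixel
        # of a line segment, given a grayscale image as a 2-D list.
        grads = []
        for r, c in pixels:
            gx = image[r][c + 1] - image[r][c - 1]
            gy = image[r + 1][c] - image[r - 1][c]
            grads.append((gx * gx + gy * gy) ** 0.5)
        return grads

    def looks_artificial(image, pixels, var_threshold=4.0):
        # Low gradient variance along the segment suggests a man-made edge
        # (e.g., a building); high variance suggests a natural boundary.
        g = gradient_along_segment(image, pixels)
        mean = sum(g) / len(g)
        var = sum((x - mean) ** 2 for x in g) / len(g)
        return var < var_threshold
    ```

    A sharp, uniform step edge (constant gradient along its length) passes this test, while a boundary against textured foliage, where the gradient fluctuates pixel to pixel, fails it.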

  16. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    PubMed

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
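    In the rate-based limit described above, the spike-timing rule reduces to classical Hebbian updating driven by average firing rates. A hedged sketch of that limit, assuming binary spike trains sampled at 1 ms and an illustrative learning rate (not the CAVIAR chip's actual circuit behavior):

    ```python
    def spike_rate(train, dt=0.001):
        # Mean firing rate (Hz) of a binary spike train sampled every dt seconds.
        return sum(train) / (len(train) * dt)

    def hebbian_update(w, pre, post, eta=1e-6):
        # Classical rate-based Hebbian rule: dw = eta * r_pre * r_post.
        # The chip computes no explicit averages; this is the effective
        # behavior its spike-based rule is tuned to express.
        return w + eta * spike_rate(pre) * spike_rate(post)
    ```

    Note the contrast with the chip itself: the hardware updates weights from exact spike-pair timings, so the rates here appear only in the mathematical description of its tuned regime.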

  17. Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.

    PubMed

    Parton, Becky Sue

    2006-01-01

    In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.

  18. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  19. Automatic Satellite Telemetry Analysis for SSA using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Stottler, R.; Mao, J.

    In April 2016, General Hyten, commander of Air Force Space Command, announced the Space Enterprise Vision (SEV) (http://www.af.mil/News/Article-Display/Article/719941/hyten-announces-space-enterprise-vision/). The SEV addresses increasing threats to space-related systems. The vision includes an integrated approach across all mission areas (communications, positioning, navigation and timing, missile warning, and weather data) and emphasizes improved access to data across the entire enterprise and the ability to protect space-related assets and capabilities. "The future space enterprise will maintain our nation's ability to deliver critical space effects throughout all phases of conflict," Hyten said. Satellite telemetry is going to become available to a new audience. While that telemetry information should be valuable for achieving Space Situational Awareness (SSA), these new satellite telemetry data consumers will not know how to utilize it. We were tasked with applying AI techniques to build an infrastructure to process satellite telemetry into higher abstraction level symbolic space situational awareness and to initially populate that infrastructure with useful data analysis methods. We are working with two organizations, Montana State University (MSU) and the Air Force Academy, both of whom control satellites and therefore currently analyze satellite telemetry to assess the health and circumstances of their satellites. The design which has resulted from our knowledge elicitation and cognitive task analysis is a hybrid approach which combines symbolic processing techniques of Case-Based Reasoning (CBR) and Behavior Transition Networks (BTNs) with current Machine Learning approaches. BTNs are used to represent the process and associated formulas to check telemetry values against anticipated problems and issues. CBR is used to represent and retrieve BTNs that represent an investigative process that should be applied to the telemetry in certain circumstances. 
Machine Learning is used to learn normal patterns of telemetry, learn pre-mission simulated telemetry patterns that represent known problems, and detect both pre-trained known and unknown abnormalities in real-time. The operational system is currently being implemented and applied to real satellite telemetry data. This paper presents the design, examples, and results of the first version as well as planned future work.

  20. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed from a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters from laser line imaging, and the approximation networks compute three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves on the accuracy of traditional microscope calibration, which relies on references external to the microscope system. The capability of the self-calibration based on soft computing algorithms is assessed via the calibration accuracy and the micro-scale measurement error, and this contribution is corroborated by an evaluation against the accuracy of traditional microscope calibration.

  1. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appears feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  2. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real-time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real-time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
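    The task-queue-server scheme described above, in which idle processors fetch tasks and post results to shared memory without a centralized controller, can be sketched in present-day terms with a thread pool. The function names and the per-tile worker are illustrative assumptions; the original ran as analog-era array-processor code, not Python.

    ```python
    import queue
    import threading

    def run_task_server(tasks, worker_fn, n_workers=4):
        # Shared task queue: each idle worker pulls the next (task_id, data)
        # pair, processes it, and posts the result to shared storage.
        # Load balancing emerges from the pull model; no scheduler needed.
        q = queue.Queue()
        results = {}
        lock = threading.Lock()
        for t in tasks:
            q.put(t)

        def worker():
            while True:
                try:
                    task_id, data = q.get_nowait()
                except queue.Empty:
                    return  # queue drained; this worker goes idle
                out = worker_fn(data)
                with lock:
                    results[task_id] = out  # "shared memory" for later use
                q.task_done()

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return results
    ```

    For a vision workload, `tasks` would be image tiles and `worker_fn` a per-tile operation (filtering, thresholding), mirroring the recursive task subdivision the abstract describes.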

  3. A Machine Vision Quality Control System for Industrial Acrylic Fibre Production

    NASA Astrophysics Data System (ADS)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João

    2002-12-01

    This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under a contract with a leading industrial manufacturer of acrylic fibres. It monitors several parameters of the acrylic production process. This paper presents, after a brief overview of the system, a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some of the results of online operation are also presented.

  4. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  5. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  6. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  7. COMPARISON OF RECENTLY USED PHACOEMULSIFICATION SYSTEMS USING A HEALTH TECHNOLOGY ASSESSMENT METHOD.

    PubMed

    Huang, Jiannan; Wang, Qi; Zhao, Caimin; Ying, Xiaohua; Zou, Haidong

    2017-01-01

    To compare the recently used phacoemulsification systems using a health technology assessment (HTA) model. A self-administered questionnaire, which included questions gauging opinions of the recently used phacoemulsification systems, was distributed to the chief cataract surgeons in the ophthalmology departments of eighteen tertiary hospitals in Shanghai, China. A series of senile cataract patients undergoing phacoemulsification surgery were enrolled in the study. The surgical results and the average costs related to their surgeries were recorded and compared across the recently used phacoemulsification systems. The four phacoemulsification systems currently used in Shanghai are the Infiniti Vision, Centurion Vision, WhiteStar Signature, and Stellaris Vision Enhancement systems. All of the doctors confirmed that the systems they used would help cataract patients recover vision. A total of 150 cataract patients who underwent phacoemulsification surgery were enrolled in the present study. A significant difference was found among the four groups in cumulative dissipated energy, with the lowest value found in the Centurion group. No serious complications were observed, and a positive trend in visual acuity was found in all four groups after cataract surgery. The highest total cost of surgery was associated with procedures conducted using the Centurion Vision system, and the differences between systems were mainly due to the cost of the consumables used in the different surgeries. This HTA comparison of four recently used phacoemulsification systems found that each system offers a satisfactory vision recovery outcome, but that the systems differ in surgical efficacy and cost.

  8. Computational gestalts and perception thresholds.

    PubMed

    Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel

    2003-01-01

    In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalts", from the atomic retinal input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. We then explain why the Gestalt theory research programme can be translated into a computer vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using these advances, we show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are set in a position where we can build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, in which we compared the gestalt detection performance of several subjects with the predicted detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in computer vision and psychophysical experiments.

  9. A Standardized Obstacle Course for Assessment of Visual Function in Ultra Low Vision and Artificial Vision

    PubMed Central

    Nau, Amy Catherine; Pintar, Christine; Fisher, Christopher; Jeong, Jong-Hyeon; Jeong, KwonHo

    2014-01-01

    We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultra-low vision. Six sighted controls and 36 completely blind but otherwise healthy adult male (n=29) and female (n=13) subjects (age range 19-85 years) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course prior to, and after, BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed effects model to compare different categories in the PPWS (percent preferred walking speed) and error percent data to show that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that, for the outcomes of percent error and percentage preferred walking speed, each of the three courses is different, while within each level the three iterations are equivalent. This allows for randomization of the courses during administration. Abbreviations: preferred walking speed (PWS); course speed (CS); percentage preferred walking speed (PPWS). PMID:24561717
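The two outcome measures are simple per-run ratios. A minimal sketch of how they might be computed (the definition of percent error as obstacle contacts over obstacles on the course is an assumption for illustration; the paper's exact scoring may differ):

```python
def ppws(course_speed, preferred_walking_speed):
    # Percentage Preferred Walking Speed (PPWS): speed achieved on the
    # obstacle course (CS) as a percentage of the subject's unobstructed
    # preferred walking speed (PWS).
    return 100.0 * course_speed / preferred_walking_speed

def percent_error(contacts, n_obstacles):
    # Hypothetical error measure: obstacles contacted during a run as a
    # percentage of obstacles present on the course.
    return 100.0 * contacts / n_obstacles
```

Expressing course speed relative to each subject's own preferred speed is what lets runs be pooled across subjects with very different baseline mobility.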

  10. Artificial Intelligence and Information Retrieval.

    ERIC Educational Resources Information Center

    Teodorescu, Ioana

    1987-01-01

    Compares artificial intelligence and information retrieval paradigms for natural language understanding, reviews progress to date, and outlines the applicability of artificial intelligence to question answering systems. A list of principal artificial intelligence software for database front end systems is appended. (CLB)

  11. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

  12. Health system vision of iran in 2025.

    PubMed

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its context was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1), with regard to health in all policies, accountability and innovation". An explanatory context was also compiled to create a complete image of the vision. Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered the major values and ideals in the society of Iran; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system.

  13. 77 FR 36331 - Nineteenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... Document--Draft DO-XXX, Minimum Aviation Performance Standards (MASPS) for an Enhanced Flight Vision System... Discussion (9:00 a.m.-5:00 p.m.) Provide Comment Resolution of Document--Draft DO-XXX, Minimum Aviation.../Approve FRAC Draft for PMC Consideration--Draft DO- XXX, Minimum Aviation Performance Standards (MASPS...

  14. Robust Spatial Autoregressive Modeling for Hardwood Log Inspection

    Treesearch

    Dongping Zhu; A.A. Beex

    1994-01-01

    We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...

  15. Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer

    2005-01-01

    Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.

  16. Industrial Inspection with Open Eyes: Advance with Machine Vision Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Niel, Kurt

    Machine vision systems have evolved significantly with the technology advances to tackle the challenges from modern manufacturing industry. A wide range of industrial inspection applications for quality control are benefiting from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection. The combination of multiple technologies makes it possible for the inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.

  17. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  18. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    PubMed

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer multi-degree-of-freedom vibrations induced by the moving vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  19. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for the detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds, the excellent localization of computer vision and the strong identity information provided by the radio system, and is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for the evaluation of systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485

  20. Design of a surgical robot with dynamic vision field control for Single Port Endoscopic Surgery.

    PubMed

    Kobayashi, Yo; Sekiguchi, Yuta; Tomono, Yu; Watanabe, Hiroki; Toyoda, Kazutaka; Konishi, Kozo; Tomikawa, Morimasa; Ieiri, Satoshi; Tanoue, Kazuo; Hashizume, Makoto; Fujie, Masaktsu G

    2010-01-01

    Recently, a robotic system was developed to assist Single Port Endoscopic Surgery (SPS). However, the existing system required a manual change of vision field, hindering the surgical task and increasing the degrees of freedom (DOFs) of the manipulator. We proposed a surgical robot for SPS with dynamic vision field control, the endoscope view being manipulated by a master controller. The prototype robot consisted of a positioning and sheath manipulator (6 DOF) for vision field control, and dual tool tissue manipulators (gripping: 5DOF, cautery: 3DOF). Feasibility of the robot was demonstrated in vitro. The "cut and vision field control" (using tool manipulators) is suitable for precise cutting tasks in risky areas while a "cut by vision field control" (using a vision field control manipulator) is effective for rapid macro cutting of tissues. A resection task was accomplished using a combination of both methods.

  1. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  2. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users.

    PubMed

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2015-07-01

    Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population to evaluate the impact of new vision-restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional visual ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional visual tasks for observation of performance and a case narrative summary. Results were analysed to determine whether the interview questions and functional visual tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, 26 subjects were assessed with the FLORA. Seven different evaluators administered the assessment. All 14 interview questions were asked. All 35 tasks for functional vision were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options, impossible (33 per cent), difficult (23 per cent), moderate (24 per cent) and easy (19 per cent), were used by the evaluators. Evaluators also judged the amount of vision they observed the subjects using to complete the various tasks, with 'vision only' occurring 75 per cent on average with the System ON, and 29 per cent with the System OFF. The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as an assessment tool for functional vision and well-being. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  3. The role of vision processing in prosthetic vision.

    PubMed

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both from residual damage to the visual system caused by the condition that led to vision loss, and from limitations of current technology. Even with these limitations, however, prosthetic vision may still support functional performance sufficient for tasks that are key to restoring independent living and quality of life. Here vision processing can play an important role, ensuring that the information most critical to the performance of those tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
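To make the reduced resolution and dynamic range concrete, here is a minimal sketch that simulates a prosthetic percept by average-pooling an 8-bit grayscale image down to a coarse electrode grid and quantizing to a few brightness levels. The grid size and level count are illustrative assumptions, not parameters of any particular device:

```python
def phosphene_preview(img, grid=(4, 4), levels=4):
    # Simulate a prosthetic percept: average-pool the image to a coarse
    # electrode grid (reduced resolution), then quantize each cell to a
    # small number of brightness levels (reduced dynamic range).
    h, w = len(img), len(img[0])
    gh, gw = grid
    out = []
    for gy in range(gh):
        row = []
        for gx in range(gw):
            ys = range(gy * h // gh, (gy + 1) * h // gh)
            xs = range(gx * w // gw, (gx + 1) * w // gw)
            vals = [img[y][x] for y in ys for x in xs]
            mean = sum(vals) / len(vals)
            # Snap the cell mean to the nearest of `levels` brightness steps.
            row.append(round(mean * (levels - 1) / 255) * (255 // (levels - 1)))
        out.append(row)
    return out
```

Vision processing in this setting amounts to deciding, before this lossy stage, which image structure should survive it, e.g. boosting edges of obstacles so they remain visible in the pooled grid.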

  4. Climate Change Adaptation: Putting Principles into Practice

    NASA Astrophysics Data System (ADS)

    Ausden, Malcolm

    2014-10-01

    Carrying out wildlife conservation in a changing climate requires planning on long timescales at both a site and network level, while also having the flexibility to adapt actions at sites over short timescales in response to changing conditions and new information. The Royal Society for the Protection of Birds (RSPB), a land-owning wildlife conservation charity in the UK, achieves this on its nature reserves through its system of management planning. This involves setting network-wide objectives which inform the 25-year vision and 5-year conservation objectives for each site. Progress toward achieving each site's conservation objectives is reviewed annually, to identify any adjustments which might be needed to the site's management. The conservation objectives and 25-year vision of each site are reviewed every 5 years. Significant long-term impacts of climate change most frequently identified at RSPB reserves are: loss of intertidal habitat through coastal squeeze, loss of low-lying islands due to higher sea levels and coastal erosion, loss of coastal freshwater and brackish wetlands due to increased coastal flooding, and changes in the hydrology of wetlands. The main types of adaptation measures in place on RSPB reserves to address climate change-related impacts are: re-creation of intertidal habitat, re-creation and restoration of freshwater wetlands away from vulnerable coastal areas, blocking artificial drainage on peatlands, and addressing pressures on freshwater supply for lowland wet grasslands in eastern and southeastern England. Developing partnerships between organizations has been crucial in delivering large-scale adaptation projects.

  5. Hierarchical Modelling Of Mobile, Seeing Robots

    NASA Astrophysics Data System (ADS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-03-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  6. Hierarchical modelling of mobile, seeing robots

    NASA Technical Reports Server (NTRS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-01-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  7. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  8. Marking parts to aid robot vision

    NASA Technical Reports Server (NTRS)

    Bales, J. W.; Barker, L. K.

    1981-01-01

    The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.
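Computing an object's location and orientation from a segmented marker is a classic image-moments computation. A minimal sketch, assuming the marker's pixels have already been isolated by color classification (the abstract's color-code identity lookup is omitted; the moments-based orientation here is a standard technique, not necessarily the paper's exact method):

```python
from math import atan2

def marker_pose(pixels):
    # pixels: (x, y) image coordinates classified as belonging to the marker.
    # Returns the centroid and the principal-axis orientation (radians),
    # derived from the second-order central image moments.
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    theta = 0.5 * atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

Because centroid and moments are linear scans over the classified pixels, this kind of pose estimate is cheap enough for the real-time operation the abstract describes.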

  9. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  10. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    PubMed Central

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using usability measures (time-to-start (Ts) and time-to-complete (Tc)). A two-tailed paired samples t-test confirmed that in the case of disabilities the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09 (P < 0.001) and Tc = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities, the mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. The results recommend that developers and researchers use this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310
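The comparison above rests on a paired-samples t statistic over matched condition measurements. A minimal sketch of how that statistic is computed (pure Python; the two-tailed p-value lookup against the t distribution with n-1 degrees of freedom is omitted):

```python
from math import sqrt

def paired_t(a, b):
    # Paired-samples t statistic: each subject is measured under both
    # conditions, so we test whether the mean within-subject difference
    # differs from zero.
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / sqrt(var / n)  # t with n - 1 degrees of freedom
```

Pairing removes between-subject variability, which matters in a within-subjects design like this one where baseline interaction times differ greatly across users.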

  11. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for the estimation and compensation of geometric transformations of images, an algorithm for the detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.
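A minimal sketch of the first two stages of such a pipeline, with a brute-force translational model standing in for the paper's geometric-transform estimation (an onboard system would presumably use a richer transform model and an efficient search; the window size and threshold here are illustrative):

```python
def estimate_shift(prev, cur, max_shift=2):
    # Stage 1 (simplified): estimate the camera-induced geometric transform,
    # restricted here to pure translation, by exhaustive SAD minimization
    # over integer shifts. prev/cur are 2-D lists of grayscale intensities.
    h, w = len(prev), len(prev[0])
    best_sad, best = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad = 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(prev[y][x] - cur[y + dy][x + dx])
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best

def moving_mask(prev, cur, shift, thresh=30):
    # Stage 2: compensate the estimated camera motion, then flag pixels
    # whose residual frame difference exceeds a threshold as moving objects.
    dx, dy = shift
    h, w = len(prev), len(prev[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                mask[y][x] = abs(cur[sy][sx] - prev[y][x]) > thresh
    return mask
```

Without the compensation step, every pixel changed by camera motion would be flagged, which is exactly why the abstract lists transform estimation as the first algorithm in the set.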

  12. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  13. The Efficacy of Optometric Vision Therapy.

    ERIC Educational Resources Information Center

    Journal of the American Optometric Association, 1988

    1988-01-01

    This review aims to document the efficacy and validity of vision therapy for modifying and improving vision functioning. The paper describes the essential components of the visual system and disorders which can be physiologically and clinically identified. Vision therapy is defined as a clinical approach for correcting and ameliorating the effects…

  14. Functional vision and cognition in infants with congenital disorders of the peripheral visual system.

    PubMed

    Dale, Naomi; Sakkalou, Elena; O'Reilly, Michelle; Springall, Clare; De Haan, Michelle; Salt, Alison

    2017-07-01

    To investigate how vision relates to early development by studying vision and cognition in a national cohort of 1-year-old infants with congenital disorders of the peripheral visual system and visual impairment. This was a cross-sectional observational investigation of a nationally recruited cohort of infants with 'simple' and 'complex' congenital disorders of the peripheral visual system. Entry age was 8 to 16 months. Vision level (Near Detection Scale) and non-verbal cognition (sensorimotor understanding, Reynell Zinkin Scales) were assessed. Parents completed demographic questionnaires. Of 90 infants (49 males, 41 females; mean 13mo, standard deviation [SD] 2.5mo; range 7-17mo); 25 (28%) had profound visual impairment (light perception at best) and 65 (72%) had severe visual impairment (basic 'form' vision). The Near Detection Scale correlated significantly with sensorimotor understanding developmental quotients in the 'total', 'simple', and 'complex' groups (all p<0.001). Age and vision accounted for 48% of sensorimotor understanding variance. Infants with profound visual impairment, especially in the 'complex' group with congenital disorders of the peripheral visual system with known brain involvement, showed the greatest cognitive delay. Lack of vision is associated with delayed early-object manipulative abilities and concepts; 'form' vision appeared to support early developmental advance. This paper provides baseline characteristics for cross-sectional and longitudinal follow-up investigations in progress. A methodological strength of the study was the representativeness of the cohort according to national epidemiological and population census data. © 2017 Mac Keith Press.

  15. New approach for cognitive analysis and understanding of medical patterns and visualizations

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Tadeusiewicz, Ryszard

    2003-11-01

    This paper presents new opportunities for applying linguistic description of picture merit content and AI methods to the task of automatic understanding of image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and the expectations taken from the representation of medical knowledge, it is possible to understand the merit content of the image even if the form of the image is very different from any known pattern. This article shows that structural techniques of artificial intelligence may be applied to tasks of automatic classification and machine perception based on semantic pattern content in order to determine the semantic meaning of the patterns. The paper describes examples of applying such techniques in the creation of cognitive vision systems for selected classes of medical images. On the basis of the scientific research described in the paper, we are building new systems for collecting, storing, retrieving, and intelligently interpreting selected medical images, especially those obtained in radiological and MRI examinations.

  16. The future of obstetrics/gynecology in 2020: a clearer vision. Transformational forces and thriving in the new system.

    PubMed

    Lagrew, David C; Jenkins, Todd R

    2015-01-01

    Revamping the delivery of women's health care to meet future demands will require a number of changes. In the first 2 articles of this series, we introduced the reasons for change, suggested the use of the 'Triple Aim' concept to (1) improve the health of a population, (2) enhance the patient experience, and (3) control costs as a guide post for changes, and reviewed the transformational forces of payment and care system reform. In the final article, we discuss the valuable use of information technology and disruptive clinical technologies. The new health care system will require a digital transformation so that there can be increased communication, availability of information, and ongoing assessment of clinical care. This will allow for more cost-effective and individualized treatments as data are securely shared between patients and providers. Scientific advances that radically change clinical practice are coming at an accelerated pace as the underlying technologies of genetics, robotics, artificial intelligence, and molecular biology are translated into tools for diagnosis and treatment. Thriving in the new system not only will require time-honored traits such as leadership and compassion but also will require the obstetrician/gynecologist to become comfortable with technology, care redesign, and quality improvement. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    PubMed

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems that can operate micro/nanorobots in complex and dynamically changing environments remains an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle's movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system, relying on vision-based closed-loop control and path planning, is highly promising for the autonomous operation of micro/nanoscale vehicles in the complex, dynamic, and unpredictable settings expected in a variety of realistic applications.
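    The closed-loop scheme described above (localize the vehicle, detect obstacles, plan a collision-free route, then steer toward it) can be sketched with a minimal grid planner. The occupancy grid, the breadth-first search, and all names below are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest collision-free route on a 2D occupancy grid via breadth-first
    search; grid[y][x] == 1 marks an obstacle seen by the vision system."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:                      # reconstruct the route
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < cols and 0 <= ny < rows
                    and grid[ny][nx] == 0 and (nx, ny) not in prev):
                prev[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    return None  # no collision-free route exists

# When the environment changes, re-plan by calling plan_path on the new grid.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
route = plan_path(grid, (0, 0), (3, 3))
```

    Re-running the planner on each camera frame gives the adaptive behaviour the abstract describes: the route is recomputed whenever an obstacle appears in the grid.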

  18. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.
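    A low-computation navigational map of the kind described can be sketched as a simple traversability grid; the height-difference threshold and all names below are illustrative assumptions, not the Mobot Vision System's actual algorithm.

```python
def traversability_map(heights, max_step=0.2):
    """Mark a cell unsafe if its height differs from any 4-neighbour by more
    than max_step; a crude stand-in for a rover's hazard criterion."""
    rows, cols = len(heights), len(heights[0])
    safe = [[True] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if (0 <= nx < cols and 0 <= ny < rows
                        and abs(heights[y][x] - heights[ny][nx]) > max_step):
                    safe[y][x] = False
    return safe

# A 0.9 m boulder in the middle of otherwise gentle terrain:
heights = [[0.0, 0.1, 0.1],
           [0.0, 0.9, 0.1],
           [0.0, 0.0, 0.0]]
safe = traversability_map(heights)
```

    Only local height differences are examined, so the cost stays linear in the number of cells, in the spirit of the low-computation goal stated in the abstract.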

  19. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo ...from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process...Wiimotes) used in Nintendo Wii games. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D
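    The DTIC snippet above is fragmentary, but the underlying idea of a calibrated stereo pair recovering 3D positions can be illustrated with the classic rectified-stereo disparity relation Z = f*B/d; the numbers and function below are illustrative, not taken from the report.

```python
def triangulate(xl, xr, baseline, focal):
    """Rectified-stereo depth from disparity: Z = focal * baseline / (xl - xr).
    xl and xr are the horizontal image coordinates (in pixels) of the same
    point as seen by the left and right cameras."""
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    Z = focal * baseline / disparity   # depth along the optical axis
    X = xl * Z / focal                 # lateral position from the left camera
    return X, Z

# Hypothetical numbers: focal length 100 px, 0.25 m baseline, 2 px disparity.
X, Z = triangulate(xl=10.0, xr=8.0, baseline=0.25, focal=100.0)
```

    Calibration (the topic of the report) is what supplies accurate values of the focal length, baseline, and image coordinates that this relation consumes.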

  20. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
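    The cartridge layout described above can be sketched as follows; the Gaussian weighting and the hexagon-like offsets are illustrative assumptions, since the paper's exact photoreceptor geometry is not reproduced here.

```python
import math

def gaussian_response(image, cx, cy, sigma=1.0, radius=2):
    """Response of one photoreceptor: a Gaussian-weighted average of image
    intensity around its centre (cx, cy)."""
    num = den = 0.0
    for y in range(max(0, cy - radius), min(len(image), cy + radius + 1)):
        for x in range(max(0, cx - radius), min(len(image[0]), cx + radius + 1)):
            w = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            num += w * image[y][x]
            den += w
    return num / den

def cartridge_responses(image, cx, cy, spacing=2):
    """Seven photoreceptors per cartridge: one centre plus six neighbours on a
    hexagon-like ring (illustrative offsets, not the biological geometry).
    The spacing parameter controls how much detail the cartridge resolves."""
    offsets = [(0, 0), (spacing, 0), (-spacing, 0),
               (0, spacing), (0, -spacing),
               (spacing, spacing), (-spacing, -spacing)]
    return [gaussian_response(image, cx + dx, cy + dy) for dx, dy in offsets]

# On a uniform image every photoreceptor reports the same intensity.
image = [[10.0] * 9 for _ in range(9)]
responses = cartridge_responses(image, 4, 4)
```

    Differences between the seven responses, rather than the responses themselves, are what a feature-extraction stage would pass on to neighboring cartridges.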

  1. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and at the same time improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast & accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.

  2. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors

    PubMed Central

    Everding, Lukas; Conradt, Jörg

    2018-01-01

    In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment at a fixed frequency. Most prominently, the same features have to be found in consecutive frames and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasi-continuous stream of visual information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously, without a periodic sampling rate, and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, hence it is suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets showing artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
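    The core idea above, fitting planes to DVS address events in x-y-t space, can be sketched with an ordinary least-squares plane fit; the synthetic events and function names below are illustrative, not the authors' detection pipeline.

```python
import numpy as np

def fit_event_plane(events):
    """Least-squares fit of a plane t = a*x + b*y + c to DVS address events
    (x, y, t); the gradient (a, b) encodes the line's apparent motion."""
    ev = np.asarray(events, dtype=float)
    A = np.column_stack([ev[:, 0], ev[:, 1], np.ones(len(ev))])
    coeffs, *_ = np.linalg.lstsq(A, ev[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Synthetic events from a vertical edge sweeping rightward at 2 px per time
# unit: the edge sits at x = 2*t, so every event satisfies t = 0.5*x.
events = [(2 * t, y, t) for t in range(5) for y in range(5)]
a, b, c = fit_event_plane(events)
```

    Tracking then amounts to re-fitting the plane as new events arrive, which exploits the sensor's fine temporal resolution instead of frame-to-frame matching.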

  3. Application of artificial intelligence to the management of urological cancer.

    PubMed

    Abbod, Maysam F; Catto, James W F; Linkens, Derek A; Hamdy, Freddie C

    2007-10-01

    Artificial intelligence techniques, such as artificial neural networks, Bayesian belief networks and neuro-fuzzy modeling systems, are complex mathematical models based on human neuronal structure and thinking. Such tools are capable of generating data-driven models of biological systems without making assumptions based on statistical distributions. A large body of research on the use of artificial intelligence in urology has been reported. We reviewed the basic concepts behind artificial intelligence techniques and explored the applications of this new, dynamic technology in various aspects of urological cancer management. A detailed and systematic review of the literature was performed using the MEDLINE and Inspec databases to discover reports using artificial intelligence in urological cancer. The characteristics of machine learning and their implementation were described, and reports of artificial intelligence use in urological cancer were reviewed. While most researchers in this field were found to focus on artificial neural networks to improve the diagnosis, staging and prognostic prediction of urological cancers, some groups are exploring other techniques, such as expert systems and neuro-fuzzy modeling systems. Compared to traditional regression statistics, artificial intelligence methods appear to be accurate and more explorative for analyzing large data cohorts. Furthermore, they allow individualized prediction of disease behavior. Each artificial intelligence method has characteristics that make it suitable for different tasks. The lack of transparency of artificial neural networks hinders acceptance of this method by the global scientific community, but this can be overcome by neuro-fuzzy modeling systems.

  4. Artificial vision by multi-layered neural networks: neocognitron and its advances.

    PubMed

    Fukushima, Kunihiko

    2013-01-01

    The neocognitron is a neural network model proposed by Fukushima (1980). Its architecture was suggested by neurophysiological findings on the visual systems of mammals. It is a hierarchical multi-layered network that acquires the ability to robustly recognize visual patterns through learning. Although the neocognitron has a long history, modifications of the network to improve its performance are still ongoing. For example, a recent neocognitron uses a new learning rule, named add-if-silent, which makes the learning process much simpler and more stable; nevertheless, a high recognition rate can be maintained with a smaller network. Referring to the history of the neocognitron, this paper discusses recent advances. We also show that various new functions can be realized by, for example, introducing top-down connections to the neocognitron: mechanisms of selective attention, recognition and completion of partly occluded patterns, restoration of occluded contours, and so on. Copyright © 2012 Elsevier Ltd. All rights reserved.
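    The add-if-silent rule mentioned above can be sketched as a simple competitive-learning step; the cosine-similarity response and the threshold below are an illustrative simplification, not the exact form used in the neocognitron.

```python
def cosine(u, v):
    """Cosine similarity between a weight vector and an input vector."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm

def add_if_silent(cells, x, threshold=0.9):
    """If no stored cell responds to input x above threshold (all cells are
    'silent'), recruit a new cell whose weights copy x; otherwise leave the
    network unchanged."""
    if all(cosine(w, x) < threshold for w in cells):
        cells.append(list(x))
    return cells

cells = []
add_if_silent(cells, [1.0, 0.0])    # silent network: first cell recruited
add_if_silent(cells, [0.99, 0.05])  # similar input: an existing cell responds
add_if_silent(cells, [0.0, 1.0])    # orthogonal input: second cell recruited
```

    Because cells are added only when none respond, the network grows just enough to cover the training set, which is one reason the rule keeps the network small.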

  5. Death of Darkness: Artificial Sky Brightness in the Anthropocene

    NASA Astrophysics Data System (ADS)

    Zender, C. S.

    2016-12-01

    Many species (including ours) need darkness to survive and thrive, yet light pollution in the Anthropocene has received scant attention in Earth System Models (ESMs). Anthropogenic aerosols can increase background sky brightness and reduce the contrast between skylight and starlight. This raises both aesthetic and health-related concerns because of the accompanying disruption of circadian rhythms. We quantify aerosol contributions to light pollution using a single-column night-sky model, NiteLite, suitable for implementation in ESMs. NiteLite accounts for physiological (photopic and scotopic vision, retinal diameter/age), anthropogenic (light and aerosol pollution properties), and natural (surface albedo, trace gases) effects on background brightness and threshold visibility. We find that the stratospheric aerosol injection contemplated as a stop-gap measure to counter global warming would increase night-sky brightness by about 25%, and thus eliminate the last pristine dark-sky areas on Earth. Our results suggest that ESMs should incorporate light pollution so that the associated societal impacts can be better quantified and included in policy deliberations.

  6. What makes viewpoint-invariant properties perceptually salient?

    PubMed

    Jacobs, David W

    2003-07-01

    It has been noted that many of the perceptually salient image properties identified by the Gestalt psychologists, such as collinearity, parallelism, and good continuation, are invariant to changes in viewpoint. However, I show that viewpoint invariance is not sufficient to distinguish these Gestalt properties; one can define an infinite number of viewpoint-invariant properties that are not perceptually salient. I then show that generally, the perceptually salient viewpoint-invariant properties are minimal, in the sense that they can be derived by using less image information than is needed for nonsalient properties. This finding provides support for the hypothesis that the biological relevance of an image property is determined both by the extent to which it provides information about the world and by the ease with which this property can be computed. [An abbreviated version of this work, including technical details that are avoided in this paper, is contained in K. Boyer and S. Sarkar, eds., Perceptual Organization for Artificial Vision Systems (Kluwer Academic, Dordrecht, The Netherlands, 2000), pp. 121-138.]
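    One viewpoint-invariant property discussed in this literature, collinearity, can be verified with a minimal example; the particular affine map below stands in for a viewpoint change and is an illustrative assumption.

```python
def collinear(p, q, r, tol=1e-9):
    """Three image points are collinear when the (doubled) area of the
    triangle they span vanishes."""
    area2 = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return abs(area2) < tol

def affine(pt, M=((2.0, 0.3), (0.1, 1.5)), t=(4.0, -1.0)):
    """An arbitrary invertible affine map standing in for a viewpoint change."""
    x, y = pt
    return (M[0][0] * x + M[0][1] * y + t[0],
            M[1][0] * x + M[1][1] * y + t[1])

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
before = collinear(*pts)                       # collinear in one view
after = collinear(*[affine(p) for p in pts])   # still collinear after the map
```

    The test is also "minimal" in the paper's sense: it needs only the coordinates of three points, whereas many contrived invariant properties require more image information.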

  7. A screen for constituents of motor control and decision making in Drosophila reveals visual distance-estimation neurons

    PubMed Central

    Triphan, Tilman; Nern, Aljoscha; Roberts, Sonia F.; Korff, Wyatt; Naiman, Daniel Q.; Strauss, Roland

    2016-01-01

    Climbing over chasms larger than step size is vital to fruit flies, since foraging and mating are achieved while walking. Flies avoid futile climbing attempts by processing parallax-motion vision to estimate gap width. To identify neuronal substrates of climbing control, we screened a large collection of fly lines with temporarily inactivated neuronal populations in a novel high-throughput assay described here. The observed climbing phenotypes were classified; lines in each group are reported. Selected lines were further analysed by high-resolution video cinematography. One striking class of flies attempts to climb chasms of unsurmountable width; expression analysis guided us to C2 optic-lobe interneurons. Inactivation of C2 or the closely related C3 neurons with highly specific intersectional driver lines consistently reproduced hyperactive climbing whereas strong or weak artificial depolarization of C2/C3 neurons strongly or mildly decreased climbing frequency. Contrast-manipulation experiments support our conclusion that C2/C3 neurons are part of the distance-evaluation system. PMID:27255169

  8. A binocular approach to treating amblyopia: antisuppression therapy.

    PubMed

    Hess, Robert F; Mansouri, Behzad; Thompson, Benjamin

    2010-09-01

    We developed a binocular treatment for amblyopia based on antisuppression therapy. A novel procedure is outlined for measuring the extent to which the fixing eye suppresses the fellow amblyopic eye. We hypothesize that suppression renders a structurally binocular system functionally monocular. We demonstrate, using three strabismic amblyopes, that information can be combined normally between their eyes under viewing conditions where suppression is reduced. We also show that prolonged periods of viewing (under the artificial conditions of stimuli of different contrast in each eye) during which information from the two eyes is combined lead to a strengthening of binocular vision in such cases and to the eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Furthermore, in each of the three cases, stereoscopic function was established. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  9. [Does music influence visual perception in campimetric measurements of the visual field?].

    PubMed

    Gall, Carolin; Geier, Jens-Stefan; Sabel, Bernhard A; Kasten, Erich

    2009-01-01

    21 subjects (mean age 28.4 +/- 10.9 years, M +/- SD) without any damage to the visual system were examined with computer-based campimetric tests of near-threshold stimulus detection, whereby an artificial tunnel vision was induced. Campimetry was performed in four trials in randomized order using a within-subjects design: 1. classical music, 2. Techno music, 3. relaxation music, and 4. no music. Results were slightly better in all music conditions. Performance was best when subjects were listening to Techno music. The average increase in correctly recognized stimuli and fixation controls amounted to 3%. To check the stability of the effects, 9 subjects were tested three times. A moderating influence of personality traits and music-listening habits was tested for but not found. We conclude that music has at least no negative influence on performance in campimetric measurement. The positive effects of music may be explained by a general increase in vigilance and a modulation of perceptual thresholds.

  10. Energy-efficient lighting system for television

    DOEpatents

    Cawthorne, Duane C.

    1987-07-21

    A light control system for a television camera comprises an artificial light control system that cooperates with an iris control system. The artificial light control system adjusts the power to the lamps illuminating the camera viewing area so that, when the camera iris is substantially open, only the artificial illumination needed to maintain an adequate video signal is provided.
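    The patent's idea, adding lamp power only while the iris is substantially open and only as far as needed for an adequate video signal, can be sketched as one proportional-control step; the gain, threshold, and names below are illustrative assumptions, not the patented circuit.

```python
def adjust_lamp(power, video_signal, target=1.0, iris_fraction=0.95,
                iris_threshold=0.9, gain=0.5):
    """One proportional-control step: move lamp power toward the level that
    restores the target video signal, but only add artificial light when the
    iris is substantially open (i.e. ambient light is insufficient)."""
    if iris_fraction < iris_threshold:
        return power  # iris not substantially open: ambient light suffices
    return max(0.0, power + gain * (target - video_signal))

p = adjust_lamp(0.2, video_signal=0.4)                     # iris open: boost
q = adjust_lamp(0.2, video_signal=0.4, iris_fraction=0.5)  # iris closing: hold
```

    Gating on the iris position is what makes the scheme energy efficient: the lamps are driven only when the iris controller has already exhausted the available ambient light.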

  11. Health System Vision of Iran in 2025

    PubMed Central

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Background: Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision for a long-term horizon is a primary stage. Method: After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. Results: The vision statement in the evolutionary plan of the health system is: “a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region1 and with regard to health in all policies, accountability and innovation”. An explanatory context was also compiled to create a complete image of the vision. Conclusion: Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered major values and ideals in the society of Iran; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system. PMID:23865011

  12. Performance of Single-Use FlexorVue vs Reusable BoaVision Ureteroscope for Visualization of Calices and Stone Extraction in an Artificial Kidney Model.

    PubMed

    Schlager, Daniel; Hein, Simon; Obaid, Moaaz Abdulghani; Wilhelm, Konrad; Miernik, Arkadiusz; Schoenthaler, Martin

    2017-11-01

    To evaluate and compare Flexor® Vue™, a semidisposable endoscopic deflection system with disposable ureteral sheath and reusable visualization source, and a nondisposable fiber optic ureteroscope in a standard in vitro setting. FlexorVue and a reusable fiber optic flexible ureteroscope were each tested in an artificial kidney model. The experimental setup included the visualization of colored pearls and the extraction of calculi with two different extraction devices (NCircle® and NGage®). The procedures were performed by six experienced surgeons. Visualization time, access to calices, successful stone retraction, and time required were recorded. In addition, the surgeons' workload and subjective performance were determined according to the National Aeronautics and Space Administration-task load index (NASA-TLX). We referred to the Likert scale to assess maneuverability, handling, and image quality. Nearly all calices (99%) were correctly identified using the reusable scope, indicating full kidney access, whereas 74% of the calices were visualized using FlexorVue, of which 81% were correctly identified. Access to the lower poles of the kidney model was significantly less likely with the disposable device, and time to completion was significantly longer (755 s vs 153 s, p < 0.001). The stone clearance success rate with the disposable device was 23% using the NGage and 13% using the NCircle basket. Overall NASA-TLX scores were significantly higher using FlexorVue. The conventional reusable device also demonstrated superior maneuverability, handling, and image quality. FlexorVue offers a semidisposable deflecting endoscopic system allowing basic ureteroscopic and cystoscopic procedures. For its use as an addition or replacement for current reusable scopes, it requires substantial technical improvements.

  13. Transformational Spaceport and Range Concept of Operations: A Vision to Transform Ground and Launch Operations

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Transformational Concept of Operations (CONOPS) provides a long-term, sustainable vision for future U.S. space transportation infrastructure and operations. This vision presents an interagency concept, developed cooperatively by the Department of Defense (DoD), the Federal Aviation Administration (FAA), and the National Aeronautics and Space Administration (NASA) for the upgrade, integration, and improved operation of major infrastructure elements of the nation's space access systems. The interagency vision described in the Transformational CONOPS would transform today's space launch infrastructure into a shared system that supports worldwide operations for a variety of users. The system concept is sufficiently flexible and adaptable to support new types of missions for exploration, commercial enterprise, and national security, as well as to endure further into the future when space transportation technology may be sufficiently advanced to enable routine public space travel as part of the global transportation system. The vision for future space transportation operations is based on a system-of-systems architecture that integrates the major elements of the future space transportation system - transportation nodes (spaceports), flight vehicles and payloads, tracking and communications assets, and flight traffic coordination centers - into a transportation network that concurrently accommodates multiple types of mission operators, payloads, and vehicle fleets. This system concept also establishes a common framework for defining a detailed CONOPS for the major elements of the future space transportation system. The resulting set of four CONOPS (see Figure 1 below) describes the common vision for a shared future space transportation system (FSTS) infrastructure from a variety of perspectives.

  14. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  15. [Review of wireless energy transmission system for total artificial heart].

    PubMed

    Zhang, Chi; Yang, Ming

    2009-11-01

    This paper summarizes the fundamental structure of wireless energy transmission systems for total artificial hearts and compares the key parameters and performance of some representative systems. Future development trends of wireless energy transmission systems for total artificial hearts are then discussed.

  16. The genetics of normal and defective color vision

    PubMed Central

    Neitz, Jay; Neitz, Maureen

    2011-01-01

    The contributions of genetics research to the science of normal and defective color vision over the previous few decades are reviewed emphasizing the developments in the 25 years since the last anniversary issue of Vision Research. Understanding of the biology underlying color vision has been vaulted forward through the application of the tools of molecular genetics. For all their complexity, the biological processes responsible for color vision are more accessible than for many other neural systems. This is partly because of the wealth of genetic variations that affect color perception, both within and across species, and because components of the color vision system lend themselves to genetic manipulation. Mutations and rearrangements in the genes encoding the long, middle, and short wavelength sensitive cone pigments are responsible for color vision deficiencies and mutations have been identified that affect the number of cone types, the absorption spectrum of the pigments, the functionality and viability of the cones, and the topography of the cone mosaic. The addition of an opsin gene, as occurred in the evolution of primate color vision, and has been done in experimental animals can produce expanded color vision capacities and this has provided insight into the underlying neural circuitry. PMID:21167193

  17. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain is found to emulate similar graph/network models, which implies an important paradigm shift in our knowledge about the brain: from neural networks to "cortical software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach provides an opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  18. Machine vision system for online inspection of freshly slaughtered chickens

    USDA-ARS?s Scientific Manuscript database

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  19. Trauma-Informed Part C Early Intervention: A Vision, A Challenge, A New Reality

    ERIC Educational Resources Information Center

    Gilkerson, Linda; Graham, Mimi; Harris, Deborah; Oser, Cindy; Clarke, Jane; Hairston-Fuller, Tody C.; Lertora, Jessica

    2013-01-01

    Federal directives require that any child less than 3 years old with a substantiated case of abuse be referred to the early intervention (EI) system. This article details the need and presents a vision for a trauma-informed EI system. The authors describe two exemplary program models which implement this vision and recommend steps which the field…

  20. Colour preferences of UK garden birds at supplementary seed feeders.

    PubMed

    Rothery, Luke; Scott, Graham W; Morrell, Lesley J

    2017-01-01

    Supplementary feeding of garden birds generally has benefits for both bird populations and human wellbeing. Birds have excellent colour vision, and show preferences for food items of particular colours, but research into colour preferences associated with artificial feeders is limited to hummingbirds. Here, we investigated the colour preferences of common UK garden birds foraging at seed-dispensing artificial feeders containing identical food. We presented birds simultaneously with an array of eight differently coloured feeders, and recorded the number of visits made to each colour over 370 30-minute observation periods in the winter of 2014/15. In addition, we surveyed visitors to a garden centre and science festival to determine the colour preferences of likely purchasers of seed feeders. Our results suggest that silver and green feeders were visited by higher numbers of individuals of several common garden bird species, while red and yellow feeders received fewer visits. In contrast, people preferred red, yellow, blue and green feeders. We suggest that green feeders may be simultaneously marketable and attractive to foraging birds.
