Sample records for human vision system

  1. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically-based usability experiments is stressed.

  2. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  3. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
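    The curvature-based decomposition step described in this record can be illustrated with a short sketch (hypothetical code, not the authors' implementation): curvature is estimated along a sampled closed contour by finite differences, and points of high curvature are taken as candidate part boundaries.

```python
import math

def curvature(points):
    """Estimate curvature at each point of a closed contour using central
    finite differences: k = (x'y'' - y'x'') / (x'^2 + y'^2)^1.5."""
    n = len(points)
    ks = []
    for i in range(n):
        x0, y0 = points[(i - 1) % n]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        dx, dy = (x2 - x0) / 2.0, (y2 - y0) / 2.0        # first derivatives
        ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0    # second derivatives
        denom = (dx * dx + dy * dy) ** 1.5 or 1e-12
        ks.append((dx * ddy - dy * ddx) / denom)
    return ks

def part_boundaries(points, thresh):
    """Indices whose |curvature| exceeds a threshold: candidate part cuts."""
    return [i for i, k in enumerate(curvature(points)) if abs(k) > thresh]

# Sanity check on a circle of radius 10: curvature should be ~1/10 everywhere.
circle = [(10 * math.cos(2 * math.pi * i / 100),
           10 * math.sin(2 * math.pi * i / 100)) for i in range(100)]
```

    A real silhouette would first be smoothed by the B-spline fit; the finite-difference step above is only a stand-in for evaluating the spline's analytic derivatives.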

  4. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
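    The mapping from acuity to display resolution sketched in this abstract is simple arithmetic: 20/20 acuity corresponds to resolving roughly one arcminute of visual angle, so a display channel can be characterized by its pixels per arcminute. A hypothetical worked example (the 60-degree, 1920-pixel channel is illustrative only, not a figure from the paper):

```python
def pixels_per_arcmin(fov_deg, horizontal_pixels):
    """Pixels available per arcminute of visual angle across a display channel."""
    return horizontal_pixels / (fov_deg * 60.0)

# Hypothetical simulator channel: 60 deg horizontal FOV at 1920 pixels.
ppa = pixels_per_arcmin(60.0, 1920)

# 20/20 (6/6) acuity resolves ~1 arcminute, so matching it needs at least
# ~1 pixel per arcminute (often quoted as 60 pixels per degree): for a
# 60 deg channel that is 60 deg * 60 px/deg pixels.
required = 60 * 60.0
```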

  5. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  6. Art, Illusion and the Visual System.

    ERIC Educational Resources Information Center

    Livingstone, Margaret S.

    1988-01-01

    Describes the three part system of human vision. Explores the anatomical arrangement of the vision system from the eyes to the brain. Traces the path of various visual signals to their interpretations by the brain. Discusses human visual perception and its implications in art and design. (CW)

  7. Office of Biological and Physical Research: Overview Transitioning to the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Crouch, Roger

    2004-01-01

    Viewgraphs on NASA's transition to its vision for space exploration is presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.

  8. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  9. Helicopter human factors

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.

    1988-01-01

    The state-of-the-art helicopter and its pilot are examined using the tools of human-factors analysis. The significant role of human error in helicopter accidents is discussed; the history of human-factors research on helicopters is briefly traced; the typical flight tasks are described; and the noise, vibration, and temperature conditions typical of modern military helicopters are characterized. Also considered are helicopter controls, cockpit instruments and displays, and the impact of cockpit design on pilot workload. Particular attention is given to possible advanced-technology improvements, such as control stabilization and augmentation, FBW and fly-by-light systems, multifunction displays, night-vision goggles, pilot night-vision systems, night-vision displays with superimposed symbols, target acquisition and designation systems, and aural displays. Diagrams, drawings, and photographs are provided.

  10. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  11. Vision-based navigation in a dynamic environment for virtual human

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu

    2004-06-01

    Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. This behavior model can be divided into three modules: vision, global planning and local planning. Vision is the only channel through which the smart virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a way for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
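    The global planning step can be sketched with a textbook A* implementation over an occupancy grid (an illustrative stand-in for the paper's planner, not its actual code; D* additionally repairs the plan as the environment changes):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = blocked).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        f, g, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # detours around the blocked middle row
```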

  12. Health systems analysis of eye care services in Zambia: evaluating progress towards VISION 2020 goals.

    PubMed

    Bozzani, Fiammetta Maria; Griffiths, Ulla Kou; Blanchet, Karl; Schmidt, Elena

    2014-02-28

    VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress.

  13. Probing the Solar System

    ERIC Educational Resources Information Center

    Wilkinson, John

    2013-01-01

    Humans have always had the vision to one day live on other planets. This vision existed even before the first person was put into orbit. Since the early space missions of putting humans into orbit around Earth, many advances have been made in space technology. We have now sent many space probes deep into the Solar system to explore the planets and…

  14. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  15. The Physics of Colour Vision.

    ERIC Educational Resources Information Center

    Goldman, Martin

    1985-01-01

    An elementary physical model of cone receptor cells is explained and applied to complexities of human color vision. One-, two-, and three-receptor systems are considered, with the latter shown to be the best model for the human eye. Color blindness is also discussed. (DH)

  16. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  17. Health systems analysis of eye care services in Zambia: evaluating progress towards VISION 2020 goals

    PubMed Central

    2014-01-01

    Background VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. Methods All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. Results During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Conclusion Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress. PMID:24575919

  18. Vision

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.

    1973-01-01

    Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.

  19. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises a reduction in care costs, especially in training and hiring human caregivers. The main problem, however, is the variety of sensing agents used in such systems, which depend on the intent (the types of ADLs) and the environment where the activity is performed. In this paper we present an overview of the potential of computer vision based sensing agents in assistive systems and of how they can be generalized and made invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision based human action recognition methods and the design of such systems, due to the cognitive and physical impairments of people with dementia.

  20. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.

  1. Photopic transduction implicated in human circadian entrainment

    NASA Technical Reports Server (NTRS)

    Zeitzer, J. M.; Kronauer, R. E.; Czeisler, C. A.

    1997-01-01

    Despite the preeminence of light as the synchronizer of the circadian timing system, the phototransductive machinery in mammals which transmits photic information from the retina to the hypothalamic circadian pacemaker remains largely undefined. To determine the class of photopigments which this phototransductive system uses, we exposed a group (n = 7) of human subjects to red light below the sensitivity threshold of a scotopic (i.e. rhodopsin/rod-based) system, yet of sufficient strength to activate a photopic (i.e. cone-based) system. Exposure to this light stimulus was sufficient to significantly reset the human circadian pacemaker, indicating that the cone pigments which mediate color vision can also mediate circadian vision.

  2. Helicopter flights with night-vision goggles: Human factors aspects

    NASA Technical Reports Server (NTRS)

    Brickner, Michael S.

    1989-01-01

    Night-vision goggles (NVGs), and in particular the advanced, helmet-mounted Aviator's Night-Vision Imaging System (ANVIS), allow helicopter pilots to perform low-level flight at night. Such a system consists of light intensifier tubes which amplify low-intensity ambient illumination (star and moon light) and an optical system which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena which are described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.

  3. Operator-coached machine vision for space telerobotics

    NASA Technical Reports Server (NTRS)

    Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.

    1991-01-01

    A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system which would demonstrate the feasibility of high interactive operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose for such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
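    The modified Sobel operator mentioned in this record builds on the standard Sobel gradient, which can be sketched as follows (a plain-Python illustration of the standard operator only; the authors' modification and least-squares edge fit are not reproduced here):

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude |G| = sqrt(Gx^2 + Gy^2) for interior pixels;
    border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: strong response along the columns where intensity jumps.
img = [[0, 0, 255, 255] for _ in range(5)]
mag = sobel_magnitude(img)
```

    In the system described above, edge pixels found this way would then be fitted by least squares to the operator-designated line segments.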

  4. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  5. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  6. Classification Objects, Ideal Observers & Generative Models

    ERIC Educational Resources Information Center

    Olman, Cheryl; Kersten, Daniel

    2004-01-01

    A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual…

  7. Development of an Automatic Testing Platform for Aviator's Night Vision Goggle Honeycomb Defect Inspection.

    PubMed

    Jian, Bo-Lin; Peng, Chao-Chung

    2017-06-15

    Due to the direct influence of night vision equipment availability on the safety of night-time aerial reconnaissance, maintenance needs to be carried out regularly. Unfortunately, some defects are not easy to observe or are not even detectable by human eyes. As a consequence, this study proposed a novel automatic defect detection system for aviator's night vision imaging systems AN/AVS-6(V)1 and AN/AVS-6(V)2. An auto-focusing process consisting of a sharpness calculation and a gradient-based variable step search method is applied to achieve an automatic detection system for honeycomb defects. This work also developed a test platform for sharpness measurement. It demonstrates that the honeycomb defects can be precisely recognized and the number of the defects can also be determined automatically during the inspection. Most importantly, the proposed approach significantly reduces the time consumption, as well as human assessment error during the night vision goggle inspection procedures.
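    The gradient-based variable step search can be sketched as a hill climb over focus position that halves its step whenever neither direction improves sharpness (an illustrative model against a synthetic sharpness curve, not the paper's algorithm or hardware interface):

```python
def variable_step_focus_search(sharpness, pos, step=8.0, min_step=0.5):
    """Hill-climb the focus position: move in whichever direction increases
    the sharpness score, halving the step when no move improves it."""
    best = sharpness(pos)
    while step >= min_step:
        moved = False
        for candidate in (pos + step, pos - step):
            s = sharpness(candidate)
            if s > best:
                pos, best, moved = candidate, s, True
                break
        if not moved:
            step /= 2.0   # refine the search around the current peak
    return pos

# Synthetic unimodal sharpness curve peaking at focus position 13; in the
# real system the score would come from a sharpness measure of the image.
peak = 13.0
sharp = lambda p: -(p - peak) ** 2
found = variable_step_focus_search(sharp, pos=0.0)
```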

  8. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human team mates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  9. Bio-inspired approach for intelligent unattended ground sensors

    NASA Astrophysics Data System (ADS)

    Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre

    2015-05-01

    Improving surveillance capacity over wide zones requires a set of smart battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast relevant event detector. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve the recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify in real-time a huge amount of heterogeneous data thanks to its natively parallel hardware structure. This UGS prototype validates our system approach under laboratory tests. The peripheral analysis module demonstrates a low false alarm rate, whereas the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By locally processing the data and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded Artificial Intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
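    The peripheral-vision module's low-level motion analysis can be approximated by simple frame differencing (a deliberately crude sketch; the prototype's tracking and multi-criteria fusion are far richer):

```python
def motion_mask(prev, curr, thresh=20):
    """Flag pixels whose absolute inter-frame difference exceeds a threshold:
    a crude stand-in for the prototype's peripheral motion analysis."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def has_event(mask, min_pixels=3):
    """Raise a 'peripheral' alarm when enough pixels changed; a foveal
    module would then analyze only that region in detail."""
    return sum(map(sum, mask)) >= min_pixels

# Two 6x6 synthetic frames: a small bright "object" appears between them.
prev = [[10] * 6 for _ in range(6)]
curr = [row[:] for row in prev]
for y in (2, 3):
    for x in (2, 3):
        curr[y][x] = 200
alarm = has_event(motion_mask(prev, curr))
```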

  10. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  11. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.

Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.

  12. Drone-Augmented Human Vision: Exocentric Control for Drones Exploring Hidden Areas.

    PubMed

    Erat, Okan; Isop, Werner Alexander; Kalkofen, Denis; Schmalstieg, Dieter

    2018-04-01

    Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., of exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision to provide a natural view into an otherwise occluded environment. The user's view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. This user interface is designed to reduce the cognitive load of the drone's flight control. The user can concentrate on the exploration of the inaccessible space, while flight control is largely delegated to the drone's autopilot system. We assess our system with a first experiment showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.

  13. The role of vision processing in prosthetic vision.

    PubMed

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.

  14. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  15. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.

  16. Different ranking of avian colors predicted by modeling of retinal function in humans and birds.

    PubMed

    Håstad, Olle; Odeen, Anders

    2008-06-01

Only during the past decade have vision-system-neutral methods become common practice in studies of animal color signals. Consequently, much of the current knowledge on sexual selection is based directly or indirectly on human vision, which may or may not emphasize spectral information in a signal differently from the intended receiver. In an attempt to quantify this discrepancy, we used retinal models to test whether human and bird vision rank plumage colors similarly. Of 67 species, human and bird models disagreed in 26 as to which pair of patches in the plumage provides the strongest color contrast or which male in a random pair is the more colorful. These results were only partly attributable to human UV blindness. Despite confirming a strong correlation between avian and human color discrimination, we conclude that a significant proportion of the information in avian visual signals may be lost in translation.

  17. Automatic decoding of facial movements reveals deceptive pain expressions

    PubMed Central

    Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang

    2014-01-01

In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830

  18. Delay effects in the human sensory system during balancing.

    PubMed

    Stepan, Gabor

    2009-03-28

Mechanical models of human self-balancing often use the Newtonian equations of inverted pendula. While these mathematical models are precise enough on the mechanical side, the ways humans balance themselves are still quite unexplored on the control side. Time delays in the sensory and motoric neural pathways give essential limitations to the stabilization of the human body as a multiple inverted pendulum. The sensory systems supporting each other provide the necessary signals for these control tasks; but the more complicated the system is, the larger the delay that is introduced. Human ageing as well as our actual physical and mental state affects the time delays in the neural system, and the mechanical structure of the human body also changes in a large range during our lives. The human balancing organ, the labyrinth, and the vision system essentially adapted to these relatively large time delays and parameter regions occurring during balancing. The analytical study of the simplified large-scale time-delayed models of balancing provides a Newtonian insight into the functioning of these organs that may also serve as a basis to support theories and hypotheses on balancing and vision.

  19. Sustainable and Autonomic Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Sterritt, Roy; Rouff, Christopher; Rash, James L.; Truszkowski, Walter

    2006-01-01

    Visions for future space exploration have long term science missions in sight, resulting in the need for sustainable missions. Survivability is a critical property of sustainable systems and may be addressed through autonomicity, an emerging paradigm for self-management of future computer-based systems based on inspiration from the human autonomic nervous system. This paper examines some of the ongoing research efforts to realize these survivable systems visions, with specific emphasis on developments in Autonomic Policies.

  20. Translating Vision into Design: A Method for Conceptual Design Development

    NASA Technical Reports Server (NTRS)

    Carpenter, Joyce E.

    2003-01-01

    One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.

  1. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

The purpose of this paper is to re-address the vision of human-computer symbiosis as originally expressed by J.C.R. Licklider nearly a half-century ago. We describe this vision, place it in some historical context relating to the evolution of human factors research, and we observe that the field is now in the process of re-invigorating Licklider’s vision. We briefly assess the state of the technology within the context of contemporary theory and practice, and we describe what we regard as this emerging field of neo-symbiosis. We offer some initial thoughts on requirements to define functionality of neo-symbiotic systems and discuss research challenges associated with their development and evaluation.

  2. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
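The simplest of the motion-detection stages mentioned above, frame differencing, marks pixels whose intensity changes by more than a threshold between consecutive frames. A minimal sketch (the threshold and synthetic frames are illustrative, not from the paper):

```python
import numpy as np

# Minimal frame-differencing sketch: pixels whose intensity changes by
# more than a threshold between two frames are flagged as motion.

def motion_mask(prev, curr, threshold=25):
    """Binary motion mask from simple frame differencing."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > threshold

prev = np.full((4, 4), 100, dtype=np.uint8)   # static background
curr = prev.copy()
curr[1:3, 1:3] = 200                          # a "moving object" appears

mask = motion_mask(prev, curr)
print(mask.sum())  # -> 4 changed pixels
```

As the abstract notes, this alone is too noisy for a cluttered outdoor scene, which is why the paper layers statistical tests and an expert system on top of it.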

  3. Smart unattended sensor networks with scene understanding capabilities

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2006-05-01

Unattended sensor systems are new technologies that are supposed to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen in many nodes of a network simultaneously. But the number of control personnel is always limited, and the attention of human operators may be drawn to particular network nodes while a more dangerous threat goes unnoticed in other nodes at the same time. Sensor networks would be more effective if equipped with a system that is similar to human vision in its ability to understand visual information. For this, human vision uses a rough but wide peripheral system that tracks motion and regions of interest, a narrow but precise foveal vision that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object contexts and resolves ambiguity and uncertainty in the visual information. Biologically-inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between Visual and Object Buffers and the top-level knowledge system.

  4. Advancing adverse outcome pathways for integrated toxicology and regulatory applications

    EPA Science Inventory

    Recent regulatory efforts in many countries have focused on a toxicological pathway-based vision for human health assessments relying on in vitro systems and predictive models to generate the toxicological data needed to evaluate chemical hazard. A pathway-based vision is equally...

  5. Shaping future Naval warfare with unmanned systems, the impact across the fleet, and joint considerations

    NASA Astrophysics Data System (ADS)

    Hudson, E. C.; Johnson, Gordon; Summey, Delbert C.; Portmann, Helmut H., Jr.

    2004-09-01

This paper discusses a comprehensive vision for unmanned systems that will shape the future of Naval Warfare within a larger Joint Force concept, and examines the broad impact that can be anticipated across the Fleet. The vision has been articulated from a Naval perspective in NAVSEA technical report CSS/TR-01/09, Shaping the Future of Naval Warfare with Unmanned Systems, and from a Joint perspective in USJFCOM Rapid Assessment Process (RAP) Report #03-10 (Unmanned Effects (UFX): Taking the Human Out of the Loop). Here, the authors build on this foundation by reviewing the major findings and laying out the roadmap for achieving the vision and truly transforming how we fight wars. The focus is on broad impact across the Fleet - but the implications reach across all Joint forces. The term "Unmanned System" means different things to different people. Most think of remotely teleoperated vehicles that perform tasks under human control. Actually, unmanned systems are stand-alone systems that can execute missions and tasks without direct physical manned presence under varying levels of human control - from teleoperation to full autonomy. It is important to note that an unmanned system comprises a lot more than just a vehicle - it includes payloads, command and control, and communications and information processing.

  6. Objective evaluation of the visual acuity in human eyes

    NASA Astrophysics Data System (ADS)

    Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.

    2009-08-01

Traditionally, the quality of the human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes, located at a certain distance of the patient. Typically, we need to ensure a subtended angle of vision of 5 minutes, which implies an object of 8.8 mm high located at 6 meters (normal or 20/20 visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eyes. The mentioned process of identification of characters is carried out by means of the eye - brain system, giving an evaluation of the subjective visual performance. In this work we consider the eye as an isolated image-forming system, and show that it is possible to isolate the function of the eye from that of the brain in this process. By knowing the impulse response of the eye's system we can obtain, in advance, the image of the Snellen chart simultaneously. From this information, we obtain the objective performance of the eye as the optical system under test. This type of results might help to detect anomalous situations of the human vision, like the so called "cerebral myopia".
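The 20/20 geometry quoted above can be checked directly: a letter subtending an angle θ at distance d has height h = 2·d·tan(θ/2), and for θ = 5 arcminutes at d = 6 m this gives about 8.7 mm, consistent with the abstract's rounded 8.8 mm figure.

```python
import math

# Height of a Snellen optotype subtending 5 arcminutes at 6 m:
# h = 2 * d * tan(theta / 2). At such small angles tan(theta) ~ theta,
# so h ~ d * theta as well.

d_mm = 6000.0                       # viewing distance: 6 m in mm
theta = math.radians(5.0 / 60.0)    # 5 arcminutes in radians

h_mm = 2 * d_mm * math.tan(theta / 2)
print(round(h_mm, 1))  # -> 8.7 (the abstract rounds this to 8.8 mm)
```
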

  7. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors systems. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high speed tracking device, communicating with the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  8. The Ontology of Vision. The Invisible, Consciousness of Living Matter

    PubMed Central

    Fiorio, Giorgia

    2016-01-01

If I close my eyes, the absence of light activates the peripheral cells devoted to the perception of darkness. The awareness of “seeing oneself seeing” is in its essence a thought, one that is internal to the vision and previous to any object of sight. To this amphibious faculty, the “diaphanous color of darkness,” Aristotle assigns the principle of knowledge. “Vision is a whole perceptual system, not a channel of sense.” Functions of vision are interwoven with the texture of human interaction within a terrestrial environment that is in turn contained into the cosmic order. A transitive host within the resonance of an inner-outer environment, the human being is the contact-term between two orders of scale, both bigger and smaller than the individual unity. In the perceptual integrative system of human vision, the convergence-divergence of the corporeal presence and the diffraction of its own appearance is the margin. The sensation of being no longer coincides with the breath of life; it does not seem “real” without the trace of some visible evidence and its simultaneous “sharing”. Without a shadow, without an imprint, the numeric copia of the physical presence inhabits the transient memory of our electronic prostheses. A rudimentary “visuality” replaces tangible experience, dissipating its meaning and the awareness of being alive. Transversal to the civilizations of the ancient world, through different orders of function and status, the anthropomorphic “figuration” of archaic sculpture addresses the margin between Being and Non-Being. Statuary human archetypes are not meant to be visible, but to exist as vehicles of transcendence to outlive the definition of human space-time. The awareness of individual finiteness seals the compulsion to “give body” to an invisible apparition shaping the figuration of an ontogenetic expression of human consciousness. Subject and object, the term “humanum” fathoms the relationship between matter and its living dimension, “this de facto vision and the ‘there is’ which it contains.” The project reconsiders the dialectic between the terms vision–presence in the contemporary perception of archaic human statuary according to the transcendent meaning of its immaterial legacy. PMID:27014106

  9. Studying the lower limit of human vision with a single-photon source

    NASA Astrophysics Data System (ADS)

    Holmes, Rebecca; Christensen, Bradley; Street, Whitney; Wang, Ranxiao; Kwiat, Paul

    2015-05-01

    Humans can detect a visual stimulus of just a few photons. Exactly how few is not known--psychological and physiological research have suggested that the detection threshold may be as low as one photon, but the question has never been directly tested. Using a source of heralded single photons based on spontaneous parametric downconversion, we can directly characterize the lower limit of vision. This system can also be used to study temporal and spatial integration in the visual system, and to study visual attention with EEG. We may eventually even be able to investigate how human observers perceive quantum effects such as superposition and entanglement. Our progress and some preliminary results will be discussed.
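A small probabilistic illustration helps frame the detection-threshold question. Assuming, purely for illustration (these numbers and the model are not from the study), that the number of photons absorbed from a dim flash is Poisson distributed with mean p·μ, the probability of at least one absorption is 1 − e^(−p·μ):

```python
import math

# Illustrative Poisson model: a flash delivers a Poisson-distributed
# photon number with mean mu, each photon is absorbed with probability
# p, so absorbed counts are Poisson with mean p * mu. The chance of at
# least one absorption is 1 - exp(-p * mu). Numbers are assumptions.

def p_at_least_one(mu, p_absorb):
    return 1.0 - math.exp(-p_absorb * mu)

print(round(p_at_least_one(mu=10, p_absorb=0.1), 3))  # -> 0.632
```

Heralded single-photon sources matter precisely because they sidestep this Poisson spread: classical dim flashes can never guarantee exactly one photon per trial.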

  10. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  11. Exploration EVA System

    NASA Technical Reports Server (NTRS)

    Kearney, Lara

    2004-01-01

In January 2004, the President announced a new Vision for Space Exploration. NASA's Office of Exploration Systems has identified Extravehicular Activity (EVA) as a critical capability for supporting the Vision for Space Exploration. EVA is required for all phases of the Vision, both in-space and planetary. Supporting the human outside the protective environment of the vehicle or habitat and allowing him/her to perform efficient and effective work requires an integrated EVA "System of systems." The EVA System includes EVA suits, airlocks, tools and mobility aids, and human rovers. At the core of the EVA System is the highly technical EVA suit, which is comprised mainly of a life support system and a pressure/environmental protection garment. The EVA suit, in essence, is a miniature spacecraft, which combines together many different sub-systems such as life support, power, communications, avionics, robotics, pressure systems and thermal systems, into a single autonomous unit. Development of a new EVA suit requires technology advancements similar to those required in the development of a new space vehicle. A majority of the technologies necessary to develop advanced EVA systems are currently at a low Technology Readiness Level of 1-3. This is particularly true for the long-pole technologies of the life support system.

  12. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histogram for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using OpenCV library on Intel compatible notebook computers, running Ubuntu Linux operating system, with less than 8GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
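The RGB-histogram approach mentioned above can be sketched as a simple change detector: raise an alarm when the color histogram of the current frame diverges from a background reference. The bin count, distance metric, threshold, and synthetic frames below are illustrative assumptions, not the published system's parameters.

```python
import numpy as np

# Histogram-based change detection sketch: compare the normalised RGB
# histogram of the current frame against a background reference and
# alarm when the L1 distance exceeds a threshold.

def rgb_histogram(img, bins=8):
    """Normalised per-channel histogram, concatenated into one vector."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def intruder_detected(background, frame, threshold=0.2):
    """Alarm when the histogram distance exceeds the threshold."""
    d = np.abs(rgb_histogram(background) - rgb_histogram(frame)).sum()
    return bool(d > threshold)

rng = np.random.default_rng(0)
background = rng.integers(0, 40, size=(16, 16, 3), dtype=np.uint8)  # dark scene
frame = background.copy()
frame[4:12, 4:12] = 220          # bright "intruder" region appears

print(intruder_detected(background, background))  # -> False
print(intruder_detected(background, frame))       # -> True
```

Because histograms discard spatial layout, this kind of detector is cheap enough for the low-cost hardware the abstract targets, at the price of being unable to localize the intruder within the frame.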

  13. Development of machine-vision system for gap inspection of muskmelon grafted seedlings.

    PubMed

    Liu, Siyao; Xing, Zuochang; Wang, Zifan; Tian, Subo; Jahun, Falalu Rabiu

    2017-01-01

Grafting robots have been developed around the world, but some auxiliary tasks, such as gap inspection for grafted seedlings, still need to be done by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image acquiring system consists of a CCD camera, a lens and a front white lighting source. The image of the inspected gap was processed and analyzed by the software HALCON 12.0. The recognition algorithm of the system is based on the principle of deformable template matching. First, a template is created from an image of a qualified grafted seedling gap. The gap image of a grafted seedling is then compared with this template to determine a matching degree between 0 and 1; the less similar the gap is to the template, the smaller the matching degree. Finally, the gap is output as qualified or unqualified: if the matching degree between the grafted seedling gap and the template is less than 0.58, or no match is found, the gap is judged unqualified; otherwise it is qualified. To test the gap inspection system, 100 muskmelon seedlings were grafted and inspected. Results showed that the machine-vision gap inspection system agreed with human visual inspection on 98% of the gaps, and the inspection speed of the system can reach 15 seedlings·min-1. The gap inspection process in grafting can be fully automated with this developed machine-vision system, and the gap inspection system will be a key step toward fully automatic grafting robots.
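The pass/fail rule in this abstract can be sketched with a simple matching score. HALCON's deformable template matching is far more sophisticated; below, normalized cross-correlation stands in for the matching degree, and only the thresholding logic (score below 0.58, or no match, means unqualified) follows the abstract.

```python
import numpy as np

# Sketch of the qualification rule: compute a matching score between a
# reference template and the inspected gap image, then threshold at
# 0.58 as described in the abstract. Normalised cross-correlation is a
# stand-in for HALCON's deformable template matching.

def matching_degree(template, image):
    """Normalised correlation (in [-1, 1]) for same-size images."""
    t = template.astype(float) - template.mean()
    i = image.astype(float) - image.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(i)
    return 0.0 if denom == 0 else float((t * i).sum() / denom)

def inspect_gap(template, image, threshold=0.58):
    score = matching_degree(template, image)
    return "qualified" if score >= threshold else "unqualified"

template = np.array([[0, 255, 0],
                     [255, 255, 255],
                     [0, 255, 0]], dtype=np.uint8)

good = template.copy()            # near-perfect graft gap
bad = 255 - template              # inverted pattern: poor match

print(inspect_gap(template, good))  # -> qualified
print(inspect_gap(template, bad))   # -> unqualified
```
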

  14. Information-Theoretic Approach May Shed a Light to a Better Understanding and Sustaining the Integrity of Ecological-Societal Systems under Changing Climate

    NASA Astrophysics Data System (ADS)

    Kim, J.

    2016-12-01

Considering high levels of uncertainty, epistemological conflicts over facts and values, and a sense of urgency, normal paradigm-driven science will be insufficient to mobilize people and nations toward sustainability. The conceptual framework to bridge societal system dynamics with that of the natural ecosystems in which humanity operates remains deficient. The key to understanding their coevolution is to understand `self-organization.' An information-theoretic approach may shed light on a potential framework that not only bridges human and natural systems but also generates useful knowledge for understanding and sustaining the integrity of ecological-societal systems. How can information theory help us understand the interface between ecological systems and social systems? How do we delineate self-organizing processes and ensure that they fulfil sustainability? How do we evaluate the flow of information from data through models to decision-makers? These are the core questions posed by sustainability science, in which visioneering (i.e., the engineering of vision) is an essential framework. Yet visioneering has neither a quantitative measure nor an information-theoretic framework to work with and teach. This presentation is an attempt to accommodate the framework of self-organizing hierarchical open systems and visioneering within a common information-theoretic framework. A case study is presented with the UN/FAO's communal vision of climate-smart agriculture (CSA), which pursues a trilemma of efficiency, mitigation, and resilience. Challenges of delineating and facilitating self-organizing systems are discussed using transdisciplinary tools such as complex systems thinking, dynamic process network analysis, and multi-agent systems modeling. Acknowledgments: This study was supported by the Korea Meteorological Administration Research and Development Program under Grant KMA-2012-0001-A (WISE project).
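As a concrete anchor for the information-theoretic framing above, the two most basic quantities involved, Shannon entropy and mutual information, can be computed directly from probability tables. This is a generic illustration, not the presentation's actual formalism.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint probability table,
    a basic measure of the information flowing between two systems."""
    joint = np.asarray(joint, dtype=float)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())
```

For a fair coin, entropy is 1 bit; two independent variables share 0 bits, while two perfectly coupled binary variables share 1 bit.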

  15. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working toward a solution that increases the robustness of this system for generic object recognition, this paper demonstrates an extension of the application by detecting soda cans in a cluttered indoor environment. The human presence detection system uses a data fusion algorithm that combines results from a scanning laser and a thermal imager, and is able to detect the presence of humans while both the humans and the robot are moving simultaneously. Both algorithms were implemented on embedded hardware and optimized for real-time use. Test results are shown for a variety of environments.
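The boosted cascade named in item 1) gains its real-time speed from early rejection: an image window is discarded as soon as any stage's score falls below that stage's threshold, so most negatives cost only a few cheap tests. A minimal sketch of this control flow follows; the toy `stages`, scores, and thresholds are invented for illustration and are not the UCSD detector.

```python
def cascade_classify(window, stages):
    """Attentional cascade: each stage is (score_fn, threshold). A window
    must pass every stage in sequence; failing any one stage rejects it
    immediately, which is what makes cascade detectors fast."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False        # early rejection: stop evaluating
    return True                 # survived all stages -> detection

# Toy stages operating on a scalar "window" score (hypothetical thresholds).
stages = [(lambda w: w, 0.2), (lambda w: w * w, 0.5)]
```

In a real detector each `score_fn` would be a boosted sum of weak classifiers over image features, with later stages progressively more expensive and more selective.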

  16. Human vision is attuned to the diffuseness of natural light

    PubMed Central

    Morgenstern, Yaniv; Geisler, Wilson S.; Murray, Richard F.

    2014-01-01

    All images are highly ambiguous, and to perceive 3-D scenes, the human visual system relies on assumptions about what lighting conditions are most probable. Here we show that human observers' assumptions about lighting diffuseness are well matched to the diffuseness of lighting in real-world scenes. We use a novel multidirectional photometer to measure lighting in hundreds of environments, and we find that the diffuseness of natural lighting falls in the same range as previous psychophysical estimates of the visual system's assumptions about diffuseness. We also find that natural lighting is typically directional enough to override human observers' assumption that light comes from above. Furthermore, we find that, although human performance on some tasks is worse in diffuse light, this can be largely accounted for by intrinsic task difficulty. These findings suggest that human vision is attuned to the diffuseness levels of natural lighting conditions. PMID:25139864

  17. Visions of human futures in space and SETI

    NASA Astrophysics Data System (ADS)

    Wright, Jason T.; Oman-Reagan, Michael P.

    2018-04-01

    We discuss how visions for the futures of humanity in space and SETI are intertwined, and are shaped by prior work in the fields and by science fiction. This appears in the language used in the fields, and in the sometimes implicit assumptions made in discussions of them. We give examples from articulations of the so-called Fermi Paradox, discussions of the settlement of the Solar System (in the near future) and the Galaxy (in the far future), and METI. We argue that science fiction, especially the campy variety, is a significant contributor to the `giggle factor' that hinders serious discussion and funding for SETI and Solar System settlement projects. We argue that humanity's long-term future in space will be shaped by our short-term visions for who goes there and how. Because of the way they entered the fields, we recommend avoiding the term `colony' and its cognates when discussing the settlement of space, as well as other terms with similar pedigrees. We offer examples of science fiction and other writing that broaden and challenge our visions of human futures in space and SETI. In an appendix, we use an analogy with the well-funded and relatively uncontroversial searches for the dark matter particle to argue that SETI's lack of funding in the national science portfolio is primarily a problem of perception, not inherent merit.

  18. TOXICITY TESTING IN THE 21ST CENTURY: A VISION AND A STRATEGY

    PubMed Central

    Krewski, Daniel; Acosta, Daniel; Andersen, Melvin; Anderson, Henry; Bailar, John C.; Boekelheide, Kim; Brent, Robert; Charnley, Gail; Cheung, Vivian G.; Green, Sidney; Kelsey, Karl T.; Kerkvliet, Nancy I.; Li, Abby A.; McCray, Lawrence; Meyer, Otto; Patterson, Reid D.; Pennie, William; Scala, Robert A.; Solomon, Gina M.; Stephens, Martin; Yager, James; Zeise, Lauren

    2015-01-01

With the release of the landmark report Toxicity Testing in the 21st Century: A Vision and a Strategy, the U.S. National Academy of Sciences, in 2007, precipitated a major change in the way toxicity testing is conducted. The report envisions increased efficiency in toxicity testing and decreased animal usage by transitioning from current expensive and lengthy in vivo testing with qualitative endpoints to in vitro toxicity pathway assays on human cells or cell lines, using robotic high-throughput screening with mechanistic quantitative parameters. Risk assessment in the exposed human population would focus on avoiding significant perturbations in these toxicity pathways. Computational systems biology models would be implemented to determine the dose-response models of perturbations of pathway function. Extrapolation of in vitro results to in vivo human blood and tissue concentrations would be based on pharmacokinetic models for the given exposure condition. This practice would enhance the human relevance of test results and would cover a broader range of test agents than traditional toxicological testing strategies. As all the tools necessary to implement the vision are currently available or in an advanced stage of development, the key prerequisites to achieving this paradigm shift are a commitment to change in the scientific community, which could be facilitated by a broad discussion of the vision, and obtaining the necessary resources to enhance current knowledge of pathway perturbations and pathway assays in humans and to implement computational systems biology models. Implementation of these strategies would result in a new toxicity testing paradigm firmly based on human biology. PMID:20574894

  19. Using Videos Derived from Simulations to Support the Analysis of Spatial Awareness in Synthetic Vision Displays

    NASA Technical Reports Server (NTRS)

    Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.

    2006-01-01

    The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.

  20. Health system vision of iran in 2025.

    PubMed

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

Vast changes in disease features and risk factors, together with the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its context was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1) and with regard to health in all policies, accountability and innovation". An explanatory context was also compiled to create a complete image of the vision. Social values and leaders' strategic goals, as well as the main orientations, are generally mentioned in a vision statement. In this statement, prosperity and justice are considered the major values and ideals in the society of Iran; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system.

  1. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  2. Machine vision method for online surface inspection of easy open can ends

    NASA Astrophysics Data System (ADS)

    Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel

    2006-10-01

The easy-open can end manufacturing process in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is a visual inspection made by human inspectors. Because of the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling). An automatic, online inspection system based on machine vision has therefore been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. This surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, offers interpretability to the operators so that failure causes can be found and mean time to repair reduced during failures, and allows the minimum can end repair coating quality to be modified.
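The abstract does not specify the fuzzy model itself; a common minimal form is a Mamdani-style rule evaluated with min as the fuzzy AND, which is what gives operators an interpretable decision. The sketch below uses invented membership parameters for "high coverage" and "small defect area", purely for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def coating_decision(coverage, defect_area):
    """Toy rule: ACCEPT when coverage is 'high' AND defect area is 'small'.
    Membership parameters are hypothetical, not the deployed system's."""
    high_cov = tri(coverage, 0.6, 1.0, 1.4)        # peaks at full coverage
    small_def = tri(defect_area, -0.4, 0.0, 0.4)   # peaks at zero defects
    accept = min(high_cov, small_def)              # fuzzy AND
    reject = 1.0 - accept
    return ("accept" if accept >= reject else "reject", accept)
```

Because the output carries the rule activations, an operator can see *why* an end was rejected (low coverage vs. large defect), which is the interpretability the abstract emphasizes.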

  3. Playing to Your Strengths: Appreciative Inquiry in the Visioning Process

    ERIC Educational Resources Information Center

    Fifolt, Matthew; Stowe, Angela M.

    2011-01-01

    "Appreciative Inquiry" (AI) is a structured approach to visioning focused on reflection, introspection, and collaboration. Rooted in organizational behavior theory, AI was introduced in the early 1980s as a life-centric approach to human systems (Watkins and Mohr 2001). Since then, AI has been used widely within the business community;…

  4. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system calculates the color characteristics of an object and then performs color segmentation. To avoid erroneous action judgments, the system uses a weight voting mechanism, which assigns a condition score and weight value to each candidate action judgment and selects the best-supported judgment. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
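One plausible reading of the weight voting mechanism is a weighted sum of condition scores per candidate action, with the highest total winning, so that no single noisy cue can force a wrong judgment. The helper below is a hedged sketch of that idea; the condition functions, weights, and action names are illustrative, not the paper's.

```python
def weighted_vote(candidate_actions, conditions):
    """Each condition is (weight, score_fn); score_fn rates a candidate
    action on [0, 1]. The action with the highest weighted total wins."""
    def total(action):
        return sum(w * score(action) for w, score in conditions)
    return max(candidate_actions, key=total)

# Hypothetical example: two cues disagree, but the heavier-weighted cue wins.
conditions = [
    (0.7, lambda a: 1.0 if a == "raise_arm" else 0.0),  # strong cue
    (0.3, lambda a: 1.0 if a == "lower_arm" else 0.0),  # weak, conflicting cue
]
```

Here `weighted_vote(["raise_arm", "lower_arm"], conditions)` resolves the conflict in favor of the cue with the larger weight.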

  5. 3D vision upgrade kit for the TALON robot system

    NASA Astrophysics Data System (ADS)

    Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-02-01

    In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.

  6. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), the histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems, based on feature extraction from visible-light and thermal camera videos through a CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  7. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), the histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems, based on feature extraction from visible-light and thermal camera videos through a CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  8. Human Systems Integration at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    McCandless, Jeffrey

    2017-01-01

    The Human Systems Integration Division focuses on the design and operations of complex aerospace systems through analysis, experimentation and modeling. With over a dozen labs and over 120 people, the division conducts research to improve safety, efficiency and mission success. Areas of investigation include applied vision research which will be discussed during this seminar.

  9. Automated measurement of human body shape and curvature using computer vision

    NASA Astrophysics Data System (ADS)

    Pearson, Jeremy D.; Hobson, Clifford A.; Dangerfield, Peter H.

    1993-06-01

    A system to measure the surface shape of the human body has been constructed. The system uses a fringe pattern generated by projection of multi-stripe structured light. The optical methodology used is fully described and the algorithms used to process acquired digital images are outlined. The system has been applied to the measurement of the shape of the human back in scoliosis.

  10. Semiautonomous teleoperation system with vision guidance

    NASA Astrophysics Data System (ADS)

    Yu, Wai; Pretlove, John R. G.

    1998-12-01

This paper describes ongoing research on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. This system has been developed for that purpose. It also serves as an experimental platform to test the idea of combining human and computer intelligence in teleoperation and to find the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system, and a graphical user interface that connects the operator with the remote robot. A description of the system is given in this paper, together with preliminary experimental results of the system evaluation.

  11. An early underwater artificial vision model in ocean investigations via independent component analysis.

    PubMed

    Nian, Rui; Liu, Fang; He, Bo

    2013-07-16

Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework is established to explore and understand the functional roles of higher-order statistical structures in the visual stimulus for an underwater artificial vision system. The model is inspired by characteristics of the early human visual system such as modality, redundancy reduction, sparseness, and independence, and its multiple layer implementations capture, respectively, Gabor-like basis functions, shape contours, and complicated textures. The simulation results show good performance, in terms of both effectiveness and consistency, of the proposed approach on underwater images collected by autonomous underwater vehicles (AUVs).
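The core computation in an ICA layer is the fixed-point FastICA update on whitened data. The numpy sketch below shows a generic textbook one-unit version with the tanh nonlinearity; the shapes and parameters are hypothetical and this is not the authors' exact hierarchical model.

```python
import numpy as np

def whiten(X):
    """Center and ZCA-whiten so rows are uncorrelated with unit variance.
    X has shape (n_features, n_samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    return (E / np.sqrt(d)) @ E.T @ X

def fastica_one_unit(X, n_iter=200, seed=0):
    """One-unit FastICA fixed-point iteration (tanh contrast) on whitened
    data X; returns a unit-norm unmixing vector w, so w @ X is one
    estimated independent component."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = np.tanh(w @ X)
        w_new = (X * wx).mean(axis=1) - (1.0 - wx ** 2).mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-10:   # converged up to sign
            return w_new
        w = w_new
    return w
```

Stacking such units (with deflation to keep them orthogonal) and feeding each layer's outputs to the next is one way hierarchical ICA models are typically built.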

  12. An Early Underwater Artificial Vision Model in Ocean Investigations via Independent Component Analysis

    PubMed Central

    Nian, Rui; Liu, Fang; He, Bo

    2013-01-01

Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework is established to explore and understand the functional roles of higher-order statistical structures in the visual stimulus for an underwater artificial vision system. The model is inspired by characteristics of the early human visual system such as modality, redundancy reduction, sparseness, and independence, and its multiple layer implementations capture, respectively, Gabor-like basis functions, shape contours, and complicated textures. The simulation results show good performance, in terms of both effectiveness and consistency, of the proposed approach on underwater images collected by autonomous underwater vehicles (AUVs). PMID:23863855

  13. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
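Per channel, the convolutional feedforward computation these models share with the primate visual hierarchy reduces to a sliding-window correlation followed by a nonlinearity. A minimal numpy sketch follows; the hand-designed edge kernel stands in for the oriented filters that both V1 simple cells and trained first CNN layers tend to exhibit.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D 'valid' cross-correlation, the core operation of a
    convolutional layer (no padding, stride 1)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Half-wave rectification, the standard CNN nonlinearity."""
    return np.maximum(x, 0.0)

# Illustrative hand-designed kernel responding to dark-to-bright vertical edges.
edge_kernel = np.array([[-1.0, 1.0]])
```

Applied to an image with a vertical step, `relu(conv2d_valid(img, edge_kernel))` fires only at the edge location; in a trained network such kernels are learned rather than hand-set.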

  14. Computer vision in cell biology.

    PubMed

    Danuser, Gaudenz

    2011-11-23

    Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost-effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  16. Design of an efficient framework for fast prototyping of customized human-computer interfaces and virtual environments for rehabilitation.

    PubMed

    Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe

    2013-06-01

Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Bionic Vision-Based Intelligent Power Line Inspection System

    PubMed Central

    Ma, Yunpeng; He, Feijia; Xu, Jinxin

    2017-01-01

Detecting threats posed by external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of power lines, and the binocular visual model is used to calculate the 3D coordinates of obstacles and power lines. To improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to automatically and accurately locate the position of obstacles around power lines, and that the designed power line inspection system is effective in complex backgrounds, with no missed detections under different conditions. PMID:28203269
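The abstract does not detail the improved matching strategy, but the standard baseline such strategies build on is nearest-neighbour descriptor matching with Lowe's ratio test, which discards ambiguous correspondences along repetitive structures such as parallel power lines. A sketch on plain numpy arrays (the descriptors here are toy stand-ins, not real SURF vectors):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.7):
    """Nearest-neighbour matching with Lowe's ratio test: keep a match
    only when the best distance is clearly better than the second best.
    desc_a, desc_b: arrays of descriptor row vectors."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]           # two closest candidates
        if dists[j] < ratio * dists[k]:        # unambiguous winner only
            matches.append((i, int(j)))
    return matches
```

A descriptor with one clear nearest neighbour is matched; one that is nearly equidistant from two candidates (as happens on repeated line textures) is dropped, which is exactly where a refined strategy would aim to recover correct matches.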

  18. The Visual System of Zebrafish and its Use to Model Human Ocular Diseases

    PubMed Central

    Gestri, Gaia; Link, Brian A; Neuhauss, Stephan CF

    2011-01-01

    Free swimming zebrafish larvae depend mainly on their sense of vision to evade predation and to catch prey. Hence there is strong selective pressure on the fast maturation of visual function and indeed the visual system already supports a number of visually-driven behaviors in the newly hatched larvae. The ability to exploit the genetic and embryonic accessibility of the zebrafish in combination with a behavioral assessment of visual system function has made the zebrafish a popular model to study vision and its diseases. Here, we review the anatomy, physiology and development of the zebrafish eye as the basis to relate the contributions of the zebrafish to our understanding of human ocular diseases. PMID:21595048

  19. Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System

    PubMed Central

    Ajina, Sara; Bridge, Holly

    2017-01-01

    Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337

  20. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

In this paper, we propose the design of an active vision system for intelligent robot applications. The system has degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets is suggested, using an evaluation function that represents human visual behavior toward outside stimuli. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method using binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
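Phase-based disparity estimation can be illustrated in one dimension with phase correlation: the shift between the two cameras' signals appears as a peak in the inverse FFT of the normalized cross-power spectrum. This is a minimal sketch for integer circular shifts, not the paper's exact method.

```python
import numpy as np

def phase_shift_1d(a, b):
    """Estimate the integer shift between two 1-D signals via phase
    correlation: normalize the cross-power spectrum to keep phase only,
    then locate the impulse in its inverse FFT."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12          # unit magnitude: phase only
    corr = np.fft.ifft(R).real
    shift = int(np.argmax(corr))    # impulse location = shift
    n = len(a)
    return shift - n if shift > n // 2 else shift  # wrap to signed shift
```

In a vergence loop, the estimated disparity between the left and right image signals would drive the vergence motors until the shift at the fixation point is nulled.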

  1. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    PubMed Central

    Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul

    2012-01-01

Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Answering this question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews the rehabilitation systems for gait disorders that use vision-based and non-vision-based sensor technologies, as well as combinations of the two. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non-vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Synonyms of these phrases and the logical operators “and”, “or”, and “not” were also used in the search procedure. Out of the 91 published articles found, this review identified 84 that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548

  2. Health System Vision of Iran in 2025

    PubMed Central

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

Background: Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system's vision over a long-term horizon is a primary stage. Method: After a narrative and purposeful review of relevant documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. Results: The vision statement in the evolutionary plan of the health system is considered to be: “a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region and with regard to health in all policies, accountability and innovation”. An explanatory text was also compiled to create a complete image of the vision. Conclusion: Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered the major values and ideals of Iranian society; development and excellence in the region are the leaders' strategic goals; and efficiency and equity, health in all policies, and accountability and innovation are the main orientations of the health system. PMID:23865011

  3. Distant touch hydrodynamic imaging with an artificial lateral line.

    PubMed

    Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang

    2006-12-12

    Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.

  4. Data acquisition and analysis of range-finding systems for spacing construction

    NASA Technical Reports Server (NTRS)

    Shen, C. N.

    1981-01-01

For space missions of the future, completely autonomous robotic machines will be required to free astronauts from routine chores of equipment maintenance, servicing of faulty systems, etc., and to extend human capabilities in hazardous environments full of cosmic and other harmful radiation. In places with high radiation and uncontrollable ambient illumination, TV-camera-based vision systems cannot work effectively. However, a vision system utilizing range information measured directly with a time-of-flight laser rangefinder can successfully operate in these environments. Such a system will be independent of illumination conditions, and the interfering effects of intense radiation of all kinds will be eliminated by the tuned input of the laser instrument. By processing the range data with decision, stochastic-estimation, and heuristic schemes, the laser-based vision system will recognize known objects and thus provide sufficient information to the robot's control system, which can develop strategies for various objectives.

  5. Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis

    PubMed Central

    Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan

    2015-01-01

Remote monitoring services for elderly persons are important as the aged populations in most developed countries continue to grow. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for elderly persons, and (2) we design a novel algorithm based on motion history or energy images for moving-object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real time. Experimental results show that our technique can improve data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate reaches 98.6% on average. PMID:25978761
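The motion-history-image idea named in contribution (2) can be sketched in a few lines. This follows the classic Bobick-Davis MHI formulation, which the paper's tracker builds on; the threshold and decay values below are illustrative, not the authors':

```python
import numpy as np

def update_mhi(mhi, frame, prev, tau=10, thresh=30):
    """One update step of a motion history image (MHI).

    Pixels whose intensity changed by more than `thresh` between frames
    are stamped with the maximum age `tau`; all other pixels decay by 1,
    so bright regions of the MHI trace out recent motion.
    """
    motion = np.abs(frame.astype(int) - prev.astype(int)) > thresh
    mhi = np.where(motion, tau, np.maximum(mhi - 1, 0))
    return mhi
```

Iterating this over a video stream yields a single grayscale image whose intensity gradient encodes the direction and recency of movement, which a tracker or posture classifier can then consume.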

  6. An Automated Classification Technique for Detecting Defects in Battery Cells

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2006-01-01

    Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is a time consuming task when compared to an inspection process conducted by a machine vision system. Human inspection is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view as well as classified as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
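A Fourier-domain feature of the kind such a classifier could use can be sketched as radial spectral-energy bins. This is a hedged illustration assuming NumPy; the abstract does not specify the adaptive 2-D FFT technique, so the ring-energy feature below is a generic stand-in:

```python
import numpy as np

def fft_features(patch, n_rings=4):
    """Spectral-energy features from the 2-D FFT of an image patch.

    Sums the magnitude spectrum over concentric frequency rings: smooth
    (defect-free) regions concentrate energy in the low-frequency rings,
    while texture defects push energy into the outer rings.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch.astype(float))))
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)            # distance from DC
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    return np.array([spec[(r >= lo) & (r < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

A simple classifier could then compare these feature vectors against per-class reference statistics for the anode and cathode views.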

  7. NASA Strategic Roadmap Summary Report

    NASA Technical Reports Server (NTRS)

    Wilson, Scott; Bauer, Frank; Stetson, Doug; Robey, Judee; Smith, Eric P.; Capps, Rich; Gould, Dana; Tanner, Mike; Guerra, Lisa; Johnston, Gordon

    2005-01-01

In response to the Vision, NASA commissioned strategic and capability roadmap teams to develop the pathways for turning the Vision into a reality. The strategic roadmaps were derived from the Vision for Space Exploration and the Aldrich Commission Report dated June 2004. NASA identified 12 strategic areas for roadmapping. The Agency added a thirteenth area on nuclear systems because the topic affects the entire program portfolio. To ensure long-term public visibility and engagement, NASA established a committee for each of the 13 areas. These committees - made up of prominent members of the scientific and aerospace industry communities and senior government personnel - worked under the Federal Advisory Committee Act. A committee was formed for each of the following program areas: 1) Robotic and Human Lunar Exploration; 2) Robotic and Human Exploration of Mars; 3) Solar System Exploration; 4) Search for Earth-Like Planets; 5) Exploration Transportation System; 6) International Space Station; 7) Space Shuttle; 8) Universe Exploration; 9) Earth Science and Applications from Space; 10) Sun-Solar System Connection; 11) Aeronautical Technologies; 12) Education; 13) Nuclear Systems. This document contains roadmap summaries for 10 of these 13 program areas; the International Space Station, Space Shuttle, and Education areas are excluded. The completed roadmaps for the following committees: Robotic and Human Exploration of Mars; Solar System Exploration; Search for Earth-Like Planets; Universe Exploration; Earth Science and Applications from Space; and Sun-Solar System Connection are collected in a separate Strategic Roadmaps volume. This document also contains membership rosters and charters for all 13 committees.

  8. Background staining of visualization systems in immunohistochemistry: comparison of the Avidin-Biotin Complex system and the EnVision+ system.

    PubMed

    Vosse, Bettine A H; Seelentag, Walter; Bachmann, Astrid; Bosman, Fred T; Yan, Pu

    2007-03-01

    The aim of this study was to evaluate specific immunostaining and background staining in formalin-fixed, paraffin-embedded human tissues with the 2 most frequently used immunohistochemical detection systems, Avidin-Biotin-Peroxidase (ABC) and EnVision+. A series of fixed tissues, including breast, colon, kidney, larynx, liver, lung, ovary, pancreas, prostate, stomach, and tonsil, was used in the study. Three monoclonal antibodies, 1 against a nuclear antigen (Ki-67), 1 against a cytoplasmic antigen (cytokeratin), and 1 against a cytoplasmic and membrane-associated antigen and a polyclonal antibody against a nuclear and cytoplasmic antigen (S-100) were selected for these studies. When the ABC system was applied, immunostaining was performed with and without blocking of endogenous avidin-binding activity. The intensity of specific immunostaining and the percentage of stained cells were comparable for the 2 detection systems. The use of ABC caused widespread cytoplasmic and rare nuclear background staining in a variety of normal and tumor cells. A very strong background staining was observed in colon, gastric mucosa, liver, and kidney. Blocking avidin-binding capacity reduced background staining, but complete blocking was difficult to attain. With the EnVision+ system no background staining occurred. Given the efficiency of the detection, equal for both systems or higher with EnVision+, and the significant background problem with ABC, we advocate the routine use of the EnVision+ system.

  9. A digital retina-like low-level vision processor.

    PubMed

    Mertoguno, S; Bourbakis, N G

    2003-01-01

This correspondence presents the basic design and the simulation of a low-level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, each of which is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition, and region-graph generation. At each layer, the array processor is a 2D array of k×m identical, autonomous hexagonal cells that simultaneously execute certain low-level vision tasks. The hardware design and transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples, are provided in this paper.
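The first two layers of such a pipeline (smoothing, then edge detection) can be sketched with plain convolutions. This is an illustration only, assuming NumPy: the paper's hexagonal array processors are approximated here by square-grid kernels, and all names are ours:

```python
import numpy as np

def convolve_same(img, kernel):
    """Minimal 'same'-size 2-D convolution with zero padding (NumPy only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

SMOOTH = np.ones((3, 3)) / 9.0              # layer 1: smoothing / adaptation
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], float)    # layer 2: edge detection

def retina_layers(img):
    """Apply the two lowest retina-like layers and return both outputs."""
    smoothed = convolve_same(img.astype(float), SMOOTH)
    edges = np.abs(convolve_same(smoothed, LAPLACIAN))
    return smoothed, edges
```

Subsequent layers (segmentation, line recognition, region graphs) would consume the edge map produced here.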

  10. A Simplified Method of Identifying the Trained Retinal Locus for Training in Eccentric Viewing

    ERIC Educational Resources Information Center

    Vukicevic, Meri; Le, Anh; Baglin, James

    2012-01-01

    In the typical human visual system, the macula allows for high visual resolution. Damage to this area from diseases, such as age-related macular degeneration (AMD), causes the loss of central vision in the form of a central scotoma. Since no treatment is available to reverse AMD, providing low vision rehabilitation to compensate for the loss of…

  11. Computing Visible-Surface Representations,

    DTIC Science & Technology

    1985-03-01

Terzopoulos, D.; Artificial Intelligence Laboratory, Massachusetts Institute of Technology (contract N00014-75-C-0643). Support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Projects Agency. …dynamically maintaining visible-surface representations. Whether the intention is to model human vision or to design competent artificial vision systems…

  12. Vision Algorithms Catch Defects in Screen Displays

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.

  13. On the efficacy of cinema, or what the visual system did not evolve to do

    NASA Technical Reports Server (NTRS)

    Cutting, James E.

    1989-01-01

    Spatial displays, and a constraint that they do not place on the use of spatial instruments are discussed. Much of the work done in visual perception by psychologists and by computer scientists has concerned displays that show the motion of rigid objects. Typically, if one assumes that objects are rigid, one can then proceed to understand how the constant shape of the object can be perceived (or computed) as it moves through space. The author maintains that photographs and cinema are visual displays that are also powerful forms of art. Their efficacy, in part, stems from the fact that, although viewpoint is constrained when composing them, it is not nearly so constrained when viewing them. It is obvious, according to the author, that human visual systems did not evolve to watch movies or look at photographs. Thus, what photographs and movies present must be allowed in the rule-governed system under which vision evolved. Machine-vision algorithms, to be applicable to human vision, should show the same types of tolerance.

  14. Flight Testing of Night Vision Systems in Rotorcraft (Test en vol de systemes de vision nocturne a bord des aeronefs a voilure tournante)

    DTIC Science & Technology

    2007-07-01

[Front matter and table-of-contents residue; listed topics include daylight and night-time readability, NVIS radiance, human factors analysis, and flight tests.] …position is shadowing. Moonlight creates shadows during night-time just as sunlight does during the day. Understanding what cannot be seen in night-time…

15. Use of a vision model to quantify the significance of factors affecting target conspicuity

    NASA Astrophysics Data System (ADS)

    Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.

    2006-05-01

    When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.
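The core comparison the model performs, detecting contrast differences per spatial-frequency channel, can be sketched with annular frequency bands. This is a generic illustration assuming NumPy, not the authors' implementation; band count and band shapes are our assumptions:

```python
import numpy as np

def band_contrast_diff(img_a, img_b, n_bands=3):
    """Per-spatial-frequency-band contrast difference between two images.

    Decomposes each image into annular frequency bands via FFT masking
    and reports the RMS difference of the band-limited images, i.e. how
    much the two scenes differ within each frequency channel.
    """
    h, w = img_a.shape
    Fa = np.fft.fft2(img_a.astype(float))
    Fb = np.fft.fft2(img_b.astype(float))
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.hypot(fy, fx)                       # radial frequency, cycles/pixel
    edges = np.linspace(0, r.max() + 1e-9, n_bands + 1)
    diffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        a = np.real(np.fft.ifft2(Fa * mask))   # band-limited versions
        b = np.real(np.fft.ifft2(Fb * mask))
        diffs.append(np.sqrt(np.mean((a - b) ** 2)))
    return np.array(diffs)
```

Comparing a target-present and target-absent rendering of the same scene this way indicates which channels carry the conspicuity cue.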

  16. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context

    PubMed Central

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-01-01

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. PMID:24854209

  17. A vision-based system for intelligent monitoring: human behaviour analysis and privacy by context.

    PubMed

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-05-20

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.
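The weighted feature-fusion step named above can be reduced to a small sketch. The paper learns its weights from multi-view data; the abstract does not give the scheme, so the fixed-weight convex combination below is only a minimal stand-in assuming NumPy:

```python
import numpy as np

def fuse_views(features, weights):
    """Weighted fusion of per-view feature vectors.

    Each camera view contributes its feature vector in proportion to a
    reliability weight; weights are normalized so the fused vector is a
    convex combination of the views.
    """
    features = np.asarray(features, float)   # shape: (views, dims)
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()        # normalize to sum to 1
    return weights @ features
```

In a learned scheme the weights would be fit per action class, e.g. down-weighting views that are frequently occluded.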

  18. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on these principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The ability of the human brain to emulate similar graph/network models has been observed. This implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. The primary areas provide active fusion of image features on a spatial grid-like structure, where the nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them amenable to analysis by higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotic and computer vision industries.

  19. Head pose estimation in computer vision: a survey.

    PubMed

    Murphy-Chutorian, Erik; Trivedi, Mohan Manubhai

    2009-04-01

    The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.

  20. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
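The local-frequency-coding mechanisms central to aims (1) and (2) are conventionally modeled with Gabor channels. The sketch below illustrates that idea assuming NumPy; the parameter values and function names are ours, not the project's:

```python
import numpy as np

def gabor_kernel(size, freq, theta=0.0, sigma=2.0):
    """2-D Gabor kernel: a Gaussian-windowed cosine grating.

    `freq` is in cycles/pixel and `theta` is the orientation in radians.
    The mean is removed so the channel ignores uniform luminance.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    k = envelope * np.cos(2.0 * np.pi * freq * xr)
    return k - k.mean()

def channel_energy(patch, freq, theta=0.0, sigma=2.0):
    """Squared response of one local-frequency channel to an image patch."""
    k = gabor_kernel(patch.shape[0], freq, theta, sigma)
    return float(np.sum(k * patch) ** 2)
```

A bank of such channels over frequencies and orientations gives the local-frequency representation on which the project's image models and algorithms operate.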

  1. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach couples symbolic decision procedures with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
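The perturbation-robustness methodology can be sketched as a sweep that checks whether visually negligible noise flips a detector's output. This is a toy illustration assuming NumPy: the threshold detector below stands in for OpenCV's person detector, and random sampling stands in for the paper's symbolic decision-procedure search:

```python
import numpy as np

def toy_detector(frame, thresh=128):
    """Stand-in detector (NOT OpenCV's HOG person detector): fires when
    any pixel exceeds a brightness threshold."""
    return bool((frame > thresh).any())

def robustness_sweep(frame, detector, eps_values, seed=0):
    """Record whether bounded perturbations change the detector's verdict.

    For each magnitude eps, add integer noise in [-eps, eps] to the frame
    and compare the detection against the unperturbed baseline.
    """
    rng = np.random.default_rng(seed)
    baseline = detector(frame)
    flips = []
    for eps in eps_values:
        noise = rng.integers(-eps, eps + 1, size=frame.shape)
        perturbed = np.clip(frame.astype(int) + noise, 0, 255)
        flips.append(detector(perturbed) != baseline)
    return baseline, flips
```

A flip at a small eps is exactly the kind of fragility the paper reports: the perturbed frame looks identical to the eye, yet the verdict changes.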

  2. Exploration Requirements Development Utilizing the Strategy-to-Task-to-Technology Development Approach

    NASA Technical Reports Server (NTRS)

    Drake, Bret G.; Josten, B. Kent; Monell, Donald W.

    2004-01-01

The Vision for Space Exploration provides direction for the National Aeronautics and Space Administration to embark on a robust space exploration program that will advance the Nation's scientific, security, and economic interests. This plan calls for a progressive expansion of human capabilities beyond low Earth orbit, seeking to answer profound scientific and philosophical questions while responding to discoveries along the way. In addition, the Vision articulates the strategy for developing the revolutionary new technologies and capabilities required for the future exploration of the solar system. The National Aeronautics and Space Administration faces new challenges in successfully implementing the Vision. In order to implement a sustained and affordable exploration endeavor, it is vital for NASA to do business differently. This paper provides an overview of the strategy-to-task-to-technology process being used by NASA's Exploration Systems Mission Directorate to develop the requirements and system acquisition details necessary for implementing a sustainable exploration vision.

  3. Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder.

    PubMed

    Kheradpisheh, Saeed R; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée

    2016-01-01

    View-invariant object recognition is a challenging problem that has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g., 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best models for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs at view-invariant object recognition task using the same set of images and controlling the kinds of transformation (position, scale, rotation in plane, and rotation in depth) as well as their magnitude, which we call "variation level." We used four object categories: car, ship, motorcycle, and animal. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs (proposed respectively by Hinton's group and Zisserman's group) on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position (much easier). This suggests that DCNNs would be reasonable models of human feed-forward vision. In addition, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research.

  4. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
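The multinomial log-linear model class named above can be sketched as softmax regression fit by gradient descent. This is a generic illustration assuming NumPy; the authors' actual predictors (road environment, lane model, mechanical state) are replaced by an arbitrary feature matrix, and the training settings are ours:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, lr=0.5, steps=500):
    """Fit a multinomial logistic (log-linear) model by gradient descent.

    X: (samples, features) predictors; y: integer class labels.
    Returns a weight matrix with an intercept row prepended.
    """
    X = np.hstack([np.ones((len(X), 1)), X])       # intercept column
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                       # one-hot targets
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)           # cross-entropy gradient
    return W

def predict(W, X):
    """Most probable class for each row of X under the fitted model."""
    X = np.hstack([np.ones((len(X), 1)), X])
    return np.argmax(X @ W, axis=1)
```

In the paper's setting the classes would be discretized perception-response-time categories, and the fitted coefficients quantify how each visual or mechanical predictor shifts their log-odds.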

  5. Hybrid vision activities at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.

  6. Sensor Control of Robot Arc Welding

    NASA Technical Reports Server (NTRS)

    Sias, F. R., Jr.

    1983-01-01

    The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.

  7. Recognizing Materials using Perceptually Inspired Features

    PubMed Central

    Sharan, Lavanya; Liu, Ce; Rosenholtz, Ruth; Adelson, Edward H.

    2013-01-01

    Our world consists not only of objects and scenes but also of materials of various kinds. Being able to recognize the materials that surround us (e.g., plastic, glass, concrete) is important for humans as well as for computer vision systems. Unfortunately, materials have received little attention in the visual recognition literature, and very few computer vision systems have been designed specifically to recognize materials. In this paper, we present a system for recognizing material categories from single images. We propose a set of low and mid-level image features that are based on studies of human material recognition, and we combine these features using an SVM classifier. Our system outperforms a state-of-the-art system [Varma and Zisserman, 2009] on a challenging database of real-world material categories [Sharan et al., 2009]. When the performance of our system is compared directly to that of human observers, humans outperform our system quite easily. However, when we account for the local nature of our image features and the surface properties they measure (e.g., color, texture, local shape), our system rivals human performance. We suggest that future progress in material recognition will come from: (1) a deeper understanding of the role of non-local surface properties (e.g., extended highlights, object identity); and (2) efforts to model such non-local surface properties in images. PMID:23914070
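    The pipeline described, perceptually inspired features combined in a single classifier, can be sketched as follows. The feature definitions and the toy "materials" are assumptions for illustration, and a perceptron stands in for the paper's SVM to keep the sketch dependency-free.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def color_histogram(img, bins=8):
        # Coarse intensity histogram as a stand-in for a color feature.
        h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        return h / img.size

    def texture_energy(img):
        # Mean absolute gradients as a crude local-texture feature.
        gx = np.abs(np.diff(img, axis=1)).mean()
        gy = np.abs(np.diff(img, axis=0)).mean()
        return np.array([gx, gy])

    def features(img):
        # Concatenate feature families before classification, as in the paper.
        return np.concatenate([color_histogram(img), texture_energy(img)])

    # Toy "materials": smooth (glass-like) vs noisy (fabric-like) patches.
    smooth = [np.full((16, 16), rng.uniform(0.3, 0.7)) for _ in range(20)]
    noisy = [rng.uniform(0, 1, size=(16, 16)) for _ in range(20)]
    X = np.stack([features(p) for p in smooth + noisy])
    y = np.array([0] * 20 + [1] * 20)

    # Perceptron as a dependency-free stand-in for the SVM classifier.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(100):
        for xi, yi in zip(X, 2 * y - 1):
            if yi * (xi @ w + b) <= 0:
                w += yi * xi
                b += yi

    pred = ((X @ w + b) > 0).astype(int)
    print("training accuracy:", (pred == y).mean())
    ```

    The paper's actual features are richer (and derived from studies of human material perception), but the structure, several feature families concatenated and fed to one margin-based classifier, is the same.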

  8. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection.

    PubMed

    Hayhoe, Mary M; Matthis, Jonathan Samir

    2018-08-06

    The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.

  9. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  10. Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues

    DTIC Science & Technology

    2014-10-28

    Keywords: Stereopsis, Binocular Vision, Optometry, Depth Perception, 3D vision, 3D human factors, Stereoscopic displays, S3D, Virtual environment. Distribution A: Approved

  11. INTRODUCTION TO THE MOVEMENT SYSTEM AS THE FOUNDATION FOR PHYSICAL THERAPIST PRACTICE EDUCATION AND RESEARCH.

    PubMed

    Saladin, Lisa; Voight, Michael

    2017-11-01

    In 2013, the American Physical Therapy Association (APTA) adopted an inspiring new vision, "Transforming society by optimizing movement to improve the human experience." This new vision for our profession calls us to action as physical therapists to transform society by using our skills, knowledge, and expertise related to the movement system in order to optimize movement, promote health and wellness, mitigate the progression of impairments, and prevent the development of (additional) disability. The guiding principle of the new vision is "identity," which can be summarized as "The physical therapy profession will define and promote the movement system as the foundation for optimizing movement to improve the health of society." Recognition and validation of the movement system is essential to understand the structure, function, and potential of the human body. As currently defined, the "movement system" represents the collection of systems (cardiovascular, pulmonary, endocrine, integumentary, nervous, and musculoskeletal) that interact to move the body or its component parts. By better characterizing physical therapists as movement system experts, we seek to solidify our professional identity within the medical community and society. The physical therapist will be responsible for evaluating and managing an individual's movement system across the lifespan to promote optimal development; diagnose impairments, activity limitations, and participation restrictions; and provide interventions targeted at preventing or ameliorating activity limitations and participation restrictions.

  12. The economics of motion perception and invariants of visual sensitivity.

    PubMed

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  13. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. This entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, inertial measurement unit, sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground. A ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  14. Three-dimensional displays and stereo vision

    PubMed Central

    Westheimer, Gerald

    2011-01-01

    Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023

  15. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchical structured neocognitron, high order correlator, network with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data form; other systems feed raw data directly to the networks as input signals. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of human vision models and neural network models are analyzed.

  16. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    PubMed Central

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data captured from the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by nonlinear manifold learning and constructs the adaptive kernel shape in the high-dimensional shape space. It improves mean shift tracker performance in tracking object position and object contour and in avoiding background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing human contour.
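    The core mean shift iteration underlying such trackers is compact enough to sketch. The snippet below shows only the basic update with a fixed Gaussian kernel on synthetic 2D "object pixels"; the paper's contribution, a kernel whose shape adapts to the object via manifold learning, is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy "object pixels": samples clustered around the true object position.
    true_pos = np.array([12.0, 7.0])
    points = true_pos + rng.normal(scale=1.5, size=(500, 2))

    def mean_shift_step(pos, points, bandwidth=3.0):
        # One mean shift update: kernel-weighted mean of nearby samples.
        d2 = ((points - pos) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        return (points * w[:, None]).sum(axis=0) / w.sum()

    pos = np.array([0.0, 0.0])  # poor initial guess
    for _ in range(30):
        pos = mean_shift_step(pos, points)

    print("estimated position:", pos.round(1))
    ```

    Iterating the weighted-mean update climbs to a mode of the sample density; in a tracker, the samples come from pixels weighted by their similarity to the target model, and the converged position is the new object location.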

  17. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  18. Colour vision experimental studies in teaching of optometry

    NASA Astrophysics Data System (ADS)

    Ozolinsh, Maris; Ikaunieks, Gatis; Fomins, Sergejs

    2005-10-01

    The following aspects of human colour vision are included in experimental lessons for optometry students at the University of Latvia: characteristics of coloured stimuli (emissive and reflective) and determination of their coordinates in different colour spaces; objective characterization of the transmission of colour stimuli through the optical system of the eye together with various types of appliances (lenses, prisms, Fresnel prisms); psychophysical determination of mono- and polychromatic stimulus perception, taking into account the physiology of the eye, the topography and spectral sensitivity of retinal colour photoreceptors, and the spatial and temporal characteristics of retinal receptive fields; and the ergonomics of visual perception, including the influence of illumination and glare effects and the testing of colour vision deficiencies.

  19. A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery.

    PubMed

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J

    2014-09-26

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.

  20. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  1. Episodic Reasoning for Vision-Based Human Action Recognition

    PubMed Central

    Martinez-del-Rincon, Jesus

    2014-01-01

    Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretations and video analysis applications, few have realized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by analyzing body postures alone. On the contrary, this task should be supported by profound knowledge of the nature of human agency and its tight connection to the reasons and motivations that explain it. The combination of this knowledge and the knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has in improving the accuracy of a computer vision system for human action recognition. This work also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning. PMID:24959602

  2. Humans and machines in space: The vision, the challenge, the payoff; Proceedings of the 29th Goddard Memorial Symposium, Washington, Mar. 14, 15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    The present conference discusses the currently envisioned goals of human-machine systems in spacecraft environments, prospects for human exploration of the solar system, and plausible methods for meeting human needs in space. Also discussed are the problems of human-machine interaction in long-duration space flights, remote medical systems for space exploration, the use of virtual reality for planetary exploration, the alliance between U.S. Antarctic and space programs, and the economic and educational impacts of the U.S. space program.

  3. Health Maintenance System (HMS) CMO - Fundoscope

    NASA Image and Video Library

    2015-04-09

    ISS043E099306 (04/09/2015) --- NASA astronauts Terry Virts (bottom) and Scott Kelly (top) are seen here inside the Destiny Laboratory performing eye exams as part of ongoing studies into crew vision health. Vision changes in astronauts spending long periods of time in microgravity are a critical health issue that scientists are looking to solve as humanity prepares to travel to destinations far beyond our planet, such as an asteroid or Mars.

  4. Robot and Human Surface Operations on Solar System Bodies

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Easter, R.; Rodriguez, G.

    2001-01-01

    This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Human and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.

  5. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science. Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.

  6. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution through an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
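    An image-to-sound mapping of the kind described, image distributed over time, brightness driving an audio signal as a function of frequency and time, can be sketched as follows. The sample rate, frequency range, and frame duration below are illustrative assumptions, not the VISOR device's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def image_to_sound(img, sr=8000, frame_dur=0.05, f_lo=200.0, f_hi=2000.0):
        """Map columns to time and rows to frequency; brightness sets amplitude.
        Parameter values are illustrative, not those of the VISOR device."""
        rows, cols = img.shape
        freqs = np.linspace(f_hi, f_lo, rows)  # top of image -> high pitch
        n = int(sr * frame_dur)
        t = np.arange(n) / sr
        frames = []
        for c in range(cols):
            # Each column becomes one audio frame: a sum of sinusoids whose
            # amplitudes are the pixel brightnesses in that column.
            tones = img[:, c][:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
            frames.append(tones.sum(axis=0) / rows)  # mix rows, bound amplitude
        return np.concatenate(frames)

    img = rng.uniform(0, 1, size=(32, 20))  # a toy 32x20 brightness map
    audio = image_to_sound(img)
    print(audio.shape)  # one 0.05 s frame per image column
    ```

    Scanning columns left to right yields a short "soundscape" per image; a listener can, with training, learn to associate spectral-temporal patterns with spatial structure.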

  7. Stupid Tutoring Systems, Intelligent Humans

    ERIC Educational Resources Information Center

    Baker, Ryan S.

    2016-01-01

    The initial vision for intelligent tutoring systems involved powerful, multi-faceted systems that would leverage rich models of students and pedagogies to create complex learning interactions. But the intelligent tutoring systems used at scale today are much simpler. In this article, I present hypotheses on the factors underlying this development,…

  8. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources.

    PubMed

    Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. 
This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems.
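    The fusion idea, aggregating human and machine decisions while discounting each source by its uncertainty, can be sketched with a simple uncertainty-weighted average. This is a generic stand-in for illustration, not the paper's actual fuzzy decision-making fuser.

    ```python
    def fuse(human_conf, machine_conf, human_unc, machine_unc):
        """Uncertainty-weighted average of two confidence scores in [0, 1].
        A generic evidence-fusion stand-in, not the paper's FDMF."""
        w_h = 1.0 - human_unc    # discount the human decision by its uncertainty
        w_m = 1.0 - machine_unc  # discount the machine decision likewise
        return (w_h * human_conf + w_m * machine_conf) / (w_h + w_m)

    # A fatigued human (high uncertainty) defers to a confident vision model.
    score = fuse(human_conf=0.4, machine_conf=0.9, human_unc=0.8, machine_unc=0.1)
    print(round(score, 2))  # prints 0.81, weighted toward the machine's evidence
    ```

    The paper's fuser additionally uses fuzzy membership functions and adapts as the evidence sources change, but the core principle is the same: the less certain agent contributes less to the joint decision.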

  9. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources

    PubMed Central

    Liu, Yu-Ting; Pal, Nikhil R.; Marathe, Amar R.; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. 
This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems. PMID:28676734
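The decision-fusion idea in this record can be illustrated with a toy uncertainty-weighted fuser. This is a minimal sketch under assumed inputs (a target probability and an uncertainty score per agent), not the authors' FDMF, which operates on fuzzy summaries of the evidence:

```python
# Hypothetical sketch of uncertainty-weighted human/machine decision fusion,
# in the spirit of the fuzzy decision-making fuser (FDMF); illustrative only.

def fuse_decisions(p_human, p_machine, u_human, u_machine):
    """Fuse two target probabilities, down-weighting the less certain agent.

    p_*: probability that the current RSVP image contains a target (0..1).
    u_*: uncertainty of each agent (0 = fully certain, 1 = no information).
    """
    w_h = 1.0 - u_human
    w_m = 1.0 - u_machine
    if w_h + w_m == 0.0:
        return 0.5  # no usable evidence from either agent
    return (w_h * p_human + w_m * p_machine) / (w_h + w_m)

# A fatigued human (high uncertainty) contributes less than the vision module.
fused = fuse_decisions(p_human=0.4, p_machine=0.9, u_human=0.8, u_machine=0.1)
```

Dynamically re-estimating the uncertainty weights over time is what lets such a system compensate for fatigue, inattention, and stress.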

  10. The contribution of stereo vision to the control of braking.

    PubMed

    Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu

    2008-03-01

In this study, the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10 m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking stereo vision. A lack of stereo vision was associated with more prudent braking behaviour, in which the driver allowed for a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system or by a systematic underestimation of the remaining distance due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.

  11. 2006 NASA Strategic Plan

    NASA Technical Reports Server (NTRS)

    2006-01-01

On January 14, 2004, President George W. Bush announced A Renewed Spirit of Discovery: The President's Vision for U.S. Space Exploration, a new directive for the Nation's space program. The fundamental goal of this directive is "to advance U.S. scientific, security, and economic interests through a robust space exploration program." In issuing it, the President committed the Nation to a journey of exploring the solar system and beyond: returning to the Moon in the next decade, then venturing further into the solar system, ultimately sending humans to Mars and beyond. He challenged NASA to establish new and innovative programs to enhance understanding of the planets, to ask new questions, and to answer questions that are as old as humankind. NASA enthusiastically embraced the challenge of extending a human presence throughout the solar system as the Agency's Vision, and in the NASA Authorization Act of 2005, Congress endorsed the Vision for Space Exploration and provided additional guidance for implementation. NASA is committed to achieving this Vision and to making all changes necessary to ensure success and a smooth transition. These changes will include increasing internal collaboration, leveraging personnel and facilities, developing strong, healthy NASA Centers, and fostering a safe environment of respect and open communication for employees at all levels. NASA also will ensure clear accountability and solid program management and reporting practices. Over the next 10 years, NASA will focus on six Strategic Goals to move forward in achieving the Vision for Space Exploration. Each of the six Strategic Goals is clearly defined and supported by multi-year outcomes that will enhance NASA's ability to measure and report Agency accomplishments in this quest.

  12. Humanoids for lunar and planetary surface operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing

    2005-01-01

This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with recent plans for human and robotic exploration, aligning a set of milestones for operational capability of humanoids with the schedule for the next decades and the development spirals of Project Constellation. These milestones relate to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.

  13. Man-machine interactive imaging and data processing using high-speed digital mass storage

    NASA Technical Reports Server (NTRS)

    Alsberg, H.; Nathan, R.

    1975-01-01

The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general-purpose digital computer with an extensive special-purpose software system is used to perform an almost unlimited repertoire of processing operations.

  14. Conceptual Design Standards for eXternal Visibility System (XVS) Sensor and Display Resolution

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Wilz, Susan J.; Arthur, Jarvis J, III

    2012-01-01

NASA is investigating eXternal Visibility Systems (XVS) concepts which are a combination of sensor and display technologies designed to achieve an equivalent level of safety and performance to that provided by forward-facing windows in today's subsonic aircraft. This report provides the background for conceptual XVS design standards for display and sensor resolution. XVS resolution requirements were derived on the basis of equivalent performance. Three measures were investigated: a) human vision performance; b) see-and-avoid performance and safety; and c) see-to-follow performance. From these three factors, a minimum, but perhaps not sufficient, resolution requirement of 60 pixels per degree was shown for human vision equivalence. However, see-and-avoid and see-to-follow performance requirements are nearly double. This report also reviewed historical XVS testing.
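The 60 pixels-per-degree figure corresponds to one pixel per arcminute, the conventional 20/20 acuity limit; a minimal arithmetic sketch (the function names are illustrative, not from the report):

```python
import math

def pixels_per_degree(horizontal_pixels, fov_degrees):
    """Average angular resolution of a display across its field of view."""
    return horizontal_pixels / fov_degrees

def pixels_for_equivalence(fov_degrees, ppd_required=60.0):
    """Pixels needed across a field of view to meet a resolution requirement.

    ppd_required=60 corresponds to 20/20 human acuity (1 arcmin per pixel).
    """
    return math.ceil(fov_degrees * ppd_required)

# A 30-degree-wide XVS display needs at least 1800 pixels horizontally for
# human-vision equivalence; see-and-avoid would require roughly double.
```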

  15. Human Factors Engineering as a System in the Vision for Exploration

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Smith, Danielle; Holden, Kritina

    2006-01-01

In order to accomplish NASA's Vision for Exploration, while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering processes. That project identified the key phases of the current NASA design lifecycle, and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps).
For example, medical operations scenarios have been generated for lunar habitation in order to identify HSI requirements for the lunar communications architecture. Throughout these ConOps exercises, HFE personnel are testing various tools and methodologies that have been identified in the literature. A key part of the effort is the identification of optimal processes, methods, and tools for these early development phase activities, such as ConOps, requirements development, and early conceptual design. An overview of the activities completed thus far, as well as the tools and methods investigated will be presented.

  16. Present and future of vision systems technologies in commercial flight operations

    NASA Astrophysics Data System (ADS)

    Ward, Jim

    2016-05-01

The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.

  17. The Pixhawk Open-Source Computer Vision Framework for MAVs

    NASA Astrophysics Data System (ADS)

    Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M.

    2011-09-01

    Unmanned aerial vehicles (UAVs) and micro air vehicles (MAVs) are already intensively used in geodetic applications. State-of-the-art autonomous systems, however, are geared towards applications at safe, obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to obstacles and humans. PIXHAWK is an open-source and open-hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  18. Image processing for a tactile/vision substitution system using digital CNN.

    PubMed

    Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng

    2006-01-01

    In view of the parallel processing and easy implementation properties of CNN, we propose to use a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor is used to execute the wavelet down-sampling filtering and the half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image processing functions into a single CNN processor. The digital CNN processor is implemented as an intellectual property (IP) core on a Xilinx Virtex-II 2000 FPGA board. Experiments are designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, which prove the proposed digital CNN processor to be a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people.
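The half-toning operation mentioned here can be illustrated with generic Floyd-Steinberg error diffusion; this sequential NumPy sketch is a stand-in for the paper's CNN-template implementation, which runs in parallel on the FPGA:

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """Binarize a grayscale image (values in [0, 1]) by error diffusion.

    Each pixel is thresholded, and the quantization error is pushed onto
    not-yet-visited neighbours so local average intensity is preserved.
    """
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

The output is binary, which is exactly what a tactile display needs: each 1 maps to a raised pin or active stimulator.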

  19. Human Detection from a Mobile Robot Using Fusion of Laser and Vision Information

    PubMed Central

    Fotiadis, Efstathios P.; Garzón, Mario; Barrientos, Antonio

    2013-01-01

    This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method. PMID:24008280
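The final fusion step, combining calibrated probabilities from the laser and vision modules, can be sketched with a naive-Bayes rule under a conditional-independence assumption and equal class priors; the function name and simplifications are illustrative, not the authors' exact formulation:

```python
def fuse_calibrated(p_laser, p_vision):
    """Combine two calibrated human-detection probabilities.

    Assumes the two detectors are conditionally independent given the class
    and that both classes have equal prior probability.
    """
    num = p_laser * p_vision
    den = num + (1.0 - p_laser) * (1.0 - p_vision)
    return num / den if den > 0 else 0.5
```

Two moderately confident detections reinforce each other: fusing 0.8 and 0.8 yields about 0.94, while an uninformative module (0.5) leaves the other module's estimate unchanged. This is why the calibration step matters; the rule is only meaningful if each module's score is a genuine probability.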

  1. Interdisciplinary multisensory fusion: design lessons from professional architects

    NASA Astrophysics Data System (ADS)

    Geiger, Ray W.; Snell, J. T.

    1992-11-01

    Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. Human efficacy for innovation in architectural design has always reflected more the projected perceptual vision of the designer vis-à-vis the hierarchical spirit of the design process. In pursuing better ways to build and/or design things, we have found surprising success in exploring certain more esoteric applications. One of those applications is the vision of an artistic approach in and around creative problem solving. Our evaluation and research into vision and visual systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as it relates to multi-faceted and interdisciplinary design practice. Discussion will cover areas of cognitive ergonomics, natural modeling sources, and an open architectural process of means and goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process. One hypothesis is that the kinematic simulation of perceived connections between hard and soft sciences, centering on the life sciences and life in general, has become a very effective foundation for design theory and application.

  2. Object recognition based on Google's reverse image search and image similarity

    NASA Astrophysics Data System (ADS)

    Horváth, András.

    2015-12-01

    Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision: human vision is based on continuous learning of object classes, and a person requires years to learn a large taxonomy of objects which are neither disjoint nor independent. In this paper I present a system based on Google's image similarity algorithm and Google's image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.

  3. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, a personal computer (PC), whose large size makes it unsuitable for compact systems, is an indispensable component for human-computer interaction in traditional high-speed vision platforms. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which can work entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed for implementing image parallel algorithms in the FPGA and image sequential algorithms in the DSP. Hence, ER-HVP Vision, measuring 320 mm x 250 mm x 87 mm, offers these capabilities in a more compact form. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels are feasible on this newly developed vision platform.

  4. Human performance models for computer-aided engineering

    NASA Technical Reports Server (NTRS)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  5. Diffractive-optical correlators: chances to make optical image preprocessing as intelligent as human vision

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    2004-10-01

    The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities in human vision could in the near future be realized by a new diffractive-optical hardware design of optical imaging sensors: (1) illuminant-adaptive RGB-based color vision, (2) monocular 3D vision based on RGB data processing, and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye has specific diffractive-optical elements (DOEs) in aperture and in image space and seems to execute the three jobs at, or not far behind, the loci of the images of objects.

  6. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Committee 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...

  7. 78 FR 5557 - Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...

  8. 77 FR 56254 - Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...

  9. Comparison of diagnosis of early retinal lesions of diabetic retinopathy between a computer system and human experts.

    PubMed

    Lee, S C; Lee, E T; Kingsley, R M; Wang, Y; Russell, D; Klein, R; Warn, A

    2001-04-01

    To investigate whether a computer vision system is comparable with humans in detecting early retinal lesions of diabetic retinopathy using color fundus photographs. A computer system has been developed using image processing and pattern recognition techniques to detect early lesions of diabetic retinopathy (hemorrhages and microaneurysms, hard exudates, and cotton-wool spots). Color fundus photographs obtained from American Indians in Oklahoma were used in developing and testing the system. A set of 369 color fundus slides was used to train the computer system using 3 diagnostic categories: lesions present, questionable, or absent (Y/Q/N). A different set of 428 slides was used to test and evaluate the system, and its diagnostic results were compared with those of 2 human experts: the grader at the University of Wisconsin Fundus Photograph Reading Center (Madison) and a general ophthalmologist. The experiments included comparisons using 3 (Y/Q/N) and 2 diagnostic categories (Y/N) (questionable cases excluded in the latter). In the training phase, the agreement rates, sensitivity, and specificity in detecting the 3 lesions between the retinal specialist and the computer system were all above 90%. The kappa statistics were high (0.75-0.97), indicating excellent agreement between the specialist and the computer system. In the testing phase, the results obtained between the computer system and human experts were consistent with those of the training phase, and they were comparable with those between the human experts. The performance of the computer vision system in diagnosing early retinal lesions was comparable with that of human experts. Therefore, this mobile, electronically easily accessible, and noninvasive computer system could become a mass screening tool and a clinical aid in diagnosing early lesions of diabetic retinopathy.
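The kappa statistics reported here measure agreement corrected for chance. A minimal sketch of Cohen's kappa for two raters (the standard formula, not code from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance given each rater's label frequencies."""
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters gave the same label.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from the marginal label frequencies.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    if expected == 1.0:
        return 1.0  # degenerate case: both raters always use one label
    return (observed - expected) / (1.0 - expected)
```

Values of 0.75-0.97, as in this study, fall in the range conventionally read as excellent agreement; 0 means agreement no better than chance.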

  10. Digital tripwire: a small automated human detection system

    NASA Astrophysics Data System (ADS)

    Fischer, Amber D.; Redd, Emmett; Younger, A. Steven

    2009-05-01

    A low-cost, lightweight, easily deployable imaging sensor that can dependably discriminate threats from other activities within its field of view and, only then, alert the distant duty officer by transmitting a visual confirmation of the threat would provide a valuable asset to modern defense. At present, current solutions suffer from a multitude of deficiencies: size, cost, and power endurance, but most notably an inability to assess an image and conclude that it contains a threat. The human attention span cannot maintain critical surveillance over banks of displays constantly conveying such images from the field. DigitalTripwire is a small, self-contained, automated human-detection system capable of running for 1-5 days on two AA batteries. To achieve such long endurance, the DigitalTripwire system utilizes an FPGA designed with sleep functionality. The system uses robust vision algorithms, such as an innovative, partially unsupervised background-modeling algorithm, which employ several data-reduction strategies to operate in real time and achieve high detection rates. When it detects human activity, either mounted or dismounted, it sends an alert including images to notify the command center. In this paper, we describe the hardware and software design of the DigitalTripwire system. In addition, we provide detection and false-alarm rates across several challenging data sets, demonstrating the performance of the vision algorithms in autonomously analyzing the video stream and classifying moving objects into four primary categories: dismounted human, vehicle, non-human, or unknown.

  11. Adaptive Optics for the Human Eye

    NASA Astrophysics Data System (ADS)

    Williams, D. R.

    2000-05-01

    Adaptive optics can extend not only the resolution of ground-based telescopes, but also that of the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher-order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras used to image the living retina, cameras that are critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.

  12. Course for undergraduate students: analysis of the retinal image quality of a human eye model

    NASA Astrophysics Data System (ADS)

    del Mar Pérez, Maria; Yebra, Ana; Fernández-Oliveras, Alicia; Ghinea, Razvan; Ionescu, Ana M.; Cardona, Juan C.

    2014-07-01

    In the teaching of Vision Physics or Physiological Optics, knowledge and analysis of the aberrations that the human eye presents are of great interest, since this information allows a proper evaluation of the quality of the retinal image. The objective of the present work is that students acquire the competencies required to evaluate the optical quality of the human visual system for the emmetropic and ametropic eye, both with and without optical compensation. For this purpose, an optical system corresponding to the Navarro-Escudero eye model, which allows calculating and evaluating the aberrations of this eye model in different ametropic conditions, was developed employing the OSLO LT software. The optical quality of the visual system will be assessed through determination of the third- and fifth-order aberration coefficients, the spot diagram, wavefront analysis, and calculation of the Point Spread Function and the Modulation Transfer Function for ametropic individuals with myopia or hyperopia, both with and without optical compensation. This course is expected to be of great interest for students of Optics and Optometry, of the last courses of Physics, or of medical sciences related to human vision.

  13. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary-wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.

  14. Sharpening vision by adapting to flicker.

    PubMed

    Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A

    2016-11-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously, spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision, allowing people to localize inputs with greater precision and to read finer-scaled text, and selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics but can have poor spatial resolution), and magnocellular responses contribute coarse-spatial-scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.

  15. Analysis of light emitting diode array lighting system based on human vision: normal and abnormal uniformity condition.

    PubMed

    Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng

    2012-10-08

    In this paper, the condition for uniform lighting generated by a light emitting diode (LED) array was systematically studied. To take the human vision effect into consideration, the contrast sensitivity function (CSF) was adopted as the criterion for uniform lighting instead of the conventionally used Sparrow's Criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LEDs' spatial radiation distribution and the viewing condition can be analytically combined. In a specific LED array lighting system (LALS) with a foursquare LED arrangement, different types of LEDs (Lambertian and batwing type) and a given viewing condition, optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method can achieve more appropriate optimum parameters than the SC method. Additionally, an abnormal phenomenon, in which uniformity varies non-monotonically with structural parameters in an LALS with non-Lambertian LEDs, was found and analyzed. Based on this analysis, a design method for an LALS that can bring about better practicability, lower cost and a more attractive appearance was summarized.
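    The quantity that either criterion (SC or CSF) is applied to is the illuminance ripple on the target plane. A minimal sketch, assuming an n x n array of cos^m-type (Lambertian for m = 1) emitters; all parameter values are illustrative, not taken from the paper:

```python
import math

# Illustrative sketch: illuminance ripple of a square cos^m-type LED array
# on a target plane at height z. This is the raw quantity that a uniformity
# criterion (Sparrow's, or a CSF-based visibility threshold) is applied to.

def illuminance(x, y, pitch, z, n=4, m=1):
    """Sum cos^m(theta)/r^2 contributions of an n x n LED array at height z."""
    total = 0.0
    offset = (n - 1) * pitch / 2.0          # center the array on the origin
    for i in range(n):
        for j in range(n):
            dx = x - (i * pitch - offset)
            dy = y - (j * pitch - offset)
            r2 = dx * dx + dy * dy + z * z
            # E ~ cos^m(theta) / r^2, with cos(theta) = z / r
            total += (z / math.sqrt(r2)) ** m / r2
    return total

def ripple(pitch, z):
    """Peak-to-trough Michelson contrast over the central cell of the array."""
    samples = [illuminance(x * pitch / 10.0, 0.0, pitch, z)
               for x in range(-5, 6)]
    return (max(samples) - min(samples)) / (max(samples) + min(samples))
```

    The ripple contrast returned here can then be compared against a visibility threshold; the CSF method additionally weights that threshold by the spatial frequency of the ripple under the given viewing distance, which is how the viewing condition enters the design analytically.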

  16. Looking back, looking forward and aiming higher: the next generation's visions of the next 50 years in space

    NASA Astrophysics Data System (ADS)

    Thakore, B.; Frierson, T.; Coderre, K.; Lukaszczyk, A.; Karl, A.

    2009-04-01

    This paper outlines the response of students and young space professionals on the occasion of the 50th Anniversary of the first artificial satellite and the 40th anniversary of the Outer Space Treaty. The contribution has been coordinated by the Space Generation Advisory Council (SGAC) in support of the United Nations Programme on Space Applications. It follows consultation of the SGAC community through a series of meetings, online discussions and online surveys. The first two online surveys collected over 750 different visions from approximately 276 youth in over 28 countries, building on previous SGAC policy contributions. A summary of these results was presented as the top 10 visions of today's youth as an invited input to world space leaders gathered at the Symposium on "The future of space exploration: Solutions to earthly problems" held in Boston, USA, April 12-14, 2007 and at the United Nations Committee on the Peaceful Uses of Outer Space in May 2007. These key visions called for enhancing humanity's reach beyond this planet, both physically and intellectually, and were themed into three main categories: • Improvement of Human Survival Probability - sustained exploration to become a multi-planet species, humans to Mars, new treaty structures to ensure a secure space environment, etc. • Improvement of Human Quality of Life and the Environment - new political systems or astrocracy, benefits of tele-medicine, tele-education, and commercialization of space, new energy and resources: space solar power, etc. • Improvement of Human Knowledge and Understanding - complete survey of extinct and extant life forms, use of space data for advanced environmental monitoring, etc. 
This paper will summarize the outcomes of a further online survey and present key recommendations given by international youth advocates on further steps that could be taken by space agencies and organizations to make the top 10 visions a reality. In turn, the online discussions used to engage the youth audience will be recorded, helping to reflect the confidence of the younger generation in these visions. The categories listed above will also be investigated further from the technology, policy and ethical perspectives. Recent activities to further disseminate the use of space technology for solving global challenges are also discussed.

  17. Machine vision process monitoring on a poultry processing kill line: results from an implementation

    NASA Astrophysics Data System (ADS)

    Usher, Colin; Britton, Dougl; Daley, Wayne; Stewart, John

    2005-11-01

    Researchers at the Georgia Tech Research Institute designed a vision inspection system for poultry kill line sorting with the potential for process control at various points throughout a processing facility. This system has been successfully operating in a plant for over two and a half years and has been shown to provide multiple benefits. With the introduction of HACCP-Based Inspection Models (HIMP), the opportunity for automated inspection systems to emerge as viable alternatives to human screening is promising. As more plants move to HIMP, these systems have great potential for augmenting a processing facility's visual inspection process. This will help to maintain a more consistent and potentially higher throughput while helping the plant remain within the HIMP performance standards. In recent years, several vision systems have been designed to analyze the exterior of a chicken and are capable of identifying Food Safety 1 (FS1) type defects under HIMP regulatory specifications. This means that a reliable vision system can be used in a processing facility as a carcass sorter to automatically detect and divert product that is not suitable for further processing. This improves the evisceration line efficiency by creating a smaller set of features that human screeners are required to identify. This can reduce the required number of screeners or allow for faster processing line speeds. In addition to identifying FS1 category defects, the Georgia Tech vision system can also identify multiple "Other Consumer Protection" (OCP) category defects such as skin tears, bruises, broken wings, and cadavers. Monitoring these data in near real time allows the processing facility to address anomalies as soon as they occur. The Georgia Tech vision system can record minute-by-minute averages of the following defects: septicemia/toxemia, cadaver, over-scald, bruises, skin tears, and broken wings. 
In addition to these defects, the system also records the length and width of the entire chicken and of different parts such as the breast, the legs, the wings, and the neck. The system also records average color and mis-hung birds, which can cause problems in further processing. Other relevant production information is also recorded, including truck arrival and offloading times, catching crew and flock serviceman data, the grower, the breed of chicken, and the number of dead-on-arrival (DOA) birds per truck. Several interesting observations from the Georgia Tech vision system, which has been installed in a poultry processing plant for several years, are presented. Trend analysis has been performed on the performance of the catching crews and flock servicemen, and on the results of the processed chickens as they relate to the bird dimensions and equipment settings in the plant. The results have allowed researchers and plant personnel to identify potential areas for improvement in the processing operation, which should result in improved efficiency and yield.
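    The minute-by-minute defect logging described above can be sketched as a simple aggregation (a toy illustration; the field names and values are hypothetical, not the plant system's):

```python
from collections import defaultdict

# Toy sketch of per-minute defect-rate logging of the kind described above.
# Each event is (timestamp_seconds, defect_name, present), where present is
# 1 if the defect was detected on that bird and 0 otherwise.

def minute_averages(events):
    """Return {(minute, defect): fraction of birds showing the defect}."""
    counts = defaultdict(lambda: [0, 0])            # (minute, defect) -> [hits, birds]
    for t, defect, present in events:
        key = (int(t // 60), defect)
        counts[key][0] += present
        counts[key][1] += 1
    return {k: hits / birds for k, (hits, birds) in counts.items()}

# Three birds: two in minute 0, one in minute 1 (hypothetical data)
rates = minute_averages([(5, "bruise", 1), (30, "bruise", 0), (70, "bruise", 1)])
```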

  18. Humanoids in Support of Lunar and Planetary Surface Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Keymeulen, Didier

    2006-01-01

    This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair and operation of lunar/planetary habitats, bases and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and the development spirals of Project Constellation. These milestones relate to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using the small-scale Fujitsu HOAP-2 humanoid is outlined.

  19. Challenges towards realization of health care sector goals of Tanzania development vision 2025: training and deployment of graduate human resource for health.

    PubMed

    Siril, Nathanael; Kiwara, Angwara; Simba, Daud

    2013-06-01

    Human resource for health (HRH) is an essential building block for an effective and efficient health care system. In Tanzania this component faces many challenges which, in synergy with others, make the health care system inefficient. In Vision 2025 the country recognizes the importance of the health care sector in attaining a quality livelihood for its citizens. The vision is in its 13th year since its launch. Given the central role of HRH in the attainment of this vision, how HRH are trained and deployed deserves a deeper understanding. The objective was to analyze the factors affecting the training and deployment of graduate-level HRH of three core cadres (Medical Doctors, Doctors of Dental Surgery and Bachelors of Pharmacy) towards realization of Development Vision 2025. An explorative study design was used in five training institutions for health and at the Ministry of Health and Social Welfare (MoHSW) headquarters, utilizing in-depth interviews, observations and review of available documents. The training institutions, which are the cornerstone of HRH training, are understaffed, underfunded (donor dependent), have low admitting capacities, and lack co-ordination with other key stakeholders dealing with health. The deployment of graduate-level HRH is affected by a limited budget, deployment decisions being handled by another ministry rather than the MoHSW, competition between the health care sector and other sectors, and lack of co-ordination between employers, trainers and other key health care sector stakeholders. Awareness of Vision 2025 is low in the training institutions. For the Vision 2025 health care sector goals to be realized, well-devised strategies for raising awareness in the training institutions are recommended. Quality livelihood as stated in Vision 2025 will be a forgotten dream if the challenges facing the training and deployment of graduate-level HRH are not addressed in a timely manner. 
It is the authors' view that reduction of the donor dependency syndrome, extension of the retirement age for academic staff in the training institutions for health, and synergizing the training and deployment of graduate-level HRH can be among the initial strategies for addressing these challenges.

  20. Explore with Us

    NASA Technical Reports Server (NTRS)

    Morales, Lester

    2012-01-01

    The fundamental goal of this vision is to advance U.S. scientific, security and economic interests through a robust space exploration program. Implement a sustained and affordable human and robotic program to explore the solar system and beyond. Extend human presence across the solar system, starting with a human return to the Moon by the year 2020, in preparation for human exploration of Mars and other destinations. Develop the innovative technologies, knowledge, and infrastructures both to explore and to support decisions about the destinations for human exploration. Promote international and commercial participation in exploration to further U.S. scientific, security, and economic interests.

  1. 78 FR 16756 - Twenty-Second Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  2. 78 FR 55774 - Twenty Fourth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-11

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October...

  3. 75 FR 17202 - Eighth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  4. 75 FR 44306 - Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ... Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The...

  5. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY...: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will...

  6. Doing the Humanities: The Use of Undergraduate Classroom Humanities Research Projects.

    ERIC Educational Resources Information Center

    Geib, George W.

    "American Visions" is a freshman-level survey course offered by the Department of History as part of Butler University's core curriculum. The course is built around three primary contextual considerations: high culture, popular culture, and community culture. The high culture approach is designed to introduce students to major systems of thought…

  7. Scanning System -- Technology Worth a Look

    Treesearch

    Philip A. Araman; Daniel L. Schmoldt; Richard W. Conners; D. Earl Kline

    1995-01-01

    In an effort to help automate the inspection for lumber defects, optical scanning systems are emerging as an alternative to the human eye. Although still in its infancy, scanning technology is being explored by machine companies and universities. This article was excerpted from "Machine Vision Systems for Grading and Processing Hardwood Lumber," by Philip...

  8. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

    Single-pixel dynamic-range compression alters a particular input value to a unique output value: a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model these scene-dependent appearances, the problem remains that an analysis of the scene is still needed to calculate the scene-dependent strengths of each of the filters for each frequency.

  9. Population Spotting Using Big Data: Validating the Human Performance Concept of Operations Analytic Vision

    DTIC Science & Technology

    2017-01-01

    AFRL-SA-WP-SR-2017-0001: Population Spotting Using “Big Data”: Validating the Human Performance Concept of Operations Analytic Vision.

  10. Human low vision image warping - Channel matching considerations

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Smith, Alan T.; Loshin, David S.

    1992-01-01

    We are investigating the possibility that a video image may productively be warped prior to presentation to a low vision patient. This could form part of a prosthesis for certain field defects. We have done preliminary quantitative studies on some notions that may be valid in calculating the image warpings. We hope the results will help make the best use of time to be spent with human subjects, by guiding the selection of parameters and their ranges to be investigated. We liken a warping optimization to opening the largest number of spatial channels between the pixels of an input imager and resolution cells in the visual system. Some important effects that will require human evaluation are not quantified, such as local 'squashing' of the image, taken as the ratio of eigenvalues of the Jacobian of the transformation. The results indicate that the method shows quantitative promise. These results have identified some geometric transformations to evaluate further with human subjects.
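    The local 'squashing' measure mentioned above (the ratio of the eigenvalues, or more generally the singular values, of the warp Jacobian) can be computed numerically. A sketch, using an invented radial-magnifier warp as a stand-in for the prosthetic remappings studied in the paper:

```python
import math

# Illustrative sketch (not the authors' code): local "squashing" of a 2-D
# image warp, measured as the ratio of singular values of its Jacobian,
# estimated by central finite differences.

def warp(x, y, k=0.5):
    """Radially symmetric remapping: r -> r / (1 + k*r). Hypothetical example."""
    r = math.hypot(x, y)
    if r == 0.0:
        return 0.0, 0.0
    s = (r / (1.0 + k * r)) / r
    return x * s, y * s

def squash_ratio(x, y, h=1e-5):
    """Ratio of larger to smaller singular value of the warp Jacobian at (x, y)."""
    ux1, uy1 = warp(x + h, y); ux0, uy0 = warp(x - h, y)
    vx1, vy1 = warp(x, y + h); vx0, vy0 = warp(x, y - h)
    a, b = (ux1 - ux0) / (2 * h), (vx1 - vx0) / (2 * h)   # du/dx, du/dy
    c, d = (uy1 - uy0) / (2 * h), (vy1 - vy0) / (2 * h)   # dv/dx, dv/dy
    # Singular values of [[a, b], [c, d]]: sigma^2 = e +/- f
    e = (a * a + b * b + c * c + d * d) / 2.0
    f = math.sqrt(max(e * e - (a * d - b * c) ** 2, 0.0))
    return math.sqrt((e + f) / max(e - f, 1e-300))
```

    For this warp the ratio works out to 1 + k*r analytically, so the squashing grows with eccentricity, exactly the kind of local distortion the authors expect to need human evaluation.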

  11. Do you see what I see? The difference between dog and human visual perception may affect the outcome of experiments.

    PubMed

    Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András

    2017-07-01

    The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors do not explicitly take dog-human differences in visual perception into consideration when designing their experiments. With an image manipulation program we altered stationary images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software shows the effect of their lower visual acuity and brightness discrimination, too. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing or glancing to the left or right side. Half of the pictures were shown after they were altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with a slower response time than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between the perceptual abilities of dogs and humans by developing visual stimuli that fit dogs' visual capabilities more appropriately. Copyright © 2017 Elsevier B.V. All rights reserved.
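    A crude per-pixel approximation of the dichromatic component of such a transformation can be sketched as follows (illustrative only, not the authors' software, which also models the acuity and brightness-discrimination losses via blurring and contrast reduction):

```python
# Crude sketch of a "dog-vision" color rendering. Dogs are red-green
# dichromats, so as a first approximation the long- and middle-wavelength
# (red and green) responses are merged into a single yellowish channel,
# leaving the blue channel intact. This models only the color dimension.

def dog_vision_pixel(r, g, b):
    """Map an RGB pixel toward a blue-yellow dichromatic palette."""
    yellow = 0.5 * (r + g)          # red and green collapse together
    return (int(yellow), int(yellow), int(b))

# A pure red and a pure green pixel become indistinguishable:
assert dog_vision_pixel(200, 0, 0) == dog_vision_pixel(0, 200, 0)
```

    Applying such a map per pixel, followed by a blur pass for the lower acuity, approximates why red-green glance cues that are obvious to humans can vanish in the dog-vision setting.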

  12. 76 FR 11847 - Thirteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-03

    ... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  13. 76 FR 20437 - Fourteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-12

    ... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  14. Current Technologies and its Trends of Machine Vision in the Field of Security and Disaster Prevention

    NASA Astrophysics Data System (ADS)

    Hashimoto, Manabu; Fujino, Yozo

    Image sensing technologies are expected to be useful and effective means of suppressing damage from crime and disasters in a safe and secure society. In this paper, we describe the current important subjects, required functions, technical trends, and a couple of real examples of developed systems. As for video surveillance, recognition of human trajectories and human behavior using image processing techniques is introduced, with a real example of violence detection in elevators. In the field of facility monitoring technologies for civil engineering, useful machine vision applications are shown, such as automatic detection of concrete cracks on the walls of a building and recognition of crowds on a bridge for effective guidance in an emergency.

  15. Analysis and implementation of the foveated vision of the raptor eye

    NASA Astrophysics Data System (ADS)

    Long, Aaron D.; Narayanan, Ram M.; Kane, Timothy J.; Rice, Terence F.; Tauber, Michael J.

    2016-05-01

    A foveated optical system has non-uniform resolution across its field of view. Typically, the resolution of such a lens is peaked in the center region of field of view, such as in the human eye. In biological systems this is often a result of localized depressions on the retina called foveae. Birds of prey, or raptors, have two foveae in each eye, each of which accounts for a localized region of high magnification within the raptor's field of view. This paper presents an analysis of the bifoveated vision of raptors and presents a method whereby this unique optical characteristic may be achieved in an optical system using freeform optics and aberration correction techniques.

  16. Simulation Based Acquisition for NASA's Office of Exploration Systems

    NASA Technical Reports Server (NTRS)

    Hale, Joe

    2004-01-01

    In January 2004, President George W. Bush unveiled his vision for NASA to advance U.S. scientific, security, and economic interests through a robust space exploration program. This vision includes the goal to extend human presence across the solar system, starting with a human return to the Moon no later than 2020, in preparation for human exploration of Mars and other destinations. In response to this vision, NASA has created the Office of Exploration Systems (OExS) to develop the innovative technologies, knowledge, and infrastructures to explore and support decisions about human exploration destinations, including the development of a new Crew Exploration Vehicle (CEV). Within the OExS organization, NASA is implementing Simulation Based Acquisition (SBA), a robust Modeling & Simulation (M&S) environment integrated across all acquisition phases and programs/teams, to make the realization of the President's vision more certain. Executed properly, SBA will foster better informed, timelier, and more defensible decisions throughout the acquisition life cycle. By doing so, SBA will improve the quality of NASA systems and speed their development, at less cost and risk than would otherwise be the case. SBA is a comprehensive, Enterprise-wide endeavor that necessitates an evolved culture, a revised spiral acquisition process, and an infrastructure of advanced Information Technology (IT) capabilities. SBA encompasses all project phases (from requirements analysis and concept formulation through design, manufacture, training, and operations), professional disciplines, and activities that can benefit from employing SBA capabilities. 
SBA capabilities include: developing and assessing system concepts and designs; planning manufacturing, assembly, transport, and launch; training crews, maintainers, launch personnel, and controllers; planning and monitoring missions; responding to emergencies by evaluating effects and exploring solutions; and communicating across the OExS enterprise, within the Government, and with the general public. The SBA process features empowered collaborative teams (including industry partners) to integrate requirements, acquisition, training, operations, and sustainment. The SBA process also utilizes an increased reliance on and investment in M&S to reduce design risk. SBA originated as a joint Industry and Department of Defense (DoD) initiative to define and integrate an acquisition process that employs robust, collaborative use of M&S technology across acquisition phases and programs. The SBA process was successfully implemented in the Air Force's Joint Strike Fighter (JSF) Program.

  17. New design environment for defect detection in web inspection systems

    NASA Astrophysics Data System (ADS)

    Hajimowlana, S. Hossain; Muscedere, Roberto; Jullien, Graham A.; Roberts, James W.

    1997-09-01

    One of the aims of industrial machine vision is to develop computer and electronic systems intended to replace human vision in the quality control of industrial production. In this paper we discuss a new design environment developed for real-time defect detection using a reconfigurable FPGA and DSP processor mounted inside a DALSA programmable CCD camera. The FPGA is directly connected to the video data-stream and outputs data to a low-bandwidth output bus. The system is targeted at web inspection but has the potential for broader application areas. We describe and show test results of the prototype system board, mounted inside a DALSA camera, and discuss some of the algorithms currently simulated and implemented for web inspection applications.

  18. Malathion.

    ERIC Educational Resources Information Center

    Brenner, Loretta

    1992-01-01

    Discusses research findings about malathion, a widely used insecticide, concerning potential for human exposure; how malathion works and is used; toxicity; carcinogenicity; mutagenicity; associated birth defects; reproductive effects; effects on vision, diet, behavior, and immune systems; contaminants and analogues, synergists, residues, inert…

  19. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-01-01

    This book presents the papers given at a conference on artificial intelligence, expert systems and knowledge bases. Topics considered at the conference included automating expert system development, modeling expert systems, causal maps, data covariances, robot vision, image processing, multiprocessors, parallel processing, VLSI structures, man-machine systems, human factors engineering, cognitive decision analysis, natural language, computerized control systems, and cybernetics.

  20. Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: The translation stage is used to move the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
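    The FFT-based focus metric described above can be sketched as follows (an illustration of the principle, not NASA's code; the test images are synthetic):

```python
import numpy as np

# Sketch of an FFT-based focus metric: score an image by the fraction of
# its spectral energy at high spatial frequencies. Sweeping the stage and
# keeping the position that maximizes this score implements the autofocus
# loop described above.

def focus_score(image, cutoff=0.125):
    """Fraction of spectral energy above `cutoff` cycles/pixel."""
    spectrum = np.abs(np.fft.fft2(image)) ** 2
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    high = np.hypot(fy, fx) > cutoff
    return spectrum[high].sum() / spectrum.sum()

# An in-focus target carries more high-frequency energy than a defocused one
sharp = np.zeros((64, 64))
sharp[24:40, 24:40] = 1.0                      # a bright square target
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)   # 7x7 box blur
              for dy in range(-3, 4) for dx in range(-3, 4)) / 49.0
assert focus_score(sharp) > focus_score(blurred)
```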

  1. Color Vision and Hue Categorization in Young Human Infants

    ERIC Educational Resources Information Center

    Bornstein, Marc H.; And Others

    1976-01-01

    The main objective of the present investigations was to determine whether or not young human infants see the physical spectrum in a categorical fashion as human adults and animals who possess color vision regularly do. (Author)

  2. Helmet-mounted pilot night vision systems: Human factors issues

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.; Brickner, Michael S.

    1989-01-01

    Helmet-mounted displays of infrared imagery (forward-looking infrared (FLIR)) allow helicopter pilots to perform low level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using FLIR systems for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations restrains the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of technology.

  3. An egocentric vision based assistive co-robot.

    PubMed

    Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang

    2013-06-01

    We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who lack the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object-finding task within a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that, the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.
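
    The soliciting-guidance loop described in this record can be sketched as follows. This is a minimal illustration only; the function names, the step budget, and the gesture vocabulary are invented placeholders, not the authors' implementation:

```python
# Closed-loop search: try autonomously, then solicit a head-gesture heading.
# All names here (find_object, ask_user_heading) are hypothetical stand-ins.

def assisted_search(find_object, ask_user_heading, budget=5):
    # Phase 1: fully autonomous search within a pre-specified time window.
    for _ in range(budget):
        if find_object(hint=None):
            return "found autonomously"
    # Phase 2: actively solicit a head-gesture heading from the user's
    # egocentric camera, then resume navigation with that hint.
    heading = ask_user_heading()
    for _ in range(budget):
        if find_object(hint=heading):
            return "found with user guidance"
    return "not found"

# Toy stub: the object only becomes findable once the robot is oriented left.
def fake_find(hint=None):
    return hint == "left"

print(assisted_search(fake_find, lambda: "left"))  # -> found with user guidance
```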

  4. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
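
    The core selection rule of such a query engine can be illustrated in a few lines: among candidate binary questions, ask the one whose estimated answer probability, given the history, is closest to 1/2. The probability table below is invented for illustration; in the paper these estimates come from statistical constraints learned on a training set:

```python
# Pick the question whose answer is least predictable given the history.
def next_question(candidates, p_yes_given_history):
    """candidates: question strings; p_yes_given_history: question -> P(yes)."""
    return min(candidates, key=lambda q: abs(p_yes_given_history(q) - 0.5))

# Hypothetical conditional answer probabilities for three candidate questions.
p = {"is there a person?": 0.97,        # nearly always yes -> uninformative
     "is the person walking?": 0.55,    # close to 1/2 -> informative
     "is the person wearing a hat?": 0.20}
print(next_question(list(p), p.get))  # -> is the person walking?
```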

  5. A pseudoisochromatic test of color vision for human infants.

    PubMed

    Mercer, Michele E; Drodge, Suzanne C; Courage, Mary L; Adams, Russell J

    2014-07-01

    Despite the development of experimental methods capable of measuring early human color vision, we still lack a procedure comparable to those used to diagnose the well-identified congenital and acquired color vision anomalies in older children, adults, and clinical patients. In this study, we modified a pseudoisochromatic test to make it more suitable for young infants. Using a forced-choice preferential looking procedure, 216 3-to-23-mo-old babies were tested with pseudoisochromatic targets that fell on either a red/green or a blue/yellow dichromatic confusion axis. For comparison, 220 color-normal adults and 22 color-deficient adults were also tested. Results showed that all babies and adults passed the blue/yellow target but many of the younger infants failed the red/green target, likely due to the interaction of lingering immaturities within the visual system and the small CIE vector distance within the red/green plate. However, older (17-23 mo) infants, color-normal adults, and color-defective adults all performed according to expectation. Interestingly, performance on the red/green plate was better among female infants, well exceeding the expected rate of genetic dimorphism between genders. Overall, with some further modification, the test is a promising tool for the detection of color vision anomalies in early human life. Copyright © 2014 Elsevier B.V. All rights reserved.
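
    A forced-choice preferential looking procedure ultimately yields k correct looks out of n two-alternative trials, so a pass criterion can be framed as rejecting chance performance. The sketch below uses a generic one-sided binomial criterion; the trial counts and thresholds shown are illustrative, not the ones used in this study:

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of guessing k or more correct."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def passes(k, n, alpha=0.05):
    """Pass if this many correct looks is unlikely under pure guessing."""
    return binom_p_at_least(k, n) < alpha

print(passes(9, 10))   # 9/10 correct is unlikely by chance -> True
print(passes(6, 10))   # 6/10 correct is consistent with guessing -> False
```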

  6. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human motion analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and analysis of biomedical signals and images for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems used in HMA, including their taxonomy: camera types, camera calibration, and camera configuration. The review focuses on evaluating camera-system considerations for HMA systems aimed specifically at biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners selecting a camera system for an HMA system in biomedical applications.

  7. Spatial Resolution, Grayscale, and Error Diffusion Trade-offs: Impact on Display System Design

    NASA Technical Reports Server (NTRS)

    Gille, Jennifer L. (Principal Investigator)

    1996-01-01

    We examine technology trade-offs related to grayscale resolution, spatial resolution, and error diffusion for tessellated display systems. We present new empirical results from our psychophysical study of these trade-offs and compare them to the predictions of a model of human vision.
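
    Of the three factors traded off here, error diffusion is the most algorithmic; the classic Floyd-Steinberg variant below shows how the quantization error at each pixel is pushed onto unprocessed neighbors so the local mean gray level is approximately preserved. This is a generic textbook sketch, not the specific diffusion scheme evaluated in the study:

```python
# Floyd-Steinberg error diffusion on a grayscale image with values in [0, 1].
def floyd_steinberg(img, levels=2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]        # work on a copy
    step = 1.0 / (levels - 1)            # spacing between output gray levels
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = round(old / step) * step   # quantize to the nearest level
            out[y][x] = new
            err = old - new
            # Distribute the error to the standard four unprocessed neighbors.
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    out[ny][nx] += err * wgt
    return out

flat = [[0.5] * 4 for _ in range(4)]
binary = floyd_steinberg(flat)   # a mix of 0.0s and 1.0s averaging near 0.5
```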

  8. Are visual peripheries forever young?

    PubMed

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  9. Effects of light on brain and behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brainard, G.C.

    1994-12-31

    It is obvious that light entering the eye permits the sensory capacity of vision. The human species is highly dependent on visual perception of the environment and consequently, the scientific study of vision and visual mechanisms is a centuries old endeavor. Relatively new discoveries are now leading to an expanded understanding of the role of light entering the eye: in addition to supporting vision, light has various nonvisual biological effects. Over the past thirty years, animal studies have shown that environmental light is the primary stimulus for regulating circadian rhythms, seasonal cycles, and neuroendocrine responses. As with all photobiological phenomena, the wavelength, intensity, timing and duration of a light stimulus is important in determining its regulatory influence on the circadian and neuroendocrine systems. Initially, the effects of light on rhythms and hormones were observed only in sub-human species. Research over the past decade, however, has confirmed that light entering the eyes of humans is a potent stimulus for controlling physiological rhythms. The aim of this paper is to examine three specific nonvisual responses in humans which are mediated by light entering the eye: light-induced melatonin suppression, light therapy for winter depression, and enhancement of nighttime performance. This will serve as a brief introduction to the growing database which demonstrates how light stimuli can influence physiology, mood and behavior in humans. Such information greatly expands our understanding of the human eye and will ultimately change our use of light in the human environment.

  10. Effects of light on brain and behavior

    NASA Technical Reports Server (NTRS)

    Brainard, George C.

    1994-01-01

    It is obvious that light entering the eye permits the sensory capacity of vision. The human species is highly dependent on visual perception of the environment and consequently, the scientific study of vision and visual mechanisms is a centuries old endeavor. Relatively new discoveries are now leading to an expanded understanding of the role of light entering the eye: in addition to supporting vision, light has various nonvisual biological effects. Over the past thirty years, animal studies have shown that environmental light is the primary stimulus for regulating circadian rhythms, seasonal cycles, and neuroendocrine responses. As with all photobiological phenomena, the wavelength, intensity, timing and duration of a light stimulus is important in determining its regulatory influence on the circadian and neuroendocrine systems. Initially, the effects of light on rhythms and hormones were observed only in sub-human species. Research over the past decade, however, has confirmed that light entering the eyes of humans is a potent stimulus for controlling physiological rhythms. The aim of this paper is to examine three specific nonvisual responses in humans which are mediated by light entering the eye: light-induced melatonin suppression, light therapy for winter depression, and enhancement of nighttime performance. This will serve as a brief introduction to the growing database which demonstrates how light stimuli can influence physiology, mood and behavior in humans. Such information greatly expands our understanding of the human eye and will ultimately change our use of light in the human environment.

  11. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.

    PubMed

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F

    2016-03-05

    In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future works.

  12. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review

    PubMed Central

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F.

    2016-01-01

    In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future works. PMID:26959030

  13. Smart Distributed Sensor Fields: Algorithms for Tactical Sensors

    DTIC Science & Technology

    2013-12-23

    Tasks ranging from detecting, identifying, and localizing/tracking interesting events to discarding irrelevant data and providing actionable intelligence currently require significant human supervision. ... view of the overall system. The main idea is to reduce the problem to the relevant data, and then reason intelligently over that data. This process

  14. A multiscale Markov random field model in wavelet domain for image segmentation

    NASA Astrophysics Data System (ADS)

    Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan

    2017-07-01

    The human vision system supports feature detection, learning, and selective attention, with properties of hierarchy and bidirectional connection realized by neural populations. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some image-processing functions of the vision system. For an input scene, our model provides a sparse representation using wavelet transforms and extracts the scene's topological organization using the MRF. In addition, the hierarchical property of the vision system is simulated using a pyramid framework. There are two information flows in our model: a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled simply by two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with these biologically inspired properties, our model can accomplish different image segmentation tasks, such as edge detection and region segmentation.
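
    The bottom-up, coarse-to-fine part of such a model can be illustrated with a single level of a 2D Haar pyramid, which produces the coarse approximation on which labels would then be estimated and refined. The MRF smoothing step itself is omitted; this toy sketch, with invented data, only shows the multiscale decomposition:

```python
# One coarse level of a 2D Haar pyramid: average each 2x2 block.
def haar_level(img):
    """Return the half-resolution approximation (2x2 block means)."""
    h, w = len(img), len(img[0])
    a = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            blk = (img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
            a[i // 2][j // 2] = sum(blk) / 4.0
    return a

img = [[1, 1, 9, 9],     # a bright region in the top-right corner
       [1, 1, 9, 9],
       [1, 1, 1, 1],
       [1, 1, 1, 1]]
print(haar_level(img))   # -> [[1.0, 9.0], [1.0, 1.0]]
```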

  15. 3D visual mechanism by neural networkings

    NASA Astrophysics Data System (ADS)

    Sugiyama, Shigeki

    2007-04-01

    Some computer vision systems are available on the market, but they fall far short of everyday use in applications such as security surveillance or recognition of a target object's behaviour. Such sensing of the surroundings requires recognizing a detailed description of an object, such as the distance to the object, its detailed figure, and its edges, for which the mechanisms of present recognition systems offer no clear picture. This paper therefore studies the mechanisms by which a pair of human eyes recognizes distance, an object's edges, and the object itself, in order to extract the basic essences of vision mechanisms. These basic mechanisms of object recognition are then simplified and logically extended for application to a computer vision system. Some results of these studies are presented in this paper.

  16. Airborne sensors for detecting large marine debris at sea.

    PubMed

    Veenstra, Timothy S; Churnside, James H

    2012-01-01

    The human eye is an excellent, general-purpose airborne sensor for detecting marine debris larger than 10 cm on or near the surface of the water. Coupled with the human brain, it can adjust for light conditions and sea-surface roughness, track persistence, differentiate color and texture, detect change in movement, and combine all of the available information to detect and identify marine debris. Matching this performance with computers and sensors is difficult at best. However, there are distinct advantages over the human eye and brain that sensors and computers can offer such as the ability to use finer spectral resolution, to work outside the spectral range of human vision, to control the illumination, to process the information in ways unavailable to the human vision system, to provide a more objective and reproducible result, to operate from unmanned aircraft, and to provide a permanent record that can be used for later analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. SMALL COLOUR VISION VARIATIONS AND THEIR EFFECT IN VISUAL COLORIMETRY,

    DTIC Science & Technology

    COLOR VISION, PERFORMANCE(HUMAN), TEST EQUIPMENT, CORRELATION TECHNIQUES, STATISTICAL PROCESSES, COLORS, ANALYSIS OF VARIANCE, AGING(MATERIALS), COLORIMETRY, BRIGHTNESS, ANOMALIES, PLASTICS, UNITED KINGDOM.

  18. Adaptive design lessons from professional architects

    NASA Astrophysics Data System (ADS)

    Geiger, Ray W.; Snell, J. T.

    1993-09-01

    Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. In pursuing better ways to design cellular automata and qualification classifiers in a design process, we have found surprising success in exploring certain more esoteric approaches, e.g., the vision of interdisciplinary artistic discovery in and around creative problem solving. Our research into vision and hybrid sensory systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and future-oriented design practice. Discussion covers case-based techniques of cognitive ergonomics, natural modeling sources, and an open architectural process of means/goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process.

  19. Object and Facial Recognition in Augmented and Virtual Reality: Investigation into Software, Hardware and Potential Uses

    NASA Technical Reports Server (NTRS)

    Schulte, Erin

    2017-01-01

    As augmented and virtual reality grow in popularity and more researchers focus on their development, other fields of technology have grown in the hope of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to create an intuitive, hands-free human-computer interaction (HCI) utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams, and other similar hardware, has shown potential in assisting with the development of an HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development, and other similar areas.

  20. Computer vision-based classification of hand grip variations in neurorehabilitation.

    PubMed

    Zariffa, José; Steeves, John D

    2011-01-01

    The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE
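
    As a stand-in for the paper's classifier, a minimal nearest-centroid rule over hand-shape feature vectors shows the discrimination task in miniature. The three-dimensional features (e.g., normalized finger-flexion values) and centroid values are invented for illustration, not taken from the study:

```python
# Assign a grip label by the nearest class centroid in feature space.
def nearest_centroid(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical centroids for the three grasp types studied in the paper.
centroids = {
    "cylindrical grasp": (0.9, 0.9, 0.9),   # all fingers flexed around object
    "lateral key grip":  (0.9, 0.2, 0.1),   # thumb pressed against index side
    "tip-to-tip pinch":  (0.8, 0.1, 0.0),   # thumb and index tips in contact
}
print(nearest_centroid((0.85, 0.8, 0.95), centroids))  # -> cylindrical grasp
```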

  1. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    NASA Astrophysics Data System (ADS)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to ‘complex’ visual stimuli. We demonstrate that the fractal temporal structure of eye-movement dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results show that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found a coupling between the fractality of the image, the EEG, and the fixational eye movements. The relationship observed in this research can be further investigated and applied to the treatment of different vision disorders.
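
    Quantifying the "fractality" of a fixational eye-movement trace requires a fractal-dimension estimator for time series; Higuchi's method is one standard choice, sketched below (the record does not specify which estimator the authors used). A smooth ramp has dimension 1, while rougher signals approach 2:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi estimate of the fractal dimension of a 1-D time series."""
    n = len(x)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            num = (n - 1 - m) // k      # increments available at lag k
            if num < 1:
                continue
            s = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                    for i in range(1, num + 1))
            lengths.append(s * (n - 1) / (num * k * k))
        logk.append(math.log(1.0 / k))
        logl.append(math.log(sum(lengths) / len(lengths)))
    # The dimension is the least-squares slope of log L(k) vs log(1/k).
    mk = sum(logk) / len(logk)
    ml = sum(logl) / len(logl)
    return (sum((a - mk) * (b - ml) for a, b in zip(logk, logl))
            / sum((a - mk) ** 2 for a in logk))

print(round(higuchi_fd([float(i) for i in range(200)]), 2))  # ramp -> 1.0
```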

  2. Directly Comparing Computer and Human Performance in Language Understanding and Visual Reasoning.

    ERIC Educational Resources Information Center

    Baker, Eva L.; And Others

    Evaluation models are being developed for assessing artificial intelligence (AI) systems in terms of similar performance by groups of people. Natural language understanding and vision systems are the areas of concentration. In simplest terms, the goal is to norm a given natural language system's performance on a sample of people. The specific…

  3. A Framework for People Re-Identification in Multi-Camera Surveillance Systems

    ERIC Educational Resources Information Center

    Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud

    2017-01-01

    People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance systems with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…

  4. Optimizing the temporal dynamics of light to human perception.

    PubMed

    Rieiro, Hector; Martinez-Conde, Susana; Danielson, Andrew P; Pardo-Vazquez, Jose L; Srivastava, Nishit; Macknik, Stephen L

    2012-11-27

    No previous research has tuned the temporal characteristics of light-emitting devices to enhance brightness perception in human vision, despite the potential for significant power savings. The effect of stimulus duration on perceived contrast is unclear, due to the contradiction between the models proposed by Bloch and by Broca and Sulzer over 100 years ago. We propose that the discrepancy is accounted for by the observer's "inherent expertise bias," a type of experimental bias in which the observer's life-long experience with interpreting the sensory world overcomes perceptual ambiguities and biases experimental outcomes. By controlling for this and all other known biases, we show that perceived contrast peaks at durations of 50-100 ms, and we conclude that the Broca-Sulzer effect best describes human temporal vision. We also show that the plateau in perceived brightness with stimulus duration, described by Bloch's law, is a previously uncharacterized type of temporal brightness constancy that, like classical constancy effects, serves to enhance object recognition across varied lighting conditions in natural vision, although this is a constancy effect that normalizes perception across temporal modulation conditions. A practical outcome of this study is that tuning light-emitting devices to match the temporal dynamics of the human visual system's temporal response function will result in significant power savings.

  5. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. The human brain is found to emulate knowledge structures in the form of network-symbolic models, which marks an important paradigm shift in our knowledge of the brain, from neural networks to "cortical software". Symbols, predicates, and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes, such as clustering, perceptual grouping, and separation of figure from ground, are special kinds of graph/network transformations: they convert low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  6. Science Opportunities Enabled by NASA's Constellation System: Interim Report

    NASA Astrophysics Data System (ADS)

    Committee On Science Opportunities Enabled By Nasa'S Constellation System, National Research Council

    To begin implementation of the Vision for Space Exploration (recently renamed "United States Space Exploration Policy"), NASA has begun development of new launch vehicles and a human-carrying spacecraft that are collectively called the Constellation System. In November 2007, NASA asked the NRC to evaluate the potential for the Constellation System to enable new space science opportunities. For this interim report, 11 existing "Vision Mission" studies of advanced space science mission concepts inspired by earlier NASA forward-looking studies were evaluated. The focus was to assess the concepts and group them into two categories: more deserving or less deserving of future study. This report presents a description of the Constellation System and its opportunities for enabling new space science opportunities, and a systematic analysis of the 11 Vision Mission studies. For the final report, the NRC issued a request for information to the relevant communities to obtain ideas for other mission concepts that will be assessed by the study committee, and several issues addressed only briefly in the interim report will be explored more fully.

  7. An analog VLSI chip emulating polarization vision of Octopus retina.

    PubMed

    Momeni, Massoud; Titus, Albert H

    2006-01-01

    Biological systems provide a wealth of information that forms the basis for human-made artificial systems. In this work, the visual system of Octopus is investigated and its polarization sensitivity mimicked. While in the actual Octopus retina polarization vision is based mainly on the orthogonal arrangement of its photoreceptors, our implementation uses a birefringent micropolarizer made of YVO4 and mounted on a CMOS chip with neuromorphic circuitry to process linearly polarized light. Arranged in an 8 x 5 array with two photodiodes per pixel, each typically consuming 10 microW, this circuitry mimics both the functionality of individual Octopus retina cells, by computing the state of polarization, and the interconnection of these cells, through a bias-controllable resistive network.
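
    With two photodiodes behind orthogonal analyzers, each pixel can report a polarization-contrast signal: the normalized difference of the two channel intensities (the Stokes parameter s1/s0 along the analyzer axes). Recovering the full angle of polarization would require additional analyzer orientations. The sketch below is a generic model of that computation, not the chip's circuit:

```python
# Polarization contrast from two orthogonally analyzed intensities.
def polarization_contrast(i0, i90):
    """Normalized difference (i0 - i90) / (i0 + i90); 0 for unpolarized light."""
    total = i0 + i90
    return 0.0 if total == 0 else (i0 - i90) / total

# Fully linearly polarized light aligned with the 0-degree analyzer:
print(polarization_contrast(1.0, 0.0))   # -> 1.0
# Unpolarized light splits evenly between the two channels:
print(polarization_contrast(0.5, 0.5))   # -> 0.0
```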

  8. Ultrashort-pulse lasers treating the crystalline lens: will they cause vision-threatening cataract? (An American Ophthalmological Society thesis).

    PubMed

    Krueger, Ronald R; Uy, Harvey; McDonald, Jared; Edwards, Keith

    2012-12-01

    To demonstrate that ultrashort-pulse laser treatment in the crystalline lens does not form a focal, progressive, or vision-threatening cataract. An Nd:vanadate picosecond laser (10 ps) with prototype delivery system was used. Primates: 11 rhesus monkey eyes were prospectively treated at the University of Wisconsin (energy 25-45 μJ/pulse and 2.0-11.3M pulses per lens). Analysis of lens clarity and fundus imaging was assessed postoperatively for up to 4½ years (5 eyes). Humans: 80 presbyopic patients were prospectively treated in one eye at the Asian Eye Institute in the Philippines (energy 10 μJ/pulse and 0.45-1.45M pulses per lens). Analysis of lens clarity, best-corrected visual acuity, and subjective symptoms was performed at 1 month, prior to elective lens extraction. Bubbles were immediately seen, with resolution within the first 24 to 48 hours. Afterwards, the laser pattern could be seen with faint, noncoalescing, pinpoint micro-opacities in both primate and human eyes. In primates, long-term follow-up at 4½ years showed no focal or progressive cataract, except in 2 eyes with preexisting cataract. In humans, <25% of patients with central sparing (0.75 and 1.0 mm radius) lost 2 or more lines of best spectacle-corrected visual acuity at 1 month, and >70% reported acceptable or better distance vision and no or mild symptoms. Meanwhile, >70% without sparing (0 and 0.5 mm radius) lost 2 or more lines, and most reported poor or severe vision and symptoms. Focal, progressive, and vision-threatening cataracts can be avoided by lowering the laser energy, avoiding prior cataract, and sparing the center of the lens.

  9. Ultrashort-Pulse Lasers Treating the Crystalline Lens: Will They Cause Vision-Threatening Cataract? (An American Ophthalmological Society Thesis)

    PubMed Central

    Krueger, Ronald R.; Uy, Harvey; McDonald, Jared; Edwards, Keith

    2012-01-01

    Purpose: To demonstrate that ultrashort-pulse laser treatment in the crystalline lens does not form a focal, progressive, or vision-threatening cataract. Methods: An Nd:vanadate picosecond laser (10 ps) with prototype delivery system was used. Primates: 11 rhesus monkey eyes were prospectively treated at the University of Wisconsin (energy 25–45 μJ/pulse and 2.0–11.3M pulses per lens). Analysis of lens clarity and fundus imaging was assessed postoperatively for up to 4½ years (5 eyes). Humans: 80 presbyopic patients were prospectively treated in one eye at the Asian Eye Institute in the Philippines (energy 10 μJ/pulse and 0.45–1.45M pulses per lens). Analysis of lens clarity, best-corrected visual acuity, and subjective symptoms was performed at 1 month, prior to elective lens extraction. Results: Bubbles were immediately seen, with resolution within the first 24 to 48 hours. Afterwards, the laser pattern could be seen with faint, noncoalescing, pinpoint micro-opacities in both primate and human eyes. In primates, long-term follow-up at 4½ years showed no focal or progressive cataract, except in 2 eyes with preexisting cataract. In humans, <25% of patients with central sparing (0.75 and 1.0 mm radius) lost 2 or more lines of best spectacle-corrected visual acuity at 1 month, and >70% reported acceptable or better distance vision and no or mild symptoms. Meanwhile, >70% without sparing (0 and 0.5 mm radius) lost 2 or more lines, and most reported poor or severe vision and symptoms. Conclusions: Focal, progressive, and vision-threatening cataracts can be avoided by lowering the laser energy, avoiding prior cataract, and sparing the center of the lens. PMID:23818739

  10. What can mice tell us about how vision works?

    PubMed Central

    Huberman, Andrew D.; Niell, Cristopher M.

    2012-01-01

    Understanding the neural basis of visual perception is a longstanding fundamental goal of neuroscience. Historically, most vision studies were carried out on humans, macaque monkeys and cats. Over the last five years, however, a growing number of researchers have begun using mice to parse the mechanisms underlying visual processing. The rationale is that, despite having relatively poor acuity, mice are unmatched in the variety and sophistication of tools available to label, monitor and manipulate specific cell types and circuits. In this review, we discuss recent advances in understanding the mouse visual system at the anatomical, receptive-field and perceptual levels, focusing on the opportunities and constraints these features provide toward the goal of understanding how vision works. PMID:21840069

  11. High-Resolution Adaptive Optics Test-Bed for Vision Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilks, S C; Thompson, C A; Olivier, S S

    2001-09-27

    We discuss the design and implementation of a low-cost, high-resolution adaptive optics test-bed for vision research. It is well known that high-order aberrations in the human eye reduce optical resolution and limit visual acuity. However, the effects of aberration-free eyesight on vision are only now beginning to be studied using adaptive optics to sense and correct the aberrations in the eye. We are developing a high-resolution adaptive optics system for this purpose using a Hamamatsu Parallel Aligned Nematic Liquid Crystal Spatial Light Modulator. Phase-wrapping is used to extend the effective stroke of the device, and the wavefront sensing and wavefront correction are done at different wavelengths. Issues associated with these techniques will be discussed.
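Phase wrapping of this kind is straightforward to sketch; the following toy example (an arbitrary phase ramp, not the test-bed's software) shows how a correction exceeding the SLM's roughly one-wave stroke is folded into [0, 2π):

```python
import numpy as np

def wrap_phase(phi):
    """Wrap an arbitrary phase map into [0, 2*pi) so it fits within
    the roughly one-wave stroke of a liquid-crystal SLM."""
    return np.mod(phi, 2 * np.pi)

# A correction ramp demanding five waves of stroke...
phi = np.linspace(0.0, 10 * np.pi, 100)
wrapped = wrap_phase(phi)
# ...is folded, Fresnel-lens style, into the device's range.
assert wrapped.max() < 2 * np.pi
```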

  12. Polish Experience of Implementing Vision Zero.

    PubMed

    Jamroz, Kazimierz; Michalski, Lech; Żukowska, Joanna

    2017-01-01

    The aim of this study is to present an outline and the principles of Poland's road safety strategic programming as it has developed over the last 25 years since the first Integrated Road Safety System with a strong focus on Sweden's "Vision Zero". Countries that have successfully improved road safety have done so by following strategies centred around the idea that people are not infallible and will make mistakes. The human body can only take a limited amount of energy upon impact, so roads, vehicles and road safety programmes must be designed to address this. The article gives a summary of Poland's experience of programming preventative measures that have "Vision Zero" as their basis. It evaluates the effectiveness of relevant programmes.

  13. Multiscale Enaction Model (MEM): the case of complexity and “context-sensitivity” in vision

    PubMed Central

    Laurent, Éric

    2014-01-01

    I review the data on human visual perception that reveal the critical role played by non-visual contextual factors influencing visual activity. The global perspective that progressively emerges reveals that vision is sensitive to multiple couplings with other systems whose nature and levels of abstraction in science are highly variable. Contrary to some views where vision is immersed in modular hard-wired modules, rather independent from higher-level or other non-cognitive processes, converging data gathered in this article suggest that visual perception can be theorized in the larger context of biological, physical, and social systems with which it is coupled, and through which it is enacted. Therefore, any attempt to model complexity and multiscale couplings, or to develop a complex synthesis in the fields of mind, brain, and behavior, shall involve a systematic empirical study of both connectedness between systems or subsystems, and the embodied, multiscale and flexible teleology of subsystems. The conceptual model (Multiscale Enaction Model [MEM]) that is introduced in this paper finally relates empirical evidence gathered from psychology to biocomputational data concerning the human brain. Both psychological and biocomputational descriptions of MEM are proposed in order to help fill in the gap between scales of scientific analysis and to provide an account for both the autopoiesis-driven search for information, and emerging perception. PMID:25566115

  14. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation..., Enhanced Flight Vision/ Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...

  15. 75 FR 38863 - Tenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-06

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight...

  16. Thimble microscope system

    NASA Astrophysics Data System (ADS)

    Kamal, Tahseen; Rubinstein, Jaden; Watkins, Rachel; Cen, Zijian; Kong, Gary; Lee, W. M.

    2016-12-01

    Wearable computing devices, e.g. Google Glass and smart watches, embody a new human design frontier where technology interfaces seamlessly with human gestures. During examination of any subject in the field (clinic, surgery, agriculture, field survey, water collection), our sensory peripherals (touch and vision) often go hand-in-hand. The sensitivity and maneuverability of the human fingers are supported by a dense distribution of nerve cells, which allows fine motor manipulation over a range of complex surfaces that are often out of sight. Our sight (or naked vision), on the other hand, is generally restricted to the line of sight and is ill-suited to viewing around corners. Hence, conventional imaging methods often resort to complex light-guide designs (periscopes, endoscopes, etc.) to navigate over obstructed surfaces. Using modular design strategies, we constructed a prototype miniature microscope system incorporated onto a wearable fixture (thimble). This unique platform allows users to maneuver around a sample and take high-resolution microscopic images. In this paper, we provide an exposition of the methods used to achieve thimble microscopy: microscope lens fabrication, thimble design, and integration of a miniature camera and liquid crystal display.

  17. First-in-Human Trial of a Novel Suprachoroidal Retinal Prosthesis

    PubMed Central

    Ayton, Lauren N.; Blamey, Peter J.; Guymer, Robyn H.; Luu, Chi D.; Nayagam, David A. X.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Yeoh, Jonathan; McCombe, Mark F.; Briggs, Robert J.; Opie, Nicholas L.; Villalobos, Joel; Dimitrov, Peter N.; Varsamidis, Mary; Petoe, Matthew A.; McCarthy, Chris D.; Walker, Janine G.; Barnes, Nick; Burkitt, Anthony N.; Williams, Chris E.; Shepherd, Robert K.; Allen, Penelope J.

    2014-01-01

    Retinal visual prostheses (“bionic eyes”) have the potential to restore vision to blind or profoundly vision-impaired patients. The medical bionic technology used to design, manufacture and implant such prostheses is still in its relative infancy, with various technologies and surgical approaches being evaluated. We hypothesised that a suprachoroidal implant location (between the sclera and choroid of the eye) would provide significant surgical and safety benefits for patients, allowing them to maintain preoperative residual vision as well as gaining prosthetic vision input from the device. This report details the first-in-human Phase 1 trial to investigate the use of retinal implants in the suprachoroidal space in three human subjects with end-stage retinitis pigmentosa. The success of the suprachoroidal surgical approach and its associated safety benefits, coupled with twelve-month post-operative efficacy data, holds promise for the field of vision restoration. Trial Registration Clinicaltrials.gov NCT01603576 PMID:25521292

  18. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and the separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, allowing the creation of intelligent computer vision systems for design and manufacturing.

  19. Speed, spatial, and temporal tuning of rod and cone vision in mouse.

    PubMed

    Umino, Yumiko; Solessio, Eduardo; Barlow, Robert B

    2008-01-02

    Rods and cones subserve mouse vision over a 100 million-fold range of light intensity (-6 to 2 log cd/m²). Rod pathways tune vision to the temporal frequency of stimuli (peak, 0.75 Hz) and cone pathways to their speed (peak, approximately 12 degrees/s). Both pathways tune vision to the spatial components of stimuli (0.064-0.128 cycles/degree). The specific photoreceptor contributions were determined by two-alternative, forced-choice measures of contrast thresholds for optomotor responses of C57BL/6J mice with normal vision, Gnat2(cpfl3) mice without functional cones, and Gnat1-/- mice without functional rods. Gnat2(cpfl3) mice (threshold, -6.0 log cd/m²) cannot see rotating gratings above -2.0 log cd/m² (photopic vision), and Gnat1-/- mice (threshold, -4.0 log cd/m²) are blind below -4.0 log cd/m² (scotopic vision). Both genotypes can see in the transitional mesopic range (-4.0 to -2.0 log cd/m²). Mouse rod and cone sensitivities are similar to those of humans. This parametric study characterizes the functional properties of the mouse visual system, revealing the rod and cone contributions to contrast sensitivity and to the temporal processing of visual stimuli.
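A two-alternative forced-choice threshold measurement of the sort described above can be simulated with a simple staircase; this is a generic 1-up/2-down sketch with an idealized simulated observer, not the authors' psychophysics code:

```python
import random

def two_afc_staircase(true_threshold, start=1.0, step=0.05,
                      trials=400, seed=0):
    """Toy 1-up/2-down two-alternative forced-choice staircase.
    The simulated observer almost always answers correctly above its
    contrast threshold and guesses (50% correct) below it, so the
    staircase hovers near the threshold. Two correct responses in a
    row make the stimulus harder; one error makes it easier."""
    rng = random.Random(seed)
    contrast = start
    correct_run = 0
    for _ in range(trials):
        p_correct = 0.99 if contrast >= true_threshold else 0.5
        if rng.random() < p_correct:
            correct_run += 1
            if correct_run == 2:                 # 2 correct -> lower contrast
                contrast = max(contrast - step, 0.0)
                correct_run = 0
        else:                                    # 1 wrong -> raise contrast
            contrast += step
            correct_run = 0
    return contrast

# The staircase should settle near the simulated observer's threshold.
print(two_afc_staircase(true_threshold=0.3))
```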

  20. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to effectively combine image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as to research the concept of adaptive image coding.

  1. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking require a tedious calibration process in which subjects must fixate on one or several specific points in space. This is difficult for subjects to do, however, especially for children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for the automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. A Gaussian mixture model (GMM) is then employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust and sufficient for the measurement of visual acuity in human infants.
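The PCCR gaze feature reduces to simple vector arithmetic; a minimal sketch follows, with hypothetical pixel coordinates and a generic second-order polynomial expansion rather than the paper's exact formulation:

```python
import numpy as np

def pccr_feature(pupil_center, glint):
    """Compute the pupil center-cornea reflection (PCCR) vector -- the
    offset from the corneal glint to the pupil center in image pixels --
    and expand it into generic second-order polynomial terms, as is
    commonly done before fitting or classifying gaze behavior."""
    dx, dy = (np.asarray(pupil_center) - np.asarray(glint)).astype(float)
    return np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])

# Hypothetical detections: pupil at (320, 240), glint at (310, 236).
feat = pccr_feature((320, 240), (310, 236))
print(feat)   # polynomial expansion of the (10, 4) PCCR vector
```

A classifier such as the GMM described in the abstract would then operate on these feature vectors rather than on raw image coordinates.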

  2. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  3. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    NASA Astrophysics Data System (ADS)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food in high demand by humans. Human operators cannot grade eggs perfectly and continuously. Instead of an egg grading system based on weight measurement, an automatic system using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that more egg classes would change when using shape parameters than when using weight. This paper presents a comparison of egg classification by the two above-mentioned methods. First, 120 images of chicken eggs of various grades (A-D) produced in Malaysia are captured. The egg images are then processed using image pre-processing techniques such as cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed, with a k-nearest-neighbour classifier used in the classification process. Two approaches, namely supervised learning (using weight-based grades assigned by the egg supplier) and unsupervised learning (using shape-based grades assigned by ourselves), are used in the experiment. Clustering results reveal many changes in egg classes after shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, while that using weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision is better served by shape-based features, since it operates on images, whereas the weight parameter is more suitable for a weight-based grading system.
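The classification step can be illustrated with a minimal k-nearest-neighbour sketch; the features and grade labels below are invented for illustration and are not the paper's 120-egg dataset:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Minimal k-nearest-neighbour classifier over egg shape features
    (e.g. axis lengths, area, perimeter): find the k closest training
    eggs by Euclidean distance and return the majority grade."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical [major_axis_mm, minor_axis_mm] features for grades A and B.
X = np.array([[60, 45], [62, 46], [55, 42], [54, 41]], dtype=float)
y = np.array(["A", "A", "B", "B"])
print(knn_predict(X, y, np.array([61.0, 45.5])))   # nearest eggs are grade A
```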

  4. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles

    PubMed Central

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-01-01

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional–integral–derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the RMS errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle. PMID:27110793
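The distance-keeping loop described above can be sketched with a generic PID controller; the gains, 5 m target gap, and kinematic simulation below are illustrative assumptions, and only the 0.3 m/s leader speed comes from the abstract:

```python
class PID:
    """Textbook proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# The follower adjusts its speed to hold a set gap behind the leader.
dt, target_gap = 0.1, 5.0                    # seconds, metres (assumed)
pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=dt)
leader_pos, follower_pos = 10.0, 0.0
for _ in range(600):                         # 60 s of simulated driving
    leader_pos += 0.3 * dt                   # leader at 0.3 m/s, as in the paper
    gap = leader_pos - follower_pos
    speed = max(0.0, 0.3 + pid.update(gap - target_gap))
    follower_pos += speed * dt
print(round(leader_pos - follower_pos, 2))   # gap settles near 5.0 m
```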

  5. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.

    PubMed

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-04-22

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the RMS errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.

  6. Fundamentals and advances in the development of remote welding fabrication systems

    NASA Technical Reports Server (NTRS)

    Agapakis, J. E.; Masubuchi, K.; Von Alt, C.

    1986-01-01

    Operational and man-machine issues for welding underwater, in outer space, and at other remote sites are investigated, and recent process developments are described. Probable remote welding missions are classified, and the essential characteristics of fundamental remote welding tasks are analyzed. Various possible operational modes for remote welding fabrication are identified, and appropriate roles for humans and machines are suggested. Human operator performance in remote welding fabrication tasks is discussed, and recent advances in the development of remote welding systems are described, including packaged welding systems, stud welding systems, remotely operated welding systems, and vision-aided remote robotic welding and autonomous welding systems.

  7. Utilization of the Space Vision System as an Augmented Reality System For Mission Operations

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Bowen, Charles

    2003-01-01

    Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. 
It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.

  8. NASA Human Health and Performance Strategy

    NASA Technical Reports Server (NTRS)

    Davis, Jeffrey R.

    2012-01-01

    In May 2007, what was then the Space Life Sciences Directorate, issued the 2007 Space Life Sciences Strategy for Human Space Exploration. In January 2012, leadership and key directorate personnel were once again brought together to assess the current and expected future environment against its 2007 Strategy and the Agency and Johnson Space Center goals and strategies. The result was a refined vision and mission, and revised goals, objectives, and strategies. One of the first changes implemented was to rename the directorate from Space Life Sciences to Human Health and Performance to better reflect our vision and mission. The most significant change in the directorate from 2007 to the present is the integration of the Human Research Program and Crew Health and Safety activities. Subsequently, the Human Health and Performance Directorate underwent a reorganization to achieve enhanced integration of research and development with operations to better support human spaceflight and International Space Station utilization. These changes also enable a more effective and efficient approach to human system risk mitigation. Since 2007, we have also made significant advances in external collaboration and implementation of new business models within the directorate and the Agency, and through two newly established virtual centers, the NASA Human Health and Performance Center and the Center of Excellence for Collaborative Innovation. Our 2012 Strategy builds upon these successes to address the Agency's increased emphasis on societal relevance and being a leader in research and development and innovative business and communications practices. The 2012 Human Health and Performance Vision is to lead the world in human health and performance innovations for life in space and on Earth. Our mission is to enable optimization of human health and performance throughout all phases of spaceflight. All HHPD functions are ultimately aimed at achieving this mission.
Our activities enable mission success, optimizing human health and productivity in space before, during, and after the actual spaceflight experience of our crews, and include support for ground-based functions. Many of our spaceflight innovations also provide solutions for terrestrial challenges, thereby enhancing life on Earth. Our strategic goals are aimed at leading human exploration and ISS utilization, leading human health and performance internationally, excelling in management and advancement of innovations in health and human system integration, and expanding relevance to life on Earth and creating enduring support and enthusiasm for space exploration.

  9. NASA Human Health and Performance Strategy

    NASA Technical Reports Server (NTRS)

    Davis, Jeffrey R.

    2012-01-01

    In May 2007, what was then the Space Life Sciences Directorate, issued the 2007 Space Life Sciences Strategy for Human Space Exploration. In January 2012, leadership and key directorate personnel were once again brought together to assess the current and expected future environment against its 2007 Strategy and the Agency and Johnson Space Center goals and strategies. The result was a refined vision and mission, and revised goals, objectives, and strategies. One of the first changes implemented was to rename the directorate from Space Life Sciences to Human Health and Performance to better reflect our vision and mission. The most significant change in the directorate from 2007 to the present is the integration of the Human Research Program and Crew Health and Safety activities. Subsequently, the Human Health and Performance Directorate underwent a reorganization to achieve enhanced integration of research and development with operations to better support human spaceflight and International Space Station utilization. These changes also enable a more effective and efficient approach to human system risk mitigation. Since 2007, we have also made significant advances in external collaboration and implementation of new business models within the directorate and the Agency, and through two newly established virtual centers, the NASA Human Health and Performance Center and the Center of Excellence for Collaborative Innovation. Our 2012 Strategy builds upon these successes to address the Agency's increased emphasis on societal relevance and being a leader in research and development and innovative business and communications practices. The 2012 Human Health and Performance Vision is to lead the world in human health and performance innovations for life in space and on Earth. Our mission is to enable optimization of human health and performance throughout all phases of spaceflight. All HH&P functions are ultimately aimed at achieving this mission. 
Our activities enable mission success, optimizing human health and productivity in space before, during, and after the actual spaceflight experience of our crews, and include support for ground-based functions. Many of our spaceflight innovations also provide solutions for terrestrial challenges, thereby enhancing life on Earth. Our strategic goals are aimed at leading human exploration and ISS utilization, leading human health and performance internationally, excelling in management and advancement of innovations in health and human system integration, and expanding relevance to life on Earth and creating enduring support and enthusiasm for space exploration.

  10. Residual perception of biological motion in cortical blindness.

    PubMed

    Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto

    2016-12-01

    From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage to high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two-alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion when top-down information is provided is important for at least two reasons. First, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Second, our findings reinforce the view that the neural network processing biological motion can operate efficiently despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision.
Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Planetary Science Training for NASA's Astronauts: Preparing for Future Human Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Bleacher, J. E.; Evans, C. A.; Graff, T. G.; Young, K. E.; Zeigler, R.

    2017-02-01

    Astronauts selected in 2017 and in future years will carry out in situ planetary science research during exploration of the solar system. Training to enable this goal is underway and is flexible to accommodate an evolving planetary science vision.

  12. Toward detection of marine vehicles on horizon from buoy camera

    NASA Astrophysics Data System (ADS)

    Fefilatyev, Sergiy; Goldgof, Dmitry B.; Langebrake, Lawrence

    2007-10-01

    This paper presents a new technique for automatic detection of marine vehicles in the open sea from a buoy camera system using a computer vision approach. Users of such a system include border guards, the military, port safety and flow management, and sanctuary protection personnel. The system is intended to work autonomously, taking images of the surrounding ocean surface and analyzing them for the presence of marine vehicles. The goal of the system is to detect an approximate window around the ship and prepare the small image for transmission and human evaluation. The proposed computer vision-based algorithm combines a horizon detection method with edge detection and post-processing. A dataset of 100 images is used to evaluate the performance of the proposed technique. We discuss promising results of ship detection and suggest necessary improvements for achieving better performance.
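
    The pipeline described above (find the sky/sea boundary, then look for edge activity near it) can be sketched in a few lines. This is an illustrative toy, not the authors' algorithm: the horizon is taken as the row with the largest drop in mean brightness, and ship candidates are columns with strong horizontal gradients just below it.

```python
# Toy horizon-based ship detection on a synthetic grayscale image
# (nested lists of 0-255 intensities). Assumed heuristics, not the paper's code.

def detect_horizon_row(image):
    """Return the row index with the largest drop in mean brightness."""
    row_means = [sum(row) / len(row) for row in image]
    drops = [row_means[r] - row_means[r + 1] for r in range(len(row_means) - 1)]
    return max(range(len(drops)), key=lambda r: drops[r])

def detect_ship_columns(image, horizon, threshold=40):
    """Flag columns with a strong horizontal edge just below the horizon."""
    row = image[horizon + 1]
    return [c for c in range(1, len(row)) if abs(row[c] - row[c - 1]) > threshold]

# Synthetic 6x8 scene: bright sky (200) over dark sea (50), with a ship (120)
# sitting on the horizon line in columns 3-4.
sky, sea, ship = 200, 50, 120
image = [[sky] * 8 for _ in range(3)] + [[sea] * 8 for _ in range(3)]
for c in (3, 4):
    image[3][c] = ship

horizon = detect_horizon_row(image)
ships = detect_ship_columns(image, horizon)
print(horizon, ships)  # horizon row 2; edge columns 3 and 5 bound the ship
```

    A real system would replace the brightness-drop heuristic with robust line fitting and the gradient test with a proper edge detector plus post-processing, as the abstract indicates.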

  13. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    PubMed

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  14. Advances in understanding the molecular basis of the first steps in color vision

    PubMed Central

    Hofmann, Lukas; Palczewski, Krzysztof

    2015-01-01

    Serving as one of our primary environmental inputs, vision is the most sophisticated sensory system in humans. Here, we present recent findings derived from energetics, genetics and physiology that provide a more advanced understanding of color perception in mammals. Energetics of cis–trans isomerization of 11-cis-retinal accounts for color perception in the narrow region of the electromagnetic spectrum and how human eyes can absorb light in the near infrared (IR) range. Structural homology models of visual pigments reveal complex interactions of the protein moieties with the light sensitive chromophore 11-cis-retinal and that certain color blinding mutations impair secondary structural elements of these G protein-coupled receptors (GPCRs). Finally, we identify unsolved critical aspects of color tuning that require future investigation. PMID:26187035

  15. The Robotic Lunar Exploration Program (RLEP): An Introduction to the Goals, Approach, and Architecture

    NASA Technical Reports Server (NTRS)

    Watzin, James G.; Burt, Joseph; Tooley, Craig

    2004-01-01

    The Vision for Space Exploration calls for undertaking lunar exploration activities to enable sustained human and robotic exploration of Mars and beyond, including more distant destinations in the solar system. In support of this vision, the Robotic Lunar Exploration Program (RLEP) is expected to execute a series of robotic missions to the Moon, starting in 2008, in order to pave the way for further human space exploration. This paper will give an introduction to the RLEP program office, its role and its goals, and the approach it is taking to executing the charter of the program. The paper will also discuss candidate architectures that are being studied as a framework for defining the RLEP missions and the context in which they will evolve.

  16. Robotic lunar exploration: Architectures, issues and options

    NASA Astrophysics Data System (ADS)

    Mankins, John C.; Valerani, Ernesto; Della Torre, Alberto

    2007-06-01

    The US ‘vision for space exploration’ articulated at the beginning of 2004 encompasses a broad range of human and robotic space missions, including missions to the Moon, Mars and destinations beyond. It establishes clear goals and objectives, yet sets equally clear budgetary ‘boundaries’ by stating firm priorities, including ‘tough choices’ regarding current major NASA programs. The new vision establishes as policy the goals of pursuing commercial and international collaboration in realizing future space exploration missions. Also, the policy envisions that advances in human and robotic mission technologies will play a key role—both as enabling and as a major public benefit that will result from implementing that vision. In pursuing future international space exploration goals, the exploration of the Moon during the coming decades represents a particularly appealing objective. The Moon provides a unique venue for exploration and discovery—including the science of the Moon (e.g., geological studies), science from the Moon (e.g., astronomical observatories), and science on the Moon (including both basic research, such as biological laboratory science, and applied research and development, such as the use of the Moon as a test bed for later exploration). The Moon may also offer long-term opportunities for utilization—including Earth observing applications and commercial developments. During the coming decade, robotic lunar exploration missions will play a particularly important role, both in their own right and as precursors to later, more ambitious human and robotic exploration and development efforts. The following paper discusses some of the issues and opportunities that may arise in establishing plans for future robotic lunar exploration. 
Particular emphasis is placed on four specific elements of future robotic infrastructure: Earth-Moon in-space transportation systems; lunar orbiters; lunar descent and landing systems; and systems for long-range transport on the Moon.

  17. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

  18. Autonomous spacecraft landing through human pre-attentive vision.

    PubMed

    Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E

    2012-06-01

    In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing spacecraft with high degrees of autonomy is a challenge for future space missions. Up to the present, major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making processes in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft with optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence and timing of the retargeting.
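
    The core idea (pick an intrinsically salient site and retarget toward it if it is reachable) can be illustrated with a toy center-surround saliency map. This is an assumption-laden sketch, not the paper's pre-attentive model: saliency is the absolute difference between a cell and the mean of its 8-neighbourhood, and the most salient reachable cell wins.

```python
# Toy saliency-driven retargeting on a small intensity grid.
# Illustrative heuristics only; the paper uses a richer pre-attentive model
# plus fuel-optimality and safety constraints.

def saliency(grid, r, c):
    """Absolute difference between a cell and the mean of its 8-neighbourhood."""
    rows, cols = len(grid), len(grid[0])
    neigh = [grid[i][j]
             for i in range(max(0, r - 1), min(rows, r + 2))
             for j in range(max(0, c - 1), min(cols, c + 2))
             if (i, j) != (r, c)]
    return abs(grid[r][c] - sum(neigh) / len(neigh))

def retarget(grid, current, reachable):
    """Pick the most salient cell among those still reachable; keep the
    current target if nothing reachable beats it."""
    best = max(reachable, key=lambda rc: saliency(grid, *rc))
    return best if saliency(grid, *best) > saliency(grid, *current) else current

terrain = [
    [5, 5, 5, 5],
    [5, 5, 9, 5],   # a bright, salient site at (1, 2)
    [5, 5, 5, 5],
]
target = retarget(terrain, current=(0, 0), reachable=[(0, 1), (1, 2), (2, 3)])
print(target)  # the salient site (1, 2) becomes the new landing target
```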

  19. A Return to the Human in Humanism: A Response to Hansen's Humanistic Vision

    ERIC Educational Resources Information Center

    Lemberger, Matthew E.

    2012-01-01

    In his extension of the humanistic vision, Hansen (2012) recommends that counseling practitioners and scholars adopt operations that are consistent with his definition of a multiple-perspective philosophy. Alternatively, the author of this article believes that Hansen has reduced the capacity of the human to interpret meaning through quantitative…

  20. The Science Goals of NASA's Exploration Initiative

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.; Grunsfeld, John

    2004-01-01

    The recently released policy directive, "A Renewed Spirit of Discovery: The President's Vision for U.S. Space Exploration," seeks to advance U.S. scientific, security and economic interests through a program of space exploration which will robotically explore the solar system and extend human presence to the Moon, Mars and beyond. NASA's implementation of this vision will be guided by compelling questions of scientific and societal importance, including the origin of our Solar System and the search for life beyond Earth. The Exploration Roadmap identifies four key targets: the Moon, Mars, the outer Solar System, and extra-solar planets. First, a lunar investigation will set up exploration test beds, search for resources, and study the geological record of the early Solar System. Human missions to the Moon will serve as precursors for human missions to Mars and other destinations, but will also be driven by their support for furthering science. The second key target is the search for past and present water and life on Mars. Following on from discoveries by Spirit and Opportunity, by the end of the decade there will have been an additional rover, a lander and two orbiters studying Mars. These will set the stage for a sample return mission in 2013, increasingly complex robotic investigations, and an eventual human landing. The third key target is the study of underground oceans, biological chemistry, and their potential for life in the outer Solar System. Beginning with the arrival of Cassini at Saturn in July 2004 and a landing on Titan in 2006, the next decade will see an extended investigation of the Jupiter icy moons by a mission making use of Project Prometheus, a program to develop space nuclear power and nuclear-electric propulsion. Finally, the search for Earth-like planets and life includes a series of telescopic missions designed to find and characterize extra-solar planets and search them for evidence of life. 
These missions include HST and Spitzer, operating now; Kepler, SIM, JWST, and TPF, currently under development; and the vision missions, Life Finder and Planet Imager, which will possibly be constructed in space by astronauts.

  1. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis

    NASA Astrophysics Data System (ADS)

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Objective. Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. Approach. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against a major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. Main results. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but produced reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. Significance. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. 
The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.
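
    The control split described in the abstract (autonomous preshaping from vision, final open/close authority from myoelectric input) can be caricatured with a rule-based sketch. All names and thresholds below are hypothetical, chosen only to illustrate the division of labour, not taken from the CASP paper.

```python
# Hypothetical context-aware prosthesis sketch: vision preselects grasp
# parameters; the user's EMG signal retains final control. Thresholds and
# grasp rules are illustrative assumptions, not the published controller.

def preselect_grasp(object_width_cm, object_height_cm):
    """Vision-driven preshape: choose grasp type and hand aperture."""
    grasp = "power" if object_width_cm > 5.0 else "pinch"
    aperture_cm = min(object_width_cm + 2.0, 10.0)  # margin, capped at max opening
    return grasp, aperture_cm

def command(emg_level, grasp_state, threshold=0.6):
    """User retains authority: EMG above threshold toggles close/open."""
    if emg_level > threshold:
        return "close" if grasp_state == "open" else "open"
    return "hold"

# A bottle-sized object (7.5 cm wide) seen by the camera:
grasp, aperture = preselect_grasp(7.5, 12.0)
print(grasp, aperture)       # power grasp, preshaped to 9.5 cm
print(command(0.8, "open"))  # strong EMG while open -> close the hand
```

    The point of this split, as the abstract argues, is that the semi-autonomous layer removes most of the sequential mode-switching effort while the myoelectric channel keeps the user in charge.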

  2. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis.

    PubMed

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against the major commercial state-of-the art myoelectric control system in ten able-bodied and one amputee subject. All subjects used transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOF. However, when tested with less complex prosthetic system (smaller number of DOF), the CASP was slower but resulted with reaching motions that contained less compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarious, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  3. A Synthetic Vision Preliminary Integrated Safety Analysis

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Houser, Scott

    2001-01-01

    This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.

  4. Heliospheric Physics and NASA's Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.

    2007-01-01

    The Vision for Space Exploration outlines NASA's development of a new generation of human-rated launch vehicles to replace the Space Shuttle and an architecture for exploring the Moon and Mars. The system--developed by the Constellation Program--includes a near term (approx. 2014) capability to provide crew and cargo service to the International Space Station after the Shuttle is retired in 2010 and a human return to the Moon no later than 2020. Constellation vehicles and systems will necessarily be required to operate efficiently, safely, and reliably in the space plasma and radiation environments of low Earth orbit, the Earth's magnetosphere, interplanetary space, and on the lunar surface. This presentation will provide an overview of the characteristics of space radiation and plasma environments relevant to lunar programs including the trans-lunar injection and trans-Earth injection trajectories through the Earth's radiation belts, solar wind surface dose and plasma wake charging environments in near lunar space, energetic solar particle events, and galactic cosmic rays and discusses the design and operational environments being developed for lunar program requirements to assure that systems operate successfully in the space environment.

  5. Robonaut Mobile Autonomy: Initial Experiments

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.

    2006-01-01

    A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  6. Brightness, hue, and saturation in photopic vision: a result of luminance and wavelength in the cellular phase-grating optical 3D chip of the inverted retina

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1994-10-01

    In photopic vision, two physical variables (luminance and wavelength) are transformed into three psychological variables (brightness, hue, and saturation). Following on from 3D grating optical explanations of aperture effects (Stiles-Crawford effects SCE I and II), all three variables can be explained via a single 3D chip effect. The 3D grating optical calculations are carried out using the classical von Laue equation and demonstrated using the example of two experimentally confirmed observations in human vision: saturation effects for monochromatic test lights between 485 and 510 nm in the SCE II and the fact that many test lights reverse their hue shift in the SCE II when changing from moderate to high luminances compared with that on changing from low to medium luminances. At the same time, information is obtained on the transition from the trichromatic color system in the retina to the opponent color system.
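
    For reference, the classical von Laue condition used in such 3D grating-optical calculations takes the standard textbook form below (stated generally; the paper's specific parameterization of the retinal phase grating may differ):

```latex
% Von Laue conditions for constructive interference at a 3D grating:
% a, b, c are the grating (lattice) vectors, s_0 and s are unit vectors of
% the incident and diffracted light, lambda is the wavelength, and the
% diffraction orders h, k, l are integers.
\begin{align*}
\mathbf{a}\cdot(\mathbf{s}-\mathbf{s}_0) &= h\lambda \\
\mathbf{b}\cdot(\mathbf{s}-\mathbf{s}_0) &= k\lambda \\
\mathbf{c}\cdot(\mathbf{s}-\mathbf{s}_0) &= l\lambda
\end{align*}
```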

  7. Biotechnology

    NASA Image and Video Library

    2003-01-22

    ProVision Technologies, a NASA research partnership center at Stennis Space Center in Mississippi, has developed a new hyperspectral imaging (HSI) system that is much smaller than the original large units used aboard remote sensing aircraft and satellites. The new apparatus is about the size of a breadbox. HSI may be useful to ophthalmologists to study and diagnose eye health, both on Earth and in space, by examining the back of the eye to determine oxygen and blood flow quickly and without any invasion. ProVision's hyperspectral imaging system can scan the human eye and produce a graph showing optical density or light absorption, which can then be compared to a graph from a normal eye. Scans of the macula, optic disk or optic nerve head, and blood vessels can be used to detect anomalies and identify diseases in this delicate and important organ. ProVision has already developed a relationship with the University of Alabama at Birmingham, but is still on the lookout for a commercial partner in this application.
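
    The "graph showing optical density" mentioned above is conventionally computed as OD(λ) = -log₁₀(I/I₀), comparing measured intensity against a reference at each wavelength band. The sketch below uses that standard formulation as an assumption; it is not ProVision's algorithm.

```python
import math

# Standard optical-density computation per spectral band (assumed
# formulation): OD = -log10(I / I0), where I is the measured intensity
# and I0 the reference (e.g., normal-eye) intensity in the same band.

def optical_density(measured, reference):
    return [-math.log10(i / i0) for i, i0 in zip(measured, reference)]

reference = [100.0, 100.0, 100.0]   # reference intensities in three bands
measured  = [50.0, 100.0, 10.0]     # intensities from a scanned eye

od = optical_density(measured, reference)
print(od)  # half transmission, full transmission, one-tenth transmission
```

    Bands where the computed OD deviates from the normal-eye curve are the anomalies such a scan would flag.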

  8. Biotechnology

    NASA Image and Video Library

    2003-01-22

    ProVision Technologies, a NASA commercial space center at Stennis Space Center in Mississippi, has developed a new hyperspectral imaging (HSI) system that is much smaller than the original large units used aboard remote sensing aircraft and satellites. The new apparatus is about the size of a breadbox. HSI may be useful to ophthalmologists to study and diagnose eye health, both on Earth and in space, by examining the back of the eye to determine oxygen and blood flow quickly and without any invasion. ProVision's hyperspectral imaging system can scan the human eye and produce a graph showing optical density or light absorption, which can then be compared to a graph from a normal eye. Scans of the macula, optic disk or optic nerve head, and blood vessels can be used to detect anomalies and identify diseases in this delicate and important organ. ProVision has already developed a relationship with the University of Alabama at Birmingham, but is still on the lookout for a commercial partner in this application.

  9. The Method of Curvatures.

    ERIC Educational Resources Information Center

    Greenslade, Thomas B., Jr.; Miller, Franklin, Jr.

    1981-01-01

    Describes method for locating images in simple and complex systems of thin lenses and spherical mirrors. The method helps students to understand differences between real and virtual images. It is helpful in discussing the human eye and the correction of imperfect vision by the use of glasses. (Author/SK)
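
    The image-location bookkeeping that such methods streamline can be checked against the thin-lens equation, 1/f = 1/dₒ + 1/dᵢ. A minimal numeric sketch (Gaussian sign convention assumed, so positive dᵢ means a real image, negative a virtual one):

```python
# Thin-lens image location: solve 1/f = 1/d_o + 1/d_i for d_i.
# Sign convention assumed: distances in cm, positive d_i -> real image.

def image_distance(f, d_object):
    return 1.0 / (1.0 / f - 1.0 / d_object)

# Converging lens, f = 10 cm, object at 30 cm: real image at 15 cm.
d_i = image_distance(10.0, 30.0)
print(d_i)           # 15.0 -> real image

# Object inside the focal length (5 cm): negative result -> virtual image,
# the case that arises in discussing the eye and corrective glasses.
d_virtual = image_distance(10.0, 5.0)
print(d_virtual)     # -10.0 -> virtual image
```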

  10. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the resulting spatial frequency spectra are similar to those of actual cone mosaics. The Hartley transform was evaluated as a tool of image processing, and it is suggested that it could be used in signal processing and image processing applications.
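
    A minimal sketch of the discrete Hartley transform mentioned above. It follows the standard definition with the real kernel cas(x) = cos(x) + sin(x) (not necessarily the report's exact normalization); because the kernel is real, real signals stay real, and the transform is its own inverse up to a 1/N factor.

```python
import math

# Discrete Hartley transform (DHT) with kernel cas(x) = cos(x) + sin(x).
# O(N^2) reference implementation; the DHT matrix H satisfies H*H = N*I,
# so the inverse is the same transform divided by N.

def dht(x):
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / N) +
                        math.sin(2 * math.pi * k * n / N))
                for n in range(N))
            for k in range(N)]

def idht(X):
    N = len(X)
    return [v / N for v in dht(X)]

signal = [1.0, 2.0, 3.0, 4.0]
roundtrip = idht(dht(signal))
print([round(v, 6) for v in roundtrip])  # recovers the original signal
```

    The self-inverse property and the all-real arithmetic are what made the Hartley transform attractive for hardware-oriented signal and image processing.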

  11. Application of digital human modeling and simulation for vision analysis of pilots in a jet aircraft: a case study.

    PubMed

    Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati

    2012-01-01

    Ergonomic evaluation of visual demands becomes crucial for operators/users when rapid decision making is needed under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of pilots' vision in a jet aircraft in a virtual environment, demonstrating how the vision analysis tools of digital human modeling software can be used effectively for such a study. Three dynamic digital pilot models, representative of the smallest, average and largest Indian pilot populations, were generated from an anthropometric database and interfaced with a digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools such as view cones, eye view windows, blind spot areas, obscuration zones and reflection zones were employed during evaluation of the visual fields. A vision analysis tool was also used to study kinematic changes of the pilot's body joints during a simulated gazing activity. From the present study, it can be concluded that the vision analysis tools of digital human modeling software are very effective in evaluating the position and alignment of different displays and controls in the workstation, based upon their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.

  12. Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.

    PubMed

    Mustari, Michael J

    2017-12-01

    Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  13. Air and Water System (AWS) Design and Technology Selection for the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Kliss, Mark

    2005-01-01

    This paper considers technology selection for the crew air and water recycling systems to be used in long duration human space exploration. The specific objectives are to identify the most probable air and water technologies for the vision for space exploration and to identify the alternate technologies that might be developed. The approach is to conduct a preliminary first cut systems engineering analysis, beginning with the Air and Water System (AWS) requirements and the system mass balance, and then define the functional architecture, review the International Space Station (ISS) technologies, and discuss alternate technologies. The life support requirements for air and water are well known. The results of the mass flow and mass balance analysis help define the system architectural concept. The AWS includes five subsystems: Oxygen Supply, Condensate Purification, Urine Purification, Hygiene Water Purification, and Clothes Wash Purification. AWS technologies have been evaluated in the life support design for ISS node 3, and in earlier space station design studies, in proposals for the upgrade or evolution of the space station, and in studies of potential lunar or Mars missions. The leading candidate technologies for the vision for space exploration are those planned for Node 3 of the ISS. The ISS life support was designed to utilize Space Station Freedom (SSF) hardware to the maximum extent possible. The SSF final technology selection process, criteria, and results are discussed. Would it be cost-effective for the vision for space exploration to develop alternate technology? This paper will examine this and other questions associated with AWS design and technology selection.

  14. A compact human-powered energy harvesting system

    NASA Astrophysics Data System (ADS)

    Rao, Yuan; McEachern, Kelly M.; Arnold, David P.

    2013-12-01

    This paper presents a fully functional, self-sufficient body-worn energy harvesting system for passively capturing energy from human motion, with the long-term vision of supplying power to portable, wearable, or even implanted electronic devices. The system requires no external power supplies and can bootstrap from zero-state-of-charge to generate electrical energy from walking, jogging and cycling; convert the induced ac voltage to a dc voltage; and then boost and regulate the dc voltage to charge a Li-ion-polymer battery. Tested under normal human activities (walking, jogging, cycling) when worn on different parts of the body, the 70 cm3 system is shown to charge a 3.7 V rechargeable battery at charge rates ranging from 33 μW to 234 μW.
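As a back-of-envelope check on the charge rates quoted above: dividing the charging power by the nominal 3.7 V cell voltage gives the equivalent charge current. This is a simplification that ignores converter losses; the numbers below only restate the abstract's figures.

```python
# Convert the reported charging power into an equivalent charge current
# for a nominal 3.7 V Li-ion-polymer cell (losses ignored).

def charge_current_ua(power_uw, cell_voltage=3.7):
    """Charge current in microamps, given charging power in microwatts."""
    return power_uw / cell_voltage

print(round(charge_current_ua(33), 1))   # → 8.9 (uA, low end)
print(round(charge_current_ua(234), 1))  # → 63.2 (uA, high end)
```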

  15. The Role of the Community Nurse in Promoting Health and Human Dignity-Narrative Review Article

    PubMed Central

    Muntean, Ana; Tomita, Mihaela; Ungureanu, Roxana

    2013-01-01

Abstract Background: Population health, as defined by the WHO in its constitution, is "a state of complete physical, mental and social wellbeing". Human dignity lies at the basis of human welfare, a dimension that requires an integrated vision of health care. Bronfenbrenner's ecosystemic vision highlights unexpected connections between the value-based social macro system and the micro system consisting of the individual and the family. The community nurse's role is to carry into the practice of education and care the respect for human dignity, the bonds between the community's values and practices, and the physical health of individuals. In Romania, the promotion of community nursing began in 2002, through the project "Promoting social inclusion by developing human and institutional resources within community nursing" of the National School of Public Health, Management and Education in Healthcare Bucharest. The community nurse became established in the 10 counties included in the project. Considering respect for human dignity an axiomatic value for community nurse interventions, we stress the need to develop a primary care network in Romania. The argument is based on an analysis of the concept of human dignity within health care, as well as a secondary analysis of the 2010 health indicators of the 10 counties included in the project. Our conclusions draw attention to the need for community nurses and open directions for new research and developments needed to promote primary health care in Romania. PMID:26060614

  16. Maria Montessori's Cosmic Vision, Cosmic Plan, and Cosmic Education

    ERIC Educational Resources Information Center

    Grazzini, Camillo

    2013-01-01

This classic position piece on the breadth of Cosmic Education begins with a way of seeing the human's interaction with the world, continues on to the grandeur in scale of time and space of that vision, then brings in the interdependency of life, where each growing human becomes a participating adult. Mr. Grazzini confronts the laws of human nature in…

  17. Display Parameters and Requirements

    NASA Astrophysics Data System (ADS)

    Bahadur, Birendra

The following sections are included: * INTRODUCTION * HUMAN FACTORS * Anthropometry * Sensory * Cognitive * Discussions * THE HUMAN VISUAL SYSTEM - CAPABILITIES AND LIMITATIONS * Cornea * Pupil and Iris * Lens * Vitreous Humor * Retina * RODS - NIGHT VISION * CONES - DAY VISION * RODS AND CONES - TWILIGHT VISION * VISUAL PIGMENTS * MACULA * BLOOD * CHOROID COAT * Visual Signal Processing * Pathways to the Brain * Spatial Vision * Temporal Vision * Colour Vision * Colour Blindness * DICHROMATISM * Protanopia * Deuteranopia * Tritanopia * ANOMALOUS TRICHROMATISM * Protanomaly * Deuteranomaly * Tritanomaly * CONE MONOCHROMATISM * ROD MONOCHROMATISM * Using Colour Effectively * COLOUR MIXTURES AND THE CHROMATICITY DIAGRAM * Colour Matching Functions and Chromaticity Co-ordinates * CIE 1931 Colour Space * CIE PRIMARIES * CIE COLOUR MATCHING FUNCTIONS AND CHROMATICITY CO-ORDINATES * METHODS FOR DETERMINING TRISTIMULUS VALUES AND COLOUR CO-ORDINATES * Spectral Power Distribution Method * Filter Method * CIE 1931 CHROMATICITY DIAGRAM * ADDITIVE COLOUR MIXTURE * CIE 1976 Chromaticity Diagram * CIE Uniform Colour Spaces and Colour Difference Formulae * CIELUV OR L*u*v* * CIELAB OR L*a*b* * CIE COLOUR DIFFERENCE FORMULAE * Colour Temperature and CIE Standard Illuminants and Sources * RADIOMETRIC AND PHOTOMETRIC QUANTITIES * Photopic (Vλ) and Scotopic (Vλ') Luminous Efficiency Function * Photometric and Radiometric Flux * Luminous and Radiant Intensities * Incidence: Illuminance and Irradiance * Exitance or Emittance (M) * Luminance and Radiance * ERGONOMIC REQUIREMENTS OF DISPLAYS * ELECTRO-OPTICAL PARAMETERS AND REQUIREMENTS * Contrast and Contrast Ratio * Luminance and Brightness * Colour Contrast and Chromaticity * Glare * Other Aspects of Legibility * SHAPE AND SIZE OF CHARACTERS * DEFECTS AND BLEMISHES * FLICKER AND DISTORTION * ANGLE OF VIEW * Switching Speed * Threshold and Threshold Characteristic * Measurement Techniques For Electro-optical Parameters * RADIOMETRIC MEASUREMENTS * Broadband Radiometry or Filtered Photodetector Radiometric Method * Spectroradiometric Method * PHOTOMETRIC MEASUREMENTS * COLOUR MEASUREMENTS * LUMINANCE, CONTRAST RATIO, THRESHOLD CHARACTERISTIC AND POLAR PLOT * SWITCHING SPEED * ELECTRICAL AND LIFE PARAMETERS AND REQUIREMENTS * Operating Voltage, Current Drainage and Power Consumption * Operating Frequency * Life Expectancy * LCD FAILURE MODES * Liquid Crystal Materials * Substrate Glass * Electrode Patterns * Alignment and Aligning Material * Peripheral and End Plug Seal * Spacers * Crossover Material * Polarizers and Reflectors * Connectors * Heater * Colour Filters * Backlighting System * Explanation For Some of the Observed Defects * BLOOMING PIXELS * POLARIZER RELATED DEFECTS * DIFFERENTIAL THERMAL EXPANSION RELATED DEFECTS * ELECTROCHEMICAL AND ELECTROHYDRODYNAMIC RELATED DEFECTS * REVERSE TWIST AND REVERSE TILT * MEMORY OR REMINISCENT CONTRAST * LCD RELIABILITY AND ACCELERATED LIFE TESTING * ACKNOWLEDGEMENTS * REFERENCES * APPENDIX
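As a quick illustration of two of the electro-optical quantities listed above: contrast ratio is the luminance ratio of the brightest to the darkest display state, and Michelson contrast normalises the difference by the sum. The luminance values below are hypothetical.

```python
# Standard display-contrast definitions applied to hypothetical
# luminance measurements (cd/m^2).

def contrast_ratio(l_on, l_off):
    """Luminance ratio of the on (bright) state to the off (dark) state."""
    return l_on / l_off

def michelson_contrast(l_max, l_min):
    """(Lmax - Lmin) / (Lmax + Lmin), bounded between 0 and 1."""
    return (l_max - l_min) / (l_max + l_min)

# A display with 300 cd/m^2 white and 0.6 cd/m^2 black:
print(contrast_ratio(300.0, 0.6))                # roughly 500:1
print(round(michelson_contrast(300.0, 0.6), 3))  # → 0.996
```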

  18. A Graphical Operator Interface for a Telerobotic Inspection System

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Tso, K. S.; Hayati, S.

    1993-01-01

Operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  19. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  20. Helmet-Mounted Displays: Sensation, Perception and Cognition Issues

    DTIC Science & Technology

    2009-01-01

Inc., web site: http://www.metavr.com/technology/papers/syntheticvision.html Helmetag, A., Halbig, C., Kubbat, W., and Schmidt, R. (1999...system-of-systems." One integral system is a "head-borne vision enhancement" system (an HMD) that provides fused I2/IR sensor imagery (U.S. Army Natick...Using microwave, radar, I2, infrared (IR), and other technology-based imaging sensors, the "seeing" range of the human eye is extended into the

  1. Pre-impact fall detection system using dynamic threshold and 3D bounding box

    NASA Astrophysics Data System (ADS)

    Otanasap, Nuth; Boonbrahm, Poonpong

    2017-02-01

Fall prevention and detection systems must overcome many challenges to work efficiently. Among the difficult problems in vision-based systems are obtrusion, occlusion, and overlay. Other associated issues are privacy, cost, noise, computational complexity, and the definition of threshold values. Vision-based estimation of human motion usually involves partial overlay, caused by the viewing direction between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold and bounding-box posture analysis method with a multiple-Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras to acquire distributed values and differentiate falls from normal activities. If the peak head velocity exceeds the dynamic threshold value, bounding-box posture analysis is used to confirm that a fall has occurred. Furthermore, information captured by multiple Kinects placed at a right angle addresses the skeleton overlay problem of a single Kinect. This work contributes a fusion of multiple Kinect-based skeletons with dynamic-threshold and bounding-box posture analysis, which to our knowledge has not been reported before.
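The two-stage decision described above can be sketched as follows. This is not the authors' code: the dynamic-threshold formula (mean plus a multiple of the standard deviation of recent head velocities) and all constants are hypothetical stand-ins, and the posture test is reduced to a simple aspect-ratio check on the bounding box.

```python
# Illustrative two-stage fall check: (1) peak head velocity against a
# dynamic threshold, (2) bounding-box posture test for confirmation.

def dynamic_threshold(recent_velocities, k=2.0):
    """Mean + k * std of recent head velocities (m/s); k is hypothetical."""
    n = len(recent_velocities)
    mean = sum(recent_velocities) / n
    var = sum((v - mean) ** 2 for v in recent_velocities) / n
    return mean + k * var ** 0.5

def is_fall(peak_head_velocity, recent_velocities, bbox_width, bbox_height):
    """Stage 1: velocity must exceed the dynamic threshold.
    Stage 2: a box wider than tall suggests a lying posture."""
    if peak_head_velocity <= dynamic_threshold(recent_velocities):
        return False
    return bbox_width > bbox_height

# Walking history, sudden fast head drop, body lying horizontally:
print(is_fall(3.0, [0.4, 0.5, 0.6, 0.5], bbox_width=1.7, bbox_height=0.5))  # → True
```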

  2. Development and Long-Term Verification of Stereo Vision Sensor System for Controlling Safety at Railroad Crossing

    NASA Astrophysics Data System (ADS)

    Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko

Many people are involved in accidents every year at railroad crossings, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo-vision-based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners and pointed toward the center of the railroad crossing to monitor the passage of people. The system determines automatically and in real time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble. We have developed an original stereo-vision device and installed a remotely controlled experimental system, running the human detection algorithm, at a commercial railroad crossing. We then stored and analyzed image data and tracking data over two years to help standardize the system requirement specification.

  3. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  4. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    PubMed Central

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-01-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management. PMID:28338047
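The screening pipeline described above (segment warm pixels, group them into connected thermal signatures, filter by size) can be sketched in miniature. The temperature cutoff and size bounds below are hypothetical, not the study's calibrated values, and a real pipeline would operate on thermal imagery rather than a toy grid.

```python
# Minimal sketch: threshold a 2D "thermal" frame, flood-fill 4-connected
# warm components, and keep only components within a plausible size range.

def detect_warm_blobs(frame, temp_cutoff=30.0, min_pixels=2, max_pixels=50):
    rows, cols = len(frame), len(frame[0])
    hot = [[frame[r][c] > temp_cutoff for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if hot[r][c] and not seen[r][c]:
                # flood fill one 4-connected component
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and hot[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if min_pixels <= len(pixels) <= max_pixels:
                    blobs.append(pixels)
    return blobs

frame = [
    [20, 20, 35, 35, 20],
    [20, 20, 35, 20, 20],
    [20, 20, 20, 20, 36],
    [36, 20, 20, 20, 36],
]
# Three warm components; the single-pixel one is rejected as too small:
print(len(detect_warm_blobs(frame)))  # → 2
```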

  5. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    NASA Astrophysics Data System (ADS)

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-03-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95-98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management.

  6. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

In his 1960 paper Man-Computer Symbiosis, Licklider predicted that human brains and computing machines would be coupled in a tight partnership that would think as no human brain has ever thought and process data in a way not approached by the information-handling machines of the day. Today we are on the threshold of resurrecting the vision of symbiosis. While Licklider's original vision suggested a co-equal relationship, here we discuss an updated vision, neo-symbiosis, in which the human holds a superordinate position in an intelligent human-computer collaborative environment. This paper was originally published as a journal article and is being published as a chapter in an upcoming book series, Advances in Novel Approaches in Cognitive Informatics and Natural Intelligence.

  7. Genetics Home Reference: color vision deficiency

    MedlinePlus

Sources for This Page: Deeb SS. Molecular genetics of color-vision deficiencies. Vis Neurosci. 2004 May-Jun;21(3):191-6. Review. Deeb SS. The molecular basis of variation in human color vision. Clin ...

  8. Compact and wide-field-of-view head-mounted display

    NASA Astrophysics Data System (ADS)

    Uchiyama, Shoichi; Kamakura, Hiroshi; Karasawa, Joji; Sakaguchi, Masafumi; Furihata, Takeshi; Itoh, Yoshitaka

    1997-05-01

A compact and wide-field-of-view HMD having 1.32-in full color VGA poly-Si TFT LCDs and simple eyepieces much like LEEP optics has been developed. The total field of view is 80 deg with a 40 deg overlap in its central area. Each optical unit, which includes an LCD and eyepiece, is 46 mm in diameter and 42 mm in length. The total number of pixels is equivalent to (864 × 3) × 480. This HMD realizes its wide field of view and compact size by having a narrower binocular (overlap) area than that of commercialized HMDs. For this reason, monocular vision is expected to occur more frequently than with commercialized HMDs or natural human vision. We therefore studied the convergent state of the eyes while observing the monocular areas of this HMD using an EOG, and considered the suitability of this HMD to human vision. As a result, the convergent state during monocular vision was found to be nearly equal to that during binocular vision. That is, this HMD has the potential to be well suited to human vision in terms of convergence.
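The field-of-view figures quoted above imply each eye's monocular field: in a partially overlapped binocular layout, total FOV = 2 × per-eye FOV − overlap, so per-eye FOV = (total + overlap) / 2. A one-line check of that arithmetic:

```python
# Per-eye FOV from total FOV and binocular overlap in a partially
# overlapped HMD layout (all angles in degrees).

def per_eye_fov(total_deg, overlap_deg):
    return (total_deg + overlap_deg) / 2

print(per_eye_fov(80, 40))  # → 60.0 (each eye sees a 60-degree field)
```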

  9. A Practical Solution Using A New Approach To Robot Vision

    NASA Astrophysics Data System (ADS)

    Hudson, David L.

    1984-01-01

Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others.
The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.

  10. Optimality of the basic colour categories for classification

    PubMed Central

    Griffin, Lewis D

    2005-01-01

    Categorization of colour has been widely studied as a window into human language and cognition, and quite separately has been used pragmatically in image-database retrieval systems. This suggests the hypothesis that the best category system for pragmatic purposes coincides with human categories (i.e. the basic colours). We have tested this hypothesis by assessing the performance of different category systems in a machine-vision task. The task was the identification of the odd-one-out from triples of images obtained using a web-based image-search service. In each triple, two of the images had been retrieved using the same search term, the other a different term. The terms were simple concrete nouns. The results were as follows: (i) the odd-one-out task can be performed better than chance using colour alone; (ii) basic colour categorization performs better than random systems of categories; (iii) a category system that performs better than the basic colours could not be found; and (iv) it is not just the general layout of the basic colours that is important, but also the detail. We conclude that (i) the results support the plausibility of an explanation for the basic colours as a result of a pressure-to-optimality and (ii) the basic colours are good categories for machine vision image-retrieval systems. PMID:16849219
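A toy version of the odd-one-out test can make the method concrete: each image is summarized as a histogram over a small set of colour categories, and the image whose histogram is farthest from the other two is picked. This is a sketch, not the paper's implementation; the categories and "images" below are hypothetical, and pixels are assumed to be pre-labelled with their category.

```python
# Odd-one-out by colour-category histograms: summed L1 distance to the
# other two images, highest score wins.

def histogram(pixels, categories):
    """Normalised count of category labels."""
    h = {c: 0 for c in categories}
    for p in pixels:
        h[p] += 1
    total = len(pixels)
    return {c: h[c] / total for c in categories}

def l1(h1, h2):
    return sum(abs(h1[c] - h2[c]) for c in h1)

def odd_one_out(images, categories):
    hists = [histogram(img, categories) for img in images]
    # score each image by its summed distance to the other two
    scores = [sum(l1(hists[i], hists[j]) for j in range(3) if j != i)
              for i in range(3)]
    return scores.index(max(scores))

cats = ["red", "green", "blue"]
grass1 = ["green"] * 8 + ["blue"] * 2
grass2 = ["green"] * 7 + ["red"] * 3
sky    = ["blue"] * 9 + ["red"] * 1
print(odd_one_out([grass1, grass2, sky], cats))  # → 2 (the sky image)
```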

  11. Human olfaction: a constant state of change-blindness

    PubMed Central

    Sela, Lee

    2010-01-01

Paradoxically, although humans have a superb sense of smell, they don’t trust their nose. Furthermore, although human odorant detection thresholds are very low, only unusually high odorant concentrations spontaneously shift our attention to olfaction. Here we suggest that this lack of olfactory awareness reflects the nature of olfactory attention, which is shaped by the spatial and temporal envelopes of olfaction. Regarding the spatial envelope, selective attention is allocated in space. Humans direct an attentional spotlight within spatial coordinates in both vision and audition. Human olfactory spatial abilities are minimal. Thus, with no olfactory space, there is no arena for olfactory selective attention. Regarding the temporal envelope, whereas vision and audition consist of nearly continuous input, olfactory input is discrete, made of sniffs widely separated in time. If similar temporal breaks are artificially introduced to vision and audition, they induce “change blindness”, a loss of attentional capture that results in a lack of awareness to change. Whereas “change blindness” is an aberration of vision and audition, the long inter-sniff-interval renders “change anosmia” the norm in human olfaction. Therefore, attentional capture in olfaction is minimal, as is human olfactory awareness. All this, however, does not diminish the role of olfaction through sub-attentive mechanisms allowing subliminal smells a profound influence on human behavior and perception. PMID:20603708

  12. Step 1: Human System Interface (HSI) Functional Requirements Document (FRD). Version 2

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This Functional Requirements Document (FRD) establishes a minimum set of Human System Interface (HSI) functional requirements to achieve the Access 5 Vision of "operating High Altitude, Long Endurance (HALE) Unmanned Aircraft Systems (UAS) routinely, safely, and reliably in the National Airspace System (NAS)". Basically, it provides what functions are necessary to fly UAS in the NAS. The framework used to identify the appropriate functions was the "Aviate, Navigate, Communicate, and Avoid Hazards" structure identified in the Access 5 FRD. As a result, fifteen high-level functional requirements were developed. In addition, several of them have been decomposed into low-level functional requirements to provide more detail.

  13. Genetic Testing as a New Standard for Clinical Diagnosis of Color Vision Deficiencies.

    PubMed

    Davidoff, Candice; Neitz, Maureen; Neitz, Jay

    2016-09-01

    The genetics underlying inherited color vision deficiencies is well understood: causative mutations change the copy number or sequence of the long (L), middle (M), or short (S) wavelength sensitive cone opsin genes. This study evaluated the potential of opsin gene analyses for use in clinical diagnosis of color vision defects. We tested 1872 human subjects using direct sequencing of opsin genes and a novel genetic assay that characterizes single nucleotide polymorphisms (SNPs) using the MassArray system. Of the subjects, 1074 also were given standard psychophysical color vision tests for a direct comparison with current clinical methods. Protan and deutan deficiencies were classified correctly in all subjects identified by MassArray as having red-green defects. Estimates of defect severity based on SNPs that control photopigment spectral tuning correlated with estimates derived from Nagel anomaloscopy. The MassArray assay provides genetic information that can be useful in the diagnosis of inherited color vision deficiency including presence versus absence, type, and severity, and it provides information to patients about the underlying pathobiology of their disease. The MassArray assay provides a method that directly analyzes the molecular substrates of color vision that could be used in combination with, or as an alternative to current clinical diagnosis of color defects.

  14. Genetic Testing as a New Standard for Clinical Diagnosis of Color Vision Deficiencies

    PubMed Central

    Davidoff, Candice; Neitz, Maureen; Neitz, Jay

    2016-01-01

    Purpose The genetics underlying inherited color vision deficiencies is well understood: causative mutations change the copy number or sequence of the long (L), middle (M), or short (S) wavelength sensitive cone opsin genes. This study evaluated the potential of opsin gene analyses for use in clinical diagnosis of color vision defects. Methods We tested 1872 human subjects using direct sequencing of opsin genes and a novel genetic assay that characterizes single nucleotide polymorphisms (SNPs) using the MassArray system. Of the subjects, 1074 also were given standard psychophysical color vision tests for a direct comparison with current clinical methods. Results Protan and deutan deficiencies were classified correctly in all subjects identified by MassArray as having red–green defects. Estimates of defect severity based on SNPs that control photopigment spectral tuning correlated with estimates derived from Nagel anomaloscopy. Conclusions The MassArray assay provides genetic information that can be useful in the diagnosis of inherited color vision deficiency including presence versus absence, type, and severity, and it provides information to patients about the underlying pathobiology of their disease. Translational Relevance The MassArray assay provides a method that directly analyzes the molecular substrates of color vision that could be used in combination with, or as an alternative to current clinical diagnosis of color defects. PMID:27622081

  15. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487
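One common fusion strategy of the kind such studies compare is score-level weighted summation of the two modalities' classifier outputs. The sketch below is illustrative only; the weights, scores, and threshold are hypothetical, not the authors' trained values.

```python
# Score-level fusion of a visible-light and a thermal classifier:
# weighted sum of per-modality scores, then a decision threshold.

def fuse_scores(visible_score, thermal_score, w_visible=0.6):
    """Weighted combination of the two modality scores (weights sum to 1)."""
    return w_visible * visible_score + (1 - w_visible) * thermal_score

def predict_gender(visible_score, thermal_score, threshold=0.5):
    """Each score is the modality classifier's estimated P(male)."""
    fused = fuse_scores(visible_score, thermal_score)
    return "male" if fused >= threshold else "female"

print(predict_gender(0.8, 0.7))  # → male
print(predict_gender(0.3, 0.2))  # → female
```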

  16. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems.

  17. Robot vision system programmed in Prolog

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Hack, Ralf

    1995-10-01

    This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)

  18. Autonomic Computing: Panacea or Poppycock?

    NASA Technical Reports Server (NTRS)

    Sterritt, Roy; Hinchey, Mike

    2005-01-01

    Autonomic Computing arose out of a need to cope with the rapidly growing complexity of integrating, managing, and operating computer-based systems, as well as a need to reduce the total cost of ownership of today's systems. Autonomic Computing (AC) as a discipline was proposed by IBM in 2001, with the vision of developing self-managing systems. As the name implies, the influence for the new paradigm is the human body's autonomic system, which regulates vital bodily functions such as heart rate, body temperature, and blood flow, all without conscious effort. The vision is to create selfware through self-* properties. The initial set of properties, in terms of objectives, was self-configuring, self-healing, self-optimizing, and self-protecting, along with attributes of self-awareness, self-monitoring, and self-adjusting. This self-* list has since grown: self-anticipating, self-critical, self-defining, self-destructing, self-diagnosis, self-governing, self-organized, self-reflecting, and self-simulation, for instance.

  19. Exploiting spatio-temporal characteristics of human vision for mobile video applications

    NASA Astrophysics Data System (ADS)

    Jillani, Rashad; Kalva, Hari

    2008-08-01

    Video applications on handheld devices such as smart phones pose a significant challenge in achieving a high-quality user experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia applications (e.g., video streaming) for mobile handheld devices. These devices are lightweight and modestly sized, and therefore have very limited resources: lower processing power, smaller display resolution, less memory, and limited battery life compared to desktop and laptop systems. Multimedia applications, on the other hand, have extensive processing requirements that strain these limited resources. In addition, device-specific properties (e.g., the display screen) significantly influence human perception of multimedia quality. In this paper we propose a saliency-based framework that exploits the structure in content creation as well as the human vision system to find the salient points in the incoming bitstream and adapt it to the target device, thus improving the quality of the adapted regions around salient points. Our experimental results indicate that an adaptation process that is cognizant of video content and user preferences can produce video of better perceptual quality for mobile devices. Furthermore, we demonstrate how such a framework can affect user experience on a handheld device.
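    One simple way to picture saliency-driven adaptation is to split a frame's bit budget across blocks in proportion to a saliency map, with a floor so that non-salient blocks keep minimum quality. This is an illustrative sketch only, not the framework proposed in the paper; every number in it is made up.

    ```python
    import numpy as np

    def allocate_bits(saliency, total_bits, floor_frac=0.2):
        """Split a frame's bit budget across blocks in proportion to
        saliency, reserving an equal-share floor for every block so the
        background is never starved. Hypothetical illustration only."""
        s = np.asarray(saliency, dtype=float)
        floor = floor_frac * total_bits / s.size       # equal share per block
        remaining = total_bits - floor * s.size        # bits left to bias
        prop = remaining * s / s.sum()                 # saliency-weighted share
        return floor + prop

    # Three blocks, the middle one most salient; budget is conserved.
    budget = allocate_bits([0.1, 0.7, 0.2], 1000)
    ```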

  20. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  1. Advances in understanding the molecular basis of the first steps in color vision.

    PubMed

    Hofmann, Lukas; Palczewski, Krzysztof

    2015-11-01

    Serving as one of our primary environmental inputs, vision is the most sophisticated sensory system in humans. Here, we present recent findings derived from energetics, genetics and physiology that provide a more advanced understanding of color perception in mammals. Energetics of cis-trans isomerization of 11-cis-retinal accounts for color perception in the narrow region of the electromagnetic spectrum and how human eyes can absorb light in the near infrared (IR) range. Structural homology models of visual pigments reveal complex interactions of the protein moieties with the light sensitive chromophore 11-cis-retinal and that certain color blinding mutations impair secondary structural elements of these G protein-coupled receptors (GPCRs). Finally, we identify unsolved critical aspects of color tuning that require future investigation. Copyright © 2015. Published by Elsevier Ltd.

  2. Single unit approaches to human vision and memory.

    PubMed

    Kreiman, Gabriel

    2007-08-01

    Research on the visual system focuses on using electrophysiology, pharmacology and other invasive tools in animal models. Non-invasive tools such as scalp electroencephalography and imaging allow examining humans but show a much lower spatial and/or temporal resolution. Under special clinical conditions, it is possible to monitor single-unit activity in humans when invasive procedures are required due to particular pathological conditions including epilepsy and Parkinson's disease. We review our knowledge about the visual system and visual memories in the human brain at the single neuron level. The properties of the human brain seem to be broadly compatible with the knowledge derived from animal models. The possibility of examining high-resolution brain activity in conscious human subjects allows investigators to ask novel questions that are challenging to address in animal models.

  3. Topology of Functional Connectivity and Hub Dynamics in the Beta Band As Temporal Prior for Natural Vision in the Human Brain.

    PubMed

    Betti, Viviana; Corbetta, Maurizio; de Pasquale, Francesco; Wens, Vincent; Della Penna, Stefania

    2018-04-11

    Network hubs represent points of convergence for the integration of information across many different nodes and systems. Although a great deal is known about the topology of hub regions in the human brain, little is known about their temporal dynamics. Here, we examine the static and dynamic centrality of hub regions measured in the absence of a task (rest) or during the observation of natural or synthetic visual stimuli. We used magnetoencephalography (MEG) in humans (both sexes) to measure static and transient regional and network-level interactions in α- and β-band limited power (BLP) in three conditions: visual fixation (rest), viewing of movie clips (natural vision), and time-scrambled versions of the same clips (scrambled vision). Compared with rest, we observed in both movie conditions a robust decrement of α-BLP connectivity. Moreover, both movie conditions caused a significant reorganization of connections in the α band, especially between networks. In contrast, β-BLP connectivity was remarkably similar between rest and natural vision. Not only did the topology not change, but the joint dynamics of hubs in a core network during natural vision were predicted by similar fluctuations in the resting state. We interpret these findings by suggesting that slow-varying fluctuations of integration occurring in higher-order regions in the β band may be a mechanism to anticipate and predict slow-varying temporal patterns of the visual environment. SIGNIFICANCE STATEMENT A fundamental question in neuroscience concerns the function of spontaneous brain connectivity. Here, we tested the hypothesis that the topology of intrinsic brain connectivity and its dynamics might predict those observed during natural vision. Using MEG, we tracked the static and time-varying brain functional connectivity while observers were either fixating or watching different movie clips. 
The spatial distribution of connections and the dynamics of centrality of a set of regions were similar during rest and movie in the β band, but not in the α band. These results support the hypothesis that the intrinsic β-rhythm integration occurs with a similar temporal structure during natural vision, possibly providing advanced information about incoming stimuli. Copyright © 2018 the authors 0270-6474/18/383858-14$15.00/0.

  4. Proceedings of the 1986 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-01-01

    This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.

  5. The Cyborg Astrobiologist: scouting red beds for uncommon features with geological significance

    NASA Astrophysics Data System (ADS)

    McGuire, Patrick Charles; Díaz-Martínez, Enrique; Ormö, Jens; Gómez-Elvira, Javier; Rodríguez-Manfredi, José Antonio; Sebastián-Martínez, Eduardo; Ritter, Helge; Haschke, Robert; Oesker, Markus; Ontrup, Jörg

    2005-04-01

    The `Cyborg Astrobiologist' has undergone a second geological field trial, at a site in northern Guadalajara, Spain, near Riba de Santiuste. The site at Riba de Santiuste is dominated by layered deposits of red sandstones. The Cyborg Astrobiologist is a wearable computer and video camera system that has demonstrated a capability to find uncommon interest points in geological imagery in real time in the field. In this second field trial, the computer vision system of the Cyborg Astrobiologist was tested at seven different tripod positions, on three different geological structures. The first geological structure was an outcrop of nearly homogeneous sandstone, which exhibits oxidized-iron impurities in red areas and an absence of these iron impurities in white areas. The white areas in these `red beds' have turned white because the iron has been removed. The iron removal from the sandstone can proceed once the iron has been chemically reduced, perhaps by a biological agent. In one instance the computer vision system found several (iron-free) white spots to be uncommon and therefore interesting, as well as several small and dark nodules. The second geological structure was another outcrop some 600 m to the east, with white, textured mineral deposits on the surface of the sandstone, at the bottom of the outcrop. The computer vision system found these white, textured mineral deposits to be interesting. We acquired samples of the mineral deposits for geochemical analysis in the laboratory. This laboratory analysis of the crust identifies a double layer, consisting of an internal millimetre-size layering of calcite and an external centimetre-size efflorescence of gypsum. The third geological structure was a 50 cm thick palaeosol layer, with fossilized root structures of some plants. The computer vision system also found certain areas of these root structures to be interesting. 
A quasi-blind comparison of the Cyborg Astrobiologist's interest points for these images with the interest points determined afterwards by a human geologist shows that the Cyborg Astrobiologist concurred with the human geologist 68% of the time (true-positive rate), with a 32% false-positive rate and a 32% false-negative rate. The performance of the Cyborg Astrobiologist's computer vision system was by no means perfect, so there is plenty of room for improvement. However, these tests validate the image-segmentation and uncommon-mapping technique that we first employed at a different geological site (Rivas Vaciamadrid) with somewhat different properties for the imagery.

  6. A Developmental Approach to Machine Learning?

    PubMed Central

    Smith, Linda B.; Slone, Lauren K.

    2017-01-01

    Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order – with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines. PMID:29259573

  7. Spatial Context Learning Survives Interference from Working Memory Load

    ERIC Educational Resources Information Center

    Vickery, Timothy J.; Sussman, Rachel S.; Jiang, Yuhong V.

    2010-01-01

    The human visual system is constantly confronted with an overwhelming amount of information, only a subset of which can be processed in complete detail. Attention and implicit learning are two important mechanisms that optimize vision. This study addressed the relationship between these two mechanisms. Specifically we asked, Is implicit learning…

  8. The House That Jones Built.

    ERIC Educational Resources Information Center

    Rist, Marilee C.

    1992-01-01

    Describes the lifelong commitment of middle-school principal and mayor W. J. Jones to Coahoma, a small town in the Mississippi Delta. Thanks to his efforts, the town recently acquired a sewage system, blacktopped roads, and new housing (through Habitat for Humanity and World Vision). Although the town elementary school fell victim to consolidation and children are…

  9. 76 FR 69743 - The Development and Evaluation of Human Cytomegalovirus Vaccines; Public Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ... immune systems. Congenital HCMV infection causes mental retardation, learning disabilities, hearing loss, vision loss, and other disabilities. Patients undergoing stem cell or solid-organ transplants are at... available basis beginning at 8 a.m. If you need special accommodations due to a disability, please contact...

  10. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space

    PubMed Central

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.

    2017-01-01

    Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
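    For intuition about the ground-truth disparity such a dataset provides, the convergent-eye geometry can be sketched in a much simplified form (no cyclotorsion, which the actual rendering does model). The baseline and fixation values below are arbitrary assumptions, not dataset parameters.

    ```python
    import numpy as np

    def horizontal_disparity(point, baseline=0.065, fixation=(0.0, 0.0, 0.4)):
        """Angular horizontal disparity (radians) of a 3D point for two
        eyes converged on a fixation point. Simplified geometry for
        illustration; all parameter values are hypothetical."""
        px, py, pz = point
        fx, fy, fz = fixation
        eyes = [(-baseline / 2, 0.0, 0.0), (baseline / 2, 0.0, 0.0)]
        eccentricities = []
        for ex, ey, ez in eyes:
            az_point = np.arctan2(px - ex, pz - ez)   # azimuth of the point
            az_fix = np.arctan2(fx - ex, fz - ez)     # azimuth of fixation
            eccentricities.append(az_point - az_fix)  # per-eye eccentricity
        return eccentricities[0] - eccentricities[1]  # disparity = left - right

    # The fixation point itself has zero disparity by construction;
    # a nearer point along the gaze line has crossed (positive) disparity.
    d_fix = horizontal_disparity((0.0, 0.0, 0.4))
    d_near = horizontal_disparity((0.0, 0.0, 0.3))
    ```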

  11. A vision for modernizing environmental risk assessment

    EPA Science Inventory

    In 2007, the US National Research Council (NRC) published a Vision and Strategy for [human health] Toxicity Testing in the 21st century. Central to the vision was increased reliance on high throughput in vitro testing and predictive approaches based on mechanistic understanding o...

  12. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging in machine vision systems has proven prohibitive when used within control systems that employ low-power single processors, without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor has been developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied to solving real-world machine vision problems.
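    The hyperacuity claim, that overlapping sensor response profiles allow localization far finer than the sensor spacing, can be illustrated with a toy one-dimensional model. The Gaussian profiles, the spacing, and the centroid decoder here are illustrative assumptions, not the actual analog circuit described in the paper.

    ```python
    import numpy as np

    def sensor_responses(target_pos, centers, sigma=1.0):
        """Responses of overlapping Gaussian-profile sensors to a point
        target, loosely inspired by the photoreceptor overlap in the fly
        eye. All numeric values are illustrative assumptions."""
        centers = np.asarray(centers, dtype=float)
        return np.exp(-((target_pos - centers) ** 2) / (2 * sigma ** 2))

    def localize(responses, centers):
        """Response-weighted centroid: recovers the target position with
        far finer precision than the sensor spacing (hyperacuity)."""
        r = np.asarray(responses, dtype=float)
        c = np.asarray(centers, dtype=float)
        return float((r * c).sum() / r.sum())

    centers = np.arange(0.0, 10.0, 1.0)   # sensors spaced 1.0 apart
    est = localize(sensor_responses(4.3, centers), centers)
    # est lands close to 4.3 despite the coarse 1.0 sensor spacing
    ```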

  13. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

    Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system that could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.

  14. Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement

    PubMed Central

    Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.

    2017-01-01

    Purpose To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design Expert perspective. Methods An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975

  15. Parallel inputs to memory in bee colour vision.

    PubMed

    Horridge, Adrian

    2016-03-01

    In the 19th century, it was found that attraction of bees to light was controlled by light intensity irrespective of colour, and a few critical entomologists inferred that the vision of bees foraging on flowers was unlike human colour vision. Therefore, quite justly, Professor Carl von Hess concluded in his book on the Comparative Physiology of Vision (1912) that bees do not distinguish colours in the way that humans enjoy. Immediately, Karl von Frisch, an assistant in the Zoology Department of the same University of Munich, set to work to show that bees do indeed have colour vision like humans, thereby initiating a new research tradition and setting off a decade of controversy that ended only at the death of Hess in 1923. Until 1939, several researchers continued the tradition of trying to untangle the mechanism of bee vision by repeatedly testing trained bees, but made little progress, partly because von Frisch and his legacy dominated the scene. The theory of trichromatic colour vision developed further after three types of receptors, sensitive to green, blue, and ultraviolet (UV), were demonstrated in the bee in 1964. Then, until the end of the century, all data were interpreted in terms of trichromatic colour space. Anomalies were nothing new, but eventually, after 1996, they led to the discovery that bees have a previously unknown type of colour vision based on a monochromatic measure and distribution of blue and measures of modulation in the green and blue receptor pathways. Meanwhile, in the 20th century, the search for a suitable rationalization and explorations of sterile culs-de-sac had filled the literature of bee colour vision, but were based on the wrong theory.

  16. New vision solar system mission study: Use of space reactor bimodal system with microspacecraft to determine origin and evolution of the outer planets in the solar system

    NASA Technical Reports Server (NTRS)

    Mondt, Jack F.; Zubrin, Robert M.

    1996-01-01

    The vision for the future of the planetary exploration program includes the capability to deliver 'constellations' or 'fleets' of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a 'virtual presence' in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  17. Neural Network Target Identification System for False Alarm Reduction

    NASA Technical Reports Server (NTRS)

    Ye, David; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage, which uses feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feed-forward back-propagation neural network (NN) is then trained to classify each feature vector and remove false positives. This paper discusses tests of system performance and the parameter optimization process that adapts the system to various targets and datasets. The test results show that the system was successful in substantially reducing the false positive rate when tested on a sonar image dataset.
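    The detection stage's correlation-filter idea can be sketched with a plain matched filter applied in the frequency domain. A real OT-MACH filter is an optimized trade-off built from many training images; this single-template version only illustrates the correlate-then-threshold step, and the threshold value is a hypothetical parameter.

    ```python
    import numpy as np

    def correlate_and_detect(image, template, threshold=0.5):
        """Frequency-domain cross-correlation with a single template,
        followed by thresholding of the correlation peak. A simplified
        stand-in for the OT-MACH detection stage."""
        ih, iw = image.shape
        F = np.fft.fft2(image)
        H = np.fft.fft2(template, s=(ih, iw))          # zero-padded template
        corr = np.real(np.fft.ifft2(F * np.conj(H)))   # circular correlation
        corr /= corr.max() if corr.max() > 0 else 1.0  # normalize peak to 1
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return peak if corr[peak] >= threshold else None

    # Plant the template inside an empty image; the peak locates it.
    img = np.zeros((32, 32))
    tmpl = np.ones((4, 4))
    img[10:14, 20:24] = 1.0
    roi = correlate_and_detect(img, tmpl)
    ```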

  18. NASA's Exploration Architecture

    NASA Technical Reports Server (NTRS)

    Tyburski, Timothy

    2006-01-01

    A Bold Vision for Space Exploration includes: 1) Complete the International Space Station; 2) Safely fly the Space Shuttle until 2010; 3) Develop and fly the Crew Exploration Vehicle no later than 2012; 4) Return to the moon no later than 2020; 5) Extend human presence across the solar system and beyond; 6) Implement a sustained and affordable human and robotic program; 7) Develop supporting innovative technologies, knowledge, and infrastructures; and 8) Promote international and commercial participation in exploration.

  19. Effects of body lean and visual information on the equilibrium maintenance during stance.

    PubMed

    Duarte, Marcos; Zatsiorsky, Vladimir M

    2002-09-01

    Maintenance of equilibrium was tested in conditions in which humans assume different leaning postures during upright standing. Subjects (n=11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of the visual target, no vision after target presentation, and simultaneous visual feedback of the COP. The following variables were used to describe equilibrium maintenance: the mean COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed, relative to when vision was present. Without vision, drifts in the COP data were observed, which were larger for COP targets farther from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that visual information is used by the postural control system at both short and long time scales.
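    Two of the reported measures, the mean COP position and the median frequency of the COP power spectrum, can be computed as in this minimal sketch. It omits detrending, windowing, and the sway-ellipse fit, and the sampling values in the example are hypothetical rather than taken from the study.

    ```python
    import numpy as np

    def cop_measures(cop_signal, fs):
        """Mean position and median frequency of one COP coordinate.
        The median frequency is the frequency below which half of the
        spectral power lies. Minimal sketch: no detrending or windowing."""
        x = np.asarray(cop_signal, dtype=float)
        mean_pos = x.mean()
        spec = np.abs(np.fft.rfft(x - mean_pos)) ** 2       # unnormalized PSD
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        cum = np.cumsum(spec)
        median_freq = freqs[np.searchsorted(cum, cum[-1] / 2.0)]
        return mean_pos, median_freq

    # A pure 1 Hz sway about 2.0 cm, sampled at 100 Hz for 10 s:
    t = np.arange(0, 10, 0.01)
    mean_pos, mf = cop_measures(2.0 + np.sin(2 * np.pi * 1.0 * t), fs=100)
    # mean_pos recovers 2.0 and mf recovers 1.0 Hz
    ```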

  20. Hypothesis on human eye perceiving optical spectrum rather than an image

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Szu, Harold

    2015-05-01

    It is common knowledge that we see the world because our eyes can perceive an optical image. A digital camera seems a good example of simulating the eye's imaging system. However, signal sensing and imaging on the human retina is very complicated. There are at least five layers (of neurons) along the signal pathway: photoreceptors (cones and rods), bipolar, horizontal, amacrine, and ganglion cells. To sense an optical image, it seems that photoreceptors (as sensors) plus ganglion cells (converting to electrical signals for transmission) would be good enough. Image sensing does not require a non-uniform distribution of photoreceptors like the fovea. There are some challenging questions; for example, why don't we notice the "blind spots" (where nerve fibers exit the eye)? A similar situation occurs in glaucoma patients, who do not notice their vision loss until 50% or more of the nerve fibers have died. Our hypothesis is that the human retina initially senses the optical (i.e., Fourier) spectrum rather than an optical image. Due to the symmetry property of the Fourier spectrum, the signal loss from a blind spot or from dead nerve fibers (in glaucoma patients) can be recovered. The eye's logarithmic response to input light intensity is much like displaying Fourier magnitude. The optics and structures of the human eye satisfy the needs of optical Fourier spectrum sampling. It remains unclear where and how an inverse Fourier transform would be performed in the human vision system to obtain an optical image. Phase retrieval techniques in the compressive sensing domain enable image reconstruction even without phase inputs. A spectrum-based imaging system could potentially tolerate up to 50% bad sensors (pixels), adapt to a large dynamic range (with logarithmic response), etc.
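    The symmetry argument in this hypothesis rests on the Hermitian redundancy of the Fourier spectrum of a real-valued image: F(-u, -v) = conj(F(u, v)). A lost spectrum sample can therefore be restored exactly from its surviving mirror sample, as this sketch (an illustration of the idea, not code from the paper) shows.

    ```python
    import numpy as np

    def restore_by_symmetry(spectrum):
        """Fill lost spectrum samples of a real-valued image using
        Hermitian symmetry. Lost samples are marked NaN; each is restored
        from its mirror sample when that mirror survived."""
        S = spectrum.copy()
        h, w = S.shape
        for u in range(h):
            for v in range(w):
                if np.isnan(S[u, v]):
                    mirror = S[(-u) % h, (-v) % w]     # F(-u, -v)
                    if not np.isnan(mirror):
                        S[u, v] = np.conj(mirror)      # conj restores F(u, v)
        return S

    rng = np.random.default_rng(0)
    img = rng.random((16, 16))            # a real-valued "retinal" image
    F = np.fft.fft2(img)
    F_lost = F.copy()
    F_lost[3, 5] = np.nan                 # simulate a lost sample ("blind spot")
    recovered = np.fft.ifft2(restore_by_symmetry(F_lost)).real
    # the image is reconstructed exactly from the damaged spectrum
    ```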

  1. The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.

    PubMed

    Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F

    2016-10-01

    Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.

  2. Human Exploration of Mars Design Reference Architecture 5.0

    NASA Technical Reports Server (NTRS)

    Drake, Bret G.; Hoffman, Stephen J.; Beaty, David W.

    2009-01-01

    This paper provides a summary of the 2007 Mars Design Reference Architecture 5.0 (DRA 5.0), which is the latest in a series of NASA Mars reference missions. It provides a vision of one potential approach to human Mars exploration, including how Constellation systems can be used. The reference architecture provides a common framework for future planning of system concepts, technology development, and operational testing, as well as Mars robotic missions, research conducted on the International Space Station, and future lunar exploration missions. This summary of Mars DRA 5.0 provides an overview of the overall mission approach, surface strategy, and exploration goals, as well as the key systems and challenges for the first three human missions to Mars.

  3. Educating on a Human Scale: Visions for a Sustainable World. Proceedings of the Human Scale Education Conference (Oxford, England, September 26, 1998).

    ERIC Educational Resources Information Center

    Carnie, Fiona, Ed.

    Human Scale Education's 1998 conference addressed the creation of schools and learning experiences to foster in young people the attitudes and skills to shape a fairer and more sustainable world. "Values and Vision in Business and Education" (Anita Roddick) argues that educational curricula must contain the language and action of social…

  4. Frame Rate and Human Vision

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2012-01-01

    To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.

  5. Can Humans Fly? Action Understanding with Multiple Classes of Actors

    DTIC Science & Technology

    2015-06-08

    recognition using structure from motion point clouds. In European Conference on Computer Vision, 2008. [5] R. Caruana. Multitask learning. Machine Learning. ... Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. [12] L. Gorelick, M. Blank

  6. Augmented reality with image registration, vision correction and sunlight readability via liquid crystal devices.

    PubMed

    Wang, Yu-Jen; Chen, Po-Ju; Liang, Xiao; Lin, Yi-Hsin

    2017-03-27

    Augmented reality (AR), which uses computer-aided projected information to augment our senses, has an important impact on human life, especially for elderly people. However, there are three major challenges regarding the optical system in AR: registration, vision correction, and readability under strong ambient light. Here, we solve all three challenges simultaneously for the first time, using two liquid crystal (LC) lenses and a polarizer-free attenuator integrated into an optical-see-through AR system. One of the LC lenses is used to electrically adjust the position of the projected virtual image, the so-called registration. The other LC lens, with a larger aperture and polarization-independent characteristics, is in charge of vision correction, such as for myopia and presbyopia. The linearity of the lens powers of the two LC lenses is also discussed. The readability of virtual images under strong ambient light is achieved by the electrically switchable transmittance of the LC attenuator, which originates from light scattering and light absorption. The concept demonstrated in this paper could be further extended to other electro-optical devices, as long as the devices exhibit the capability of phase and amplitude modulation.
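The registration mechanism in this record (electrically shifting the projected virtual image with a tunable lens) follows from simple thin-lens vergence arithmetic. A minimal sketch, with an illustrative display distance and lens powers that are not taken from the paper:

```python
def virtual_image_distance_m(object_dist_m, lens_power_D):
    """Thin-lens vergence: output vergence = input vergence + lens power (diopters)."""
    v_in = -1.0 / object_dist_m          # diverging light from the micro-display
    v_out = v_in + lens_power_D
    if v_out >= 0:
        raise ValueError("image would be real, not a projected virtual image")
    return -1.0 / v_out                  # distance of the virtual image (meters)

# Display 5 cm from the lens: raising the LC lens power from 18 D to 19 D
# pushes the virtual image from 0.5 m out to 1.0 m, i.e., registration.
print(virtual_image_distance_m(0.05, 18.0))  # 0.5
print(virtual_image_distance_m(0.05, 19.0))  # 1.0
```

For vision correction, a second thin lens in contact simply adds its power, which is why biasing the larger LC lens can compensate a user's refractive error.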

  7. Design and Validation of Exoskeleton Actuated by Soft Modules toward Neurorehabilitation-Vision-Based Control for Precise Reaching Motion of Upper Limb.

    PubMed

    Oguntosin, Victoria W; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J; Kawamura, Sadao; Hayashi, Yoshikatsu

    2017-01-01

    We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by Soft Modules (EAsoftM). Integrating the 3D-printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion using a vision-based control law. The EAsoftM can support reaching motion with compliance, realized by the soft materials and pneumatic actuation. In addition, a vision-based control law has been proposed for precise control over the target reaching motion at the millimeter scale. Soft actuators aimed at rehabilitation exercise have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motions. The proposed EAsoftM offers one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. The proposed integrated system will be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems must be customized for individuals with specific motor impairments.

  8. Design and Validation of Exoskeleton Actuated by Soft Modules toward Neurorehabilitation—Vision-Based Control for Precise Reaching Motion of Upper Limb

    PubMed Central

    Oguntosin, Victoria W.; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J.; Kawamura, Sadao; Hayashi, Yoshikatsu

    2017-01-01

    We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by Soft Modules (EAsoftM). Integrating the 3D-printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion using a vision-based control law. The EAsoftM can support reaching motion with compliance, realized by the soft materials and pneumatic actuation. In addition, a vision-based control law has been proposed for precise control over the target reaching motion at the millimeter scale. Soft actuators aimed at rehabilitation exercise have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motions. The proposed EAsoftM offers one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. The proposed integrated system will be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems must be customized for individuals with specific motor impairments. PMID:28736514

  9. On-line dimensional measurement of small components on the eyeglasses assembly line

    NASA Astrophysics Data System (ADS)

    Rosati, G.; Boschetti, G.; Biondi, A.; Rossi, A.

    2009-03-01

    Dimensional measurement of the subassemblies at the beginning of the assembly line is a crucial process in the eyeglasses industry, since even small manufacturing errors in the components can lead to very visible defects in the final product. For this reason, all subcomponents of the eyeglass are verified before the assembly process begins, either with 100% inspection or on a statistical basis. Inspection is usually performed by human operators, at high cost and with a degree of repeatability that is not always satisfactory. This paper presents a novel on-line measuring system for dimensional verification of small metallic subassemblies for the eyeglasses industry. The machine vision system proposed, which was designed to be used at the beginning of the assembly line, could also be employed for Statistical Process Control (SPC) by the manufacturer of the subassemblies. The automated system is based on artificial vision and exploits two CCD cameras and an anthropomorphic robot to inspect and manipulate the subcomponents of the eyeglass. Each component is recognized by the first camera in a fairly large workspace, picked up by the robot, and placed in the small vision field of the second camera, which performs the measurement process. Finally, the part is palletized by the robot. The system can easily be taught by the operator, first by placing the template object in the vision field of the measurement camera (for dimensional data acquisition) and then by instructing the robot via the Teaching Control Pendant within the vision field of the first camera (for pick-up transformation acquisition). The major problem we dealt with is that the shapes and dimensions of the subassemblies can vary over a fairly wide range, while different positionings of the same component can look very similar to one another. For this reason, a specific shape recognition procedure was developed.
In the paper, the whole system is presented together with first experimental lab results.
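The template-teaching step described in this record lends itself to a very small numeric sketch: calibrate a mm-per-pixel scale from a template of known size, then convert pixel measurements of each part to millimeters and check them against tolerance. All names and numbers below are illustrative, not from the paper.

```python
def calibrate_mm_per_pixel(template_width_mm, template_width_px):
    """Derive the measurement scale from a template object of known size."""
    return template_width_mm / template_width_px

def measure_part_mm(part_width_px, mm_per_pixel, nominal_mm, tolerance_mm):
    """Convert a pixel measurement to mm and check it against tolerance."""
    width_mm = part_width_px * mm_per_pixel
    in_tolerance = abs(width_mm - nominal_mm) <= tolerance_mm
    return width_mm, in_tolerance

# Teach with a 10 mm template seen as 500 px, then measure a part at 498 px.
scale = calibrate_mm_per_pixel(template_width_mm=10.0, template_width_px=500)
width, ok = measure_part_mm(part_width_px=498, mm_per_pixel=scale,
                            nominal_mm=10.0, tolerance_mm=0.05)
print(round(width, 3), ok)   # 9.96 True
```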

  10. Wearable Improved Vision System for Color Vision Deficiency Correction

    PubMed Central

    Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria

    2017-01-01

    Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in subjects with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03), from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the color vision test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827
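As a rough illustration of the kind of recoloring such an aid can apply (the paper's own algorithm is not specified in this record), here is a daltonization-style sketch for protanopia using the commonly cited RGB↔LMS approximation: simulate the color as the deficient observer would see it, then shift the lost component into the green and blue channels.

```python
import numpy as np

# Illustrative daltonization for protanopia; NOT the paper's algorithm.
# Colors are linear-RGB vectors in [0, 1].
RGB2LMS = np.array([[17.8824, 43.5161, 4.11935],
                    [3.45565, 27.1554, 3.86714],
                    [0.0299566, 0.184309, 1.46709]])
LMS2RGB = np.linalg.inv(RGB2LMS)
SIM_PROTAN = np.array([[0.0, 2.02344, -2.52581],   # L channel rebuilt from M and S
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
ERR_SHIFT = np.array([[0.0, 0.0, 0.0],             # lost red information is
                      [0.7, 1.0, 0.0],             # pushed into green and blue
                      [0.7, 0.0, 1.0]])

def simulate_protanopia(rgb):
    """Model what a protanope is assumed to perceive for a linear-RGB color."""
    return LMS2RGB @ SIM_PROTAN @ RGB2LMS @ rgb

def daltonize(rgb):
    """Re-map a color so the information lost to the deficiency survives."""
    error = rgb - simulate_protanopia(rgb)
    return rgb + ERR_SHIFT @ error

red = np.array([1.0, 0.0, 0.0])
print(simulate_protanopia(red))   # noticeably different from pure red
print(daltonize(red))             # R unchanged; G and B carry the lost signal
```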

  11. Micro-Inspector Spacecraft for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Mueller, Juergen; Alkalai, Leon; Lewis, Carol

    2005-01-01

    NASA is seeking to embark on a new set of human and robotic exploration missions back to the Moon, to Mars, and to destinations beyond. Key strategic technical challenges will need to be addressed to realize this new vision for space exploration, including improvements in safety and reliability to improve the robustness of space operations. Under sponsorship by NASA's Exploration Systems Mission Directorate, the Jet Propulsion Laboratory (JPL), together with its partners in government (NASA Johnson Space Center) and industry (Boeing, Vacco Industries, Ashwin-Ushas Inc.), is developing an ultra-low-mass (<3.0 kg) free-flying micro-inspector spacecraft in an effort to enhance safety and reduce risk in future human and robotic exploration missions. The micro-inspector will provide remote vehicle inspections to ensure safety and reliability, or provide monitoring of in-space assembly. The micro-inspector spacecraft represents an inherently modular system addition that can improve safety and support multiple host vehicles in multiple applications. On human missions, it may help extend the reach of human explorers, decreasing human EVA time to reduce mission cost and risk. The micro-inspector development is the continuation of an effort begun under NASA's Office of Aerospace Technology Enabling Concepts and Technology (ECT) program. The micro-inspector uses miniaturized celestial sensors; relies on a combination of solar power and batteries (allowing for unlimited operation in the sun and up to 4 hours in shade); utilizes a low-pressure, low-leakage liquid butane propellant system for added safety; and includes multi-functional structure for high system-level integration and miniaturization. Versions of this system to be designed and developed under the H&RT program will include additional capabilities for on-board vision-based navigation, spacecraft inspection, and collision avoidance, and will be demonstrated in a ground-based, space-related environment.
These features make the micro-inspector design unique in its ability to serve crewed as well as robotic spacecraft, well beyond Earth orbit and into arenas such as robotic missions where human teleoperation is not locally available.

  12. The NASA Bed Rest Project

    NASA Technical Reports Server (NTRS)

    Rhodes, Bradley; Meck, Janice

    2005-01-01

    NASA's National Vision for Space Exploration includes human travel beyond low Earth orbit and the ultimate safe return of the crews. Crucial to fulfilling the vision is the successful and timely development of countermeasures for the adverse physiological effects on human systems caused by long-term exposure to the microgravity environment. Limited access to in-flight resources for the foreseeable future increases NASA's reliance on ground-based analogs to simulate these effects of microgravity. The primary analog for human-based research will be head-down bed rest. By this approach NASA will be able to evaluate countermeasures in large sample sizes, perform preliminary evaluations of proposed in-flight protocols, and assess the utility of individual or combined strategies before flight resources are requested. In response to this critical need, NASA has created the Bed Rest Project at the Johnson Space Center. The Project establishes the infrastructure and processes to provide a long-term capability for standardized domestic bed rest studies and countermeasure development. The Bed Rest Project design takes a comprehensive, interdisciplinary, integrated approach that reduces the resource overhead of one investigator for one campaign. In addition to integrating studies operationally relevant to exploration, the Project addresses other new Vision objectives, namely: 1) interagency cooperation with the NIH allows for Clinical Research Center (CRC) facility sharing to the benefit of both agencies; 2) collaboration with our International Partners expands countermeasure development opportunities for foreign and domestic investigators and promotes consistency in approach and results; 3) to the greatest degree possible, the Project also advances research by clinicians and academia alike to encourage return-to-Earth benefits.
This paper will describe the Project's top-level goals, organization, and relationship to other Exploration Vision Projects; its implementation strategy; Project deliverables and schedules; and the status of bed rest campaigns presently underway.

  13. Dual-Use Space Technology Transfer Conference and Exhibition. Volume 2

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Compiler)

    1994-01-01

    This is the second volume of papers presented at the Dual-Use Space Technology Transfer Conference and Exhibition held at the Johnson Space Center February 1-3, 1994. Possible technology transfers covered during the conference were in the areas of information access; innovative microwave and optical applications; materials and structures; marketing and barriers; intelligent systems; human factors and habitation; communications and data systems; business process and technology transfer; software engineering; biotechnology and advanced bioinstrumentation; communications signal processing and analysis; medical care; applications derived from control center data systems; human performance evaluation; technology transfer methods; mathematics, modeling, and simulation; propulsion; software analysis and decision tools; systems/processes in human support technology; networks, control centers, and distributed systems; power; rapid development; perception and vision technologies; integrated vehicle health management; automation technologies; advanced avionics; and robotics technologies.

  14. Availability of vision and tactile gating: vision enhances tactile sensitivity.

    PubMed

    Colino, Francisco L; Lee, Ji-Hang; Binsted, Gordon

    2017-01-01

    A multitude of events bombards our sensory systems at every moment of our lives. Thus, it is important for the sensory and motor cortices to gate unimportant events. Tactile suppression is a well-known phenomenon defined as a reduced ability to detect tactile events on the skin before and during movement. Previous experiments (Buckingham et al. in Exp Brain Res 201(3):411-419, 2010; Colino et al. in Physiol Rep 2(3):e00267, 2014) found that detection rates decrease just prior to and during finger abduction, and decrease with the proximity of the moving effector. However, what effect does vision have on tactile gating? There is ample evidence (see Serino and Haggard in Neurosci Biobehav Rev 34:224-236, 2010) of increased tactile acuity when participants see their limbs. The present study examined how tactile detection changes with visual condition (vision/no vision). Ten human participants used their right hand to reach for and grasp a cylinder. Tactors were attached to the index finger and the forearm of both the right and left arms and vibrated at various epochs relative to a "go" tone. The results replicate previous findings from our laboratory (Colino et al. in Physiol Rep 2(3):e00267, 2014). Also, tactile acuity decreased when participants did not have vision. These results indicate that vision affects somatosensation via inputs from parietal areas (Konen and Haggard in Cereb Cortex 24(2):501-507, 2014), but does so in a reach-to-grasp context.

  15. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system developed at the University of Tokyo. The multi-window vision system is unique in that it processes visual information only inside local-area windows. These local-area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game and, later, using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.
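The window-based tracking idea from this record is easy to sketch: process only the pixels inside a small window, and re-center the window on the intensity centroid each frame. The toy "ball," frame size, and window size below are illustrative, not the Tokyo hardware.

```python
import numpy as np

def track_in_window(frame, center, half):
    """Re-center the attention window on the bright-pixel centroid inside it."""
    r, c = center
    win = frame[r - half:r + half + 1, c - half:c + half + 1]
    ys, xs = np.nonzero(win > 0.5)        # only local pixels are examined
    if len(ys) == 0:
        return center                      # nothing salient: keep the window
    return (r - half + int(round(ys.mean())),
            c - half + int(round(xs.mean())))

# A bright "ball" moving one pixel right per frame on a 32x32 screen.
center = (16, 10)
for t in range(5):
    frame = np.zeros((32, 32))
    frame[16, 10 + t] = 1.0
    center = track_in_window(frame, center, half=3)
print(center)  # (16, 14)
```

Because only a (2·half+1)² patch is touched per frame, the cost is independent of the full frame size, which is the source of the speed the paper reports.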

  16. Design and characterization of a tunable opto-mechatronic system to mimic the focusing and the regulation of illumination in the formation of images made by the human eye

    NASA Astrophysics Data System (ADS)

    Santiago-Alvarado, A.; Cruz-Félix, A.; Hernández Méndez, A.; Pérez-Maldonado, Y.; Domínguez-Osante, C.

    2015-05-01

    Tunable lenses have attracted much attention due to their potential applications in areas such as machine vision, laser projection, and ophthalmology. In this work we present the design of a tunable opto-mechatronic system capable of focusing and of regulating the entrance illumination, mimicking the functions performed by the iris and the crystalline lens of the human eye in the formation of images. A solid elastic lens made of PDMS is used to mimic the crystalline lens, and an automatic diaphragm is used to mimic the iris. A characterization of the system has also been performed; standard luminosity values for the human eye were taken into account to calibrate and validate the entrance illumination levels of the overall optical system.

  17. Two-Phase Flow Technology Developed and Demonstrated for the Vision for Exploration

    NASA Technical Reports Server (NTRS)

    Sankovic, John M.; McQuillen, John B.; Lekan, Jack F.

    2005-01-01

    NASA's vision for exploration will once again expand the bounds of human presence in the universe with planned missions to the Moon and Mars. To attain the numerous goals of this vision, NASA will need to develop technologies in several areas, including advanced power-generation and thermal-control systems for spacecraft and life support. The development of these systems will have to be demonstrated prior to implementation to ensure safe and reliable operation in reduced-gravity environments. The Two-Phase Flow Facility (TΦFFy) Project will provide the path to these enabling technologies for critical multiphase fluid products. The safety and reliability of future systems will be enhanced by addressing focused microgravity fluid physics issues associated with flow boiling, condensation, phase separation, and system stability, all of which are essential to exploration technology. The project, a multiyear effort initiated in 2004, will include concept development, normal-gravity testing (laboratories), reduced-gravity aircraft flight campaigns (NASA's KC-135 and C-9 aircraft), space-flight experimentation (International Space Station), and model development. The project will be implemented by a team from the NASA Glenn Research Center, QSS Group, Inc., ZIN Technologies, Inc., and the Extramural Strategic Research Team composed of experts from academia.

  18. Navigation of military and space unmanned ground vehicles in unstructured terrains

    NASA Technical Reports Server (NTRS)

    Lescoe, Paul; Lavery, David; Bedard, Roger

    1991-01-01

    Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.

  19. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    PubMed Central

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O.; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M.; Vitiello, Nicola

    2016-01-01

    Vision-based Pose Estimation (VPE) represents a non-invasive solution that allows smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements, and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements. PMID:26861333

  20. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation.

    PubMed

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M; Vitiello, Nicola

    2016-02-05

    Vision-based Pose Estimation (VPE) represents a non-invasive solution that allows smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements, and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements.

  1. MIT-NASA Workshop: Transformational Technologies

    NASA Technical Reports Server (NTRS)

    Mankins, J. C. (Editor); Christensen, C. B.; Gresham, E. C.; Simmons, A.; Mullins, C. A.

    2005-01-01

    As a spacefaring nation, we are at a critical juncture in the evolution of space exploration. NASA has announced its Vision for Space Exploration: a vision of returning humans to the Moon, sending robots and eventually humans to Mars, and exploring the outer solar system via automated spacecraft. However, mission concepts have become increasingly complex, with the potential to yield a wealth of scientific knowledge. Meanwhile, there are significant resource challenges to be met. Launch costs remain a barrier to routine space flight; the ever-changing fiscal and political environments can wreak havoc on mission planning; and technologies are constantly improving, so systems that were state of the art when a program began can quickly become outmoded before a mission is even launched. This Conference Publication describes the workshop and its featured presentations by world-class experts on leading-edge technologies and applications in the areas of power and propulsion; communications; automation, robotics, computing, and intelligent systems; and transformational techniques for space activities. Workshops such as this one provide an excellent medium for capturing the broadest possible array of insights and expertise, learning from researchers in universities, national laboratories, NASA field Centers, and industry to help better our future in space.

  2. Machine vision system: a tool for quality inspection of food and agricultural products.

    PubMed

    Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A

    2012-04-01

    Quality inspection of food and agricultural produce is difficult and labor intensive. At the same time, with increased expectations for food products of high quality and safety standards, the need for accurate, fast, and objective determination of these characteristics in food products continues to grow. In India, however, these operations are generally manual, which is costly as well as unreliable, because human judgment of quality factors such as appearance, flavor, nutrient content, and texture is inconsistent, subjective, and slow. Machine vision provides one alternative: an automated, non-destructive, and cost-effective technique to meet these requirements. This inspection approach, based on image analysis and processing, has found a variety of applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, the examination of grain quality and characteristics, and the quality evaluation of other food products such as bakery products, pizza, cheese, and noodles. The objective of this paper is to provide an in-depth introduction to machine vision systems, their components, and recent work reported on food and agricultural produce.
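A toy sketch of the grading idea such inspection systems implement: segment product pixels from the background by thresholding, then grade from a simple defect statistic. The thresholds and cutoffs below are invented for illustration, not from the paper.

```python
def grade_sample(gray, object_thresh=0.2, defect_thresh=0.5, max_defect_frac=0.1):
    """Return (defect_fraction, grade) for a grayscale image with values in [0, 1].

    Pixels above object_thresh belong to the product; product pixels darker
    than defect_thresh are counted as blemishes.
    """
    object_pixels = [p for row in gray for p in row if p > object_thresh]
    if not object_pixels:
        return 0.0, "reject"               # nothing segmented as product
    defects = sum(1 for p in object_pixels if p < defect_thresh)
    frac = defects / len(object_pixels)
    return frac, ("accept" if frac <= max_defect_frac else "reject")

clean = [[0.0, 0.9, 0.9], [0.0, 0.9, 0.9]]   # bright product, no dark spots
spotty = [[0.0, 0.9, 0.3], [0.0, 0.3, 0.9]]  # two dark blemishes
print(grade_sample(clean))    # (0.0, 'accept')
print(grade_sample(spotty))   # (0.5, 'reject')
```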

  3. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-01-01

    Currently available robotic systems provide limited support for CAD-based, model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work that provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions that depend on automated machine vision. A set of tools for single and multiple focal-plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and machine vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotic manufacturing, assembly, repair, and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  4. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Astrophysics Data System (ADS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-02-01

    Currently available robotic systems provide limited support for CAD-based, model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work that provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions that depend on automated machine vision. A set of tools for single and multiple focal-plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and machine vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotic manufacturing, assembly, repair, and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  5. Modeling the convergence accommodation of stereo vision for binocular endoscopy.

    PubMed

    Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin

    2018-02-01

    The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.
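    The geometry behind such a convergence model can be illustrated with a small sketch (function names and parameter values are ours for illustration, not the paper's): the vergence angle follows from the camera baseline and the fixation depth, and the disparity of a point vanishes when it lies at the fixation target.

```python
import math

def vergence_angle(baseline_mm, depth_mm):
    """Angle (radians) between the two optical axes when both cameras
    converge on a fixation target at the given depth."""
    return 2.0 * math.atan(baseline_mm / (2.0 * depth_mm))

def disparity_px(baseline_mm, focal_px, depth_mm, fixation_mm):
    """Horizontal disparity (pixels) of a point at depth_mm when the rig
    is converged on a target at fixation_mm; zero at the target itself."""
    return focal_px * baseline_mm * (1.0 / depth_mm - 1.0 / fixation_mm)
```

    Points nearer than the fixation target yield positive (crossed) disparity; steering the rectified virtual axes so that disparity vanishes at the target is what produces the immersive depth percept the abstract describes.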

  6. Revolutionary Propulsion Systems for 21st Century Aviation

    NASA Technical Reports Server (NTRS)

    Sehra, Arun K.; Shin, Jaiwon

    2003-01-01

    Air transportation for the new millennium will require revolutionary solutions to meet public demand for improving safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st Century Aircraft is to develop propulsion systems that are intelligent, virtually inaudible (outside the airport boundaries), and have near zero harmful emissions (CO2 and NOx). This vision includes intelligent engines that will be capable of adapting to changing internal and external conditions to optimally accomplish the mission with minimal human intervention. The distributed vectored propulsion will replace two to four wing mounted or fuselage mounted engines by a large number of small, mini, or micro engines, and the electric drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. Such a system will completely eliminate the harmful emissions. This paper reviews future propulsion and power concepts that are currently under development at NASA Glenn Research Center.

  7. Grounding Robot Autonomy in Emotion and Self-awareness

    NASA Astrophysics Data System (ADS)

    Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita

    Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional and autonomic aspects in social robot systems. This vision has evolved as a result of the efforts in consolidating the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach aims at obtaining universal theories of integrated (autonomic, emotional, cognitive) behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.

  8. Developing Crew Health Care and Habitability Systems for the Exploration Vision

    NASA Technical Reports Server (NTRS)

    Laurini, Kathy; Sawin, Charles F.

    2006-01-01

    This paper will discuss the specific mission architectures associated with the NASA Exploration Vision and review the challenges and drivers associated with developing crew health care and habitability systems to manage human system risks. Crew health care systems must be provided to manage crew health within acceptable limits, as well as respond to medical contingencies that may occur during exploration missions. Habitability systems must enable crew performance for the tasks necessary to support the missions. During the summer of 2005, NASA defined its exploration architecture, including blueprints for missions to the moon and to Mars. These mission architectures require research and technology development to focus on the operational risks associated with each mission, as well as the risks to long term astronaut health. This paper will review the highest priority risks associated with the various missions and discuss NASA's strategies and plans for performing the research and technology development necessary to manage the risks to acceptable levels.

  9. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  10. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  11. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision.

    PubMed

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-22

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. Under applied deformation the electronic-skin actively outputs a piezoelectric voltage, and the output signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and the electricity supply. Reliability is demonstrated over 200 light on-off cycles. The sensing-unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition by mapping multi-point UV stimuli. This self-powered vision electronic-skin, which simply mimics the human retina, may have potential application in vision substitution.
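    The readout step implied by the abstract, recovering a binary image from the 6 × 6 pixel matrix of piezoelectric outputs, can be sketched as a simple per-pixel threshold (the function name and threshold value are illustrative assumptions, not from the paper):

```python
def uv_image_from_pixels(voltages, threshold=0.5):
    """Map a 6x6 matrix of per-pixel piezoelectric readings to a binary
    image: pixels whose reading crosses the threshold are treated as
    UV-illuminated.  The threshold is an illustrative placeholder for a
    calibrated value."""
    return [[1 if v > threshold else 0 for v in row] for row in voltages]
```

    Multi-point UV stimuli then show up as the set of 1-pixels, which is the "image recognition through mapping" the abstract refers to.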

  12. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision

    NASA Astrophysics Data System (ADS)

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-01

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. Under applied deformation the electronic-skin actively outputs a piezoelectric voltage, and the output signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and the electricity supply. Reliability is demonstrated over 200 light on–off cycles. The sensing-unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition by mapping multi-point UV stimuli. This self-powered vision electronic-skin, which simply mimics the human retina, may have potential application in vision substitution.

  13. Fuzzy classification for strawberry diseases-infection using machine vision and soft-computing techniques

    NASA Astrophysics Data System (ADS)

    Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil

    2018-04-01

    Robotic agriculture requires smart and practical techniques that substitute machine intelligence for human intelligence. Strawberry is an important Mediterranean product, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should likewise be capable of vision-based disease identification. The objective of this paper is to verify in practice the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves, one that requires neither a neural network nor time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Approximating the way a human brain judges the type and degree of infection, a fuzzy decision maker classifies the leaves from images captured on-site with properties similar to those of human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer midday in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for the other segmented condition, using a typical human instant-classification approximation as the benchmark, which is higher accuracy than a human-eye identifier. The fuzzy-based classifier provides an approximate decision on whether a leaf is healthy or not.
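    A fuzzy decision maker of the kind described can be sketched with triangular membership functions on a single color feature. The feature (fraction of green pixels) and all breakpoints below are our illustrative assumptions, not the paper's tuned parameters:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_leaf(green_ratio):
    """Fuzzy decision on leaf status from the fraction of green pixels.
    Membership breakpoints are illustrative placeholders for values
    that would be optimized for the site and lighting."""
    healthy = tri(green_ratio, 0.5, 0.8, 1.01)
    infected = tri(green_ratio, -0.01, 0.2, 0.5)
    return "healthy" if healthy >= infected else "infected"
```

    Optimizing the breakpoints against human-labeled images is the "optimizing the fuzzy parameters" step the abstract credits for the reported accuracies.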

  14. Quality grading of Atlantic salmon (Salmo salar) by computer vision.

    PubMed

    Misimi, E; Erikson, U; Skavhaug, A

    2008-06-01

    In this study, we present a promising method of computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among different quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before the image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and feature extraction was then performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between the classification done by human inspectors and by computer vision. The computer vision-based method correctly classified 90% of the salmon from the data set, as compared with the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
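    The evaluation protocol, a threshold classifier scored by leave-one-out cross-validation, can be sketched in a few lines. As a simplified stand-in for the paper's LDA-derived threshold, the sketch places the threshold midway between the two class means of a single geometric feature, recomputed with the held-out fish excluded:

```python
def loo_threshold_accuracy(features, labels):
    """Leave-one-out evaluation of a 1-D threshold classifier.
    features: one geometric measurement per fish; labels: 0 or 1.
    The threshold is the midpoint of the two class means of the
    training fold (a simplification of an LDA-derived threshold)."""
    correct = 0
    for i in range(len(features)):
        train = [(f, l) for j, (f, l) in enumerate(zip(features, labels)) if j != i]
        m0 = [f for f, l in train if l == 0]
        m1 = [f for f, l in train if l == 1]
        thr = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2.0
        pred = 1 if features[i] > thr else 0
        correct += pred == labels[i]
    return correct / len(features)
```

    Leave-one-out is attractive here because grading data sets labeled by an inspector are typically small, and every fish gets used both for training and (once) for testing.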

  15. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing in natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution, obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to loss of relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work demonstrates some basic information processing for optimal information capture for head-mounted systems.
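    A direct image-to-sound mapping of the kind argued for above can be sketched as follows: columns are scanned left to right over time, each row is assigned a fixed frequency (top = high), and pixel brightness sets the amplitude of that row's tone while its column plays. All parameter values are illustrative assumptions, not the device's actual mapping:

```python
import math

def image_to_sound(image, duration_s=1.0, rate=8000,
                   f_lo=500.0, f_hi=5000.0):
    """Render an image (nested lists of brightness values in [0, 1])
    as audio samples: time encodes the column, frequency encodes the
    row, amplitude encodes brightness.  Rate and frequency range are
    illustrative placeholders."""
    rows, cols = len(image), len(image[0])
    freqs = [f_hi - r * (f_hi - f_lo) / max(rows - 1, 1) for r in range(rows)]
    n = int(duration_s * rate)
    samples = []
    for i in range(n):
        t = i / rate
        c = min(int(t / duration_s * cols), cols - 1)  # current column
        s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                for r in range(rows))
        samples.append(s / rows)  # normalize so |sample| <= 1
    return samples
```

    The point of keeping the mapping this direct is exactly the one made above: a listener can learn it, and no image content is silently filtered out before it reaches the ear.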

  16. Methods for Measuring Characteristics of Night Vision Goggles

    DTIC Science & Technology

    1993-10-01

    Task, Harry L.; Marasco, Peter L.; Hartman, Richard T., Capt; Zobel, Annette R. Crew Systems Directorate, Human Engineering Division, Wright-Patterson AFB.

  17. The orphan nuclear receptor Tlx regulates Pax2 and is essential for vision.

    PubMed

    Yu, R T; Chiang, M Y; Tanabe, T; Kobayashi, M; Yasuda, K; Evans, R M; Umesono, K

    2000-03-14

    Although the development of the vertebrate eye is well described, the number of transcription factors known to be key to this process is still limited. The localized expression of the orphan nuclear receptor Tlx in the optic cup and discrete parts of the central nervous system suggested the possible role of Tlx in the formation or function of these structures. Analyses of Tlx targeted mice revealed that, in addition to the central nervous system cortical defects, lack of Tlx function results in progressive retinal and optic nerve degeneration with associated blindness. An extensive screen of Tlx-positive and Tlx-negative P19 neural precursors identified Pax2 as a candidate target gene. This identification is significant, because Pax2 is known to be involved in retinal development in both the human and the mouse eye. We find that Pax2 is a direct target and that the Tlx binding site in its promoter is conserved between mouse and human. These studies show that Tlx is a key component of retinal development and vision and an upstream regulator of the Pax2 signaling cascade.

  18. The orphan nuclear receptor Tlx regulates Pax2 and is essential for vision

    PubMed Central

    Yu, Ruth T.; Chiang, Ming-Yi; Tanabe, Teruyo; Kobayashi, Mime; Yasuda, Kunio; Evans, Ronald M.; Umesono, Kazuhiko

    2000-01-01

    Although the development of the vertebrate eye is well described, the number of transcription factors known to be key to this process is still limited. The localized expression of the orphan nuclear receptor Tlx in the optic cup and discrete parts of the central nervous system suggested the possible role of Tlx in the formation or function of these structures. Analyses of Tlx targeted mice revealed that, in addition to the central nervous system cortical defects, lack of Tlx function results in progressive retinal and optic nerve degeneration with associated blindness. An extensive screen of Tlx-positive and Tlx-negative P19 neural precursors identified Pax2 as a candidate target gene. This identification is significant, because Pax2 is known to be involved in retinal development in both the human and the mouse eye. We find that Pax2 is a direct target and that the Tlx binding site in its promoter is conserved between mouse and human. These studies show that Tlx is a key component of retinal development and vision and an upstream regulator of the Pax2 signaling cascade. PMID:10706625

  19. Processing of Space Resources to Enable the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Curreri, Peter A.

    2006-01-01

    The NASA human exploration program, as directed by the Vision for Space Exploration (G. W. Bush, Jan. 14, 2004), includes developing methods to process materials on the Moon and beyond to enable safe and affordable human exploration. Processing space resources was first popularized (O'Neill 1976) as a technically viable, economically feasible means to build city-sized habitats and multi-gigawatt solar power satellites in Earth/Moon space. Although NASA studies found the concepts technically reasonable in the post-Apollo era (Ames 1979), the front-end costs exceeded the limits of national or corporate investment. In the last decade, analysis of space resource utilization has shown it to be economically justifiable even on a relatively small mission or commercial scenario basis. The Mars Reference Mission analysis (JSC 1997) demonstrated that production of return propellant on Mars can enable an order-of-magnitude decrease in the cost of human Mars missions. Analysis by M. Duke (2003) shows that production of propellant on the Moon for the Earth-based satellite industries can be commercially viable after a human lunar base is established. Similar economic analysis (Rapp 2005) also shows large cost benefits for lunar propellant production for Mars missions and for the use of lunar materials in the production of photovoltaic power (Freundlich 2005). Recent technologies could enable much smaller initial costs to achieve mass, energy, and life support self-sufficiency than were achievable in the 1970s. If the Exploration Vision program is executed with a front-end emphasis on space resources, it could provide a path to human self-reliance beyond Earth orbit. This path can lead to an open, non-zero-sum future for humanity, with safer human competition and limitless growth potential. This paper discusses extension of the analysis of space resource utilization to determine the minimum systems necessary for human self-sufficiency and growth off Earth. Such an approach can provide a more compelling and comprehensive path to space resource utilization.

  20. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
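    A standard building block for the receptive fields mentioned above is the difference-of-Gaussians centre-surround model of a retinal cell; a minimal sketch (sigma values are illustrative, not from the paper):

```python
import math

def dog_weight(x, y, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians centre-surround receptive-field weight
    at offset (x, y): a narrow excitatory centre Gaussian minus a
    broader inhibitory surround Gaussian."""
    def g(s):
        return math.exp(-(x * x + y * y) / (2 * s * s)) / (2 * math.pi * s * s)
    return g(sigma_c) - g(sigma_s)
```

    Convolving an image with such weights responds strongly to local contrast and weakly to uniform illumination, which is why this kernel recurs across biological and machine vision front ends.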

  1. Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision

    NASA Astrophysics Data System (ADS)

    Rojer, Alan S.; Schwartz, Eric L.

    1991-02-01

    Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems, as opposed to the general nature of biological vision architectures. In the present paper we will provide a review of current knowledge of primate spatial vision design parameters and will present recent experimental and modeling work from our lab which demonstrates that a numerical conformal mapping, which is a refinement of our previous complex logarithmic model, provides the best current summary of this feature of the primate visual system. In this paper we will review recent work from our laboratory which has characterized some of the spatial architectures of the primate visual system. In particular, we will review experimental and modeling studies which indicate that: (1) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping whose simplest analytic approximation is the complex logarithm function; and (2) the columnar sub-structure of primate visual cortex can be well summarized by a model based on band-pass filtered white noise. We will also refer to ongoing work in our lab which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the "proto-column" algorithm. This work provides a reference point for current engineering approaches to novel architectures for
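    The complex-logarithm approximation mentioned above can be sketched directly: a retinal position is treated as a complex number z and mapped to a cortical coordinate w = log(z + a), where the offset a keeps the map finite at the fovea (the value of a below is illustrative, not the paper's fitted parameter):

```python
import cmath

def cortical_map(x, y, a=0.7):
    """Complex-logarithm approximation of the retino-cortical mapping,
    w = log(z + a).  The real part of w grows with log-eccentricity,
    the imaginary part encodes polar angle; 'a' is an illustrative
    foveal offset."""
    w = cmath.log(complex(x, y) + a)
    return w.real, w.imag
```

    The practical appeal for space-variant sensor design is that equal steps in cortical distance correspond to multiplicative steps in eccentricity, concentrating resolution at the centre of gaze.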

  2. An Operationally Based Vision Assessment Simulator for Domes

    NASA Technical Reports Server (NTRS)

    Archdeacon, John; Gaska, James; Timoner, Samson

    2012-01-01

    The Operational Based Vision Assessment (OBVA) simulator was designed and built by NASA and the United States Air Force (USAF) to provide the Air Force School of Aerospace Medicine (USAFSAM) with a scientific testing laboratory to study human vision and testing standards in an operationally relevant environment. This paper describes the general design objectives and implementation characteristics of the simulator visual system being created to meet these requirements. A key design objective for the OBVA research simulator is to develop a real-time computer image generator (IG) and display subsystem that can display and update at 120 frames per second (design target), or at a minimum, 60 frames per second, with minimal transport delay using commercial off-the-shelf (COTS) technology. There are three key parts of the OBVA simulator that are described in this paper: i) the real-time computer image generator, ii) the various COTS technology used to construct the simulator, and iii) the spherical dome display and real-time distortion correction subsystem. We describe the various issues, possible COTS solutions, and remaining problem areas identified by NASA and the USAF while designing and building the simulator for future vision research. We also describe the critically important relationship of the physical display components, including distortion correction for the dome, consistent with an objective of minimizing latency in the system. The performance of the automatic calibration system used in the dome is also described. Various recommendations for possible future implementations are also discussed.
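    The frame-rate targets translate directly into a per-frame time budget and a transport-delay figure; a trivial sketch of that arithmetic (the pipeline-depth parameter is our illustrative assumption, not an OBVA specification):

```python
def frame_budget_ms(fps):
    """Time available to generate and display each frame."""
    return 1000.0 / fps

def transport_delay_ms(fps, pipeline_frames):
    """End-to-end latency if the image-generator pipeline holds
    pipeline_frames frames in flight (illustrative model)."""
    return pipeline_frames * frame_budget_ms(fps)
```

    At the 120 fps design target each frame must be produced in about 8.3 ms, and every extra frame of pipelining costs that much again in latency, which is why the design pushes frame rate up while keeping the pipeline shallow.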

  3. Human Exploration of Mars Design Reference Architecture 5.0

    NASA Technical Reports Server (NTRS)

    Drake, Bret G.

    2010-01-01

    This paper provides a summary of the Mars Design Reference Architecture 5.0 (DRA 5.0), which is the latest in a series of NASA Mars reference missions. It provides a vision of one potential approach to human Mars exploration. The reference architecture provides a common framework for future planning of systems concepts, technology development, and operational testing, as well as Mars robotic missions, research conducted on the International Space Station, and future lunar exploration missions. This summary of the Mars DRA 5.0 provides an overview of the overall mission approach, surface strategy and exploration goals, as well as the key systems and challenges for the first three human missions to Mars.

  4. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    NASA Astrophysics Data System (ADS)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side mounted cameras to perform a 3D-reconstruction-from-monocular-vision technique that updates a 3D model to the most current state of the indoor environment. Using frames from the front mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes picked to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as familiarize itself with regular faces and actions to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.

  5. The Road to Certainty and Back.

    PubMed

    Westheimer, Gerald

    2016-10-14

    The author relates his intellectual journey from eye-testing clinician to experimental vision scientist. Starting with the quest for underpinning in physics and physiology of vague clinical propositions and of psychology's acceptance of thresholds as "fuzzy-edged," and a long career pursuing a reductionist agenda in empirical vision science, his journey led to the realization that the full understanding of human vision cannot proceed without factoring in an observer's awareness, with its attendant uncertainty and open-endedness. He finds support in the loss of completeness, finality, and certainty revealed in fundamental twentieth-century formulations of mathematics and physics. Just as biology prospered with the introduction of the emergent, nonreductionist concepts of evolution, vision science has to become comfortable accepting data and receiving guidance from human observers' conscious visual experience.

  6. OBPR Product Lines, Human Research Initiative, and Physics Roadmap for Exploration

    NASA Technical Reports Server (NTRS)

    Israelsson, Ulf

    2004-01-01

    The pace of change has increased at NASA. OBPR's focus is now on the human interface as it relates to the new Exploration vision. The fundamental physics community must demonstrate how we can contribute. Many opportunities exist for physicists to participate in addressing NASA's cross-disciplinary exploration challenges: a) physicists can contribute to elucidating basic operating principles for complex biological systems; b) physics technologies can contribute to developing miniature sensors and systems required for manned missions to Mars. NASA Codes other than OBPR may be viable sources of funding for physics research.

  7. Food color is in the eye of the beholder: the role of human trichromatic vision in food evaluation.

    PubMed

    Foroni, Francesco; Pergola, Giulio; Rumiati, Raffaella Ida

    2016-11-14

    Non-human primates evaluate food quality based on brightness of red and green shades of color, with red signaling higher energy or greater protein content in fruits and leaves. Despite the strong association between food and other sensory modalities, humans, too, estimate critical food features, such as calorie content, from vision. Previous research primarily focused on the effects of color on taste/flavor identification and intensity judgments. However, whether evaluations of perceived calorie content and arousal in humans are biased by color has received comparatively less attention. In this study we showed that the color content of food images predicts the arousal and perceived calorie content reported when viewing food, even when confounding variables were controlled for. Specifically, arousal positively co-varied with red-brightness, while green-brightness was negatively associated with arousal and perceived calorie content. This result holds for a large array of foods comprising natural food, where color likely predicts calorie content, and transformed food, where color is instead poorly diagnostic of energy content. Importantly, this pattern does not emerge with nonfood items. We conclude that in humans visual inspection of food is central to its evaluation and seems to partially engage the same basic system as in non-human primates.
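    The color predictors in such a study reduce to simple per-channel statistics of each food image; a minimal sketch of extracting them (the function name and plain-list image format are our assumptions, not the study's pipeline):

```python
def channel_brightness(image):
    """Mean red and green brightness of an RGB image, given as nested
    lists of (r, g, b) tuples -- the kind of predictors related to
    arousal and perceived calorie content in studies like this one."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    red = sum(p[0] for p in pixels) / n
    green = sum(p[1] for p in pixels) / n
    return red, green
```

    These two scalars per image can then be entered into a regression against the viewers' arousal and calorie ratings, with other image properties as covariates.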

  8. Human Operator Interface with FLIR Displays.

    DTIC Science & Technology

    1980-03-01

    model (Ratches et al., 1976) used to evaluate FLIR system performance. ...The research...the minimum resolvable temperature (MRT) paradigm to test two modeled FLIR systems. Twelve male subjects with 20/20 uncorrected vision served as...varying levels of size, contrast, noise, and MTF. The test results were compared with the NVL predictive model (Ratches et al., 1975) used to

  9. Haptic Exploration in Humans and Machines: Attribute Integration and Machine Recognition/Implementation.

    DTIC Science & Technology

    1988-04-30

    haptic hand, touch, vision, robot, object recognition, categorization ...established that the haptic system has remarkable capabilities for object recognition. We define haptics as purposive touch. The basic tactual system...gathered ratings of the importance of dimensions for categorizing common objects by touch. Texture and hardness ratings strongly co-vary, which is

  10. Alternatives for Future U.S. Space-Launch Capabilities

    DTIC Science & Technology

    2006-10-01

    directive issued on January 14, 2004—called the new Vision for Space Exploration (VSE)—set out goals for future exploration of the solar system using...of the solar system using manned spacecraft. Among those goals was a proposal to return humans to the moon no later than 2020. The ultimate goal...U.S. launch capacity exclude the Sea Launch system operated by Boeing in partnership with RSC-Energia (based in Moscow), Kvaerner ASA (based in Oslo

  11. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  12. Comparing visual representations across human fMRI and computational vision

    PubMed Central

    Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.

    2013-01-01

    Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
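The representational dissimilarity analysis cited above compares matrices of pairwise pattern dissimilarities rather than raw responses. A minimal sketch, with hypothetical response patterns and 1 - Pearson r as the dissimilarity measure (the published analyses use far larger stimulus sets and rank correlations):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(patterns):
    """Upper-triangle dissimilarities (1 - r) between response patterns."""
    return [1 - pearson(patterns[i], patterns[j])
            for i in range(len(patterns)) for j in range(i + 1, len(patterns))]

# Hypothetical responses to 3 objects: neural patterns vs. model features.
neural = [[1.0, 0.0, 0.5], [0.9, 0.1, 0.5], [0.0, 1.0, 0.5]]
model = [[2.0, 0.1, 1.0], [1.8, 0.3, 1.0], [0.1, 2.0, 1.0]]
similarity = pearson(rdm(neural), rdm(model))
print(similarity > 0.9)  # the two representations largely agree
```

Comparing RDMs rather than patterns is what lets a computer vision model and fMRI voxels, which live in incommensurable feature spaces, be matched at all.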

  13. IEEE 1982. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-01-01

    The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.

  14. Determining consequences of retinal membrane guanylyl cyclase (RetGC1) deficiency in human Leber congenital amaurosis en route to therapy: residual cone-photoreceptor vision correlates with biochemical properties of the mutants

    PubMed Central

    Jacobson, Samuel G.; Cideciyan, Artur V.; Peshenko, Igor V.; Sumaroka, Alexander; Olshevskaya, Elena V.; Cao, Lihui; Schwartz, Sharon B.; Roman, Alejandro J.; Olivares, Melani B.; Sadigh, Sam; Yau, King-Wai; Heon, Elise; Stone, Edwin M.; Dizhoor, Alexander M.

    2013-01-01

    The GUCY2D gene encodes retinal membrane guanylyl cyclase (RetGC1), a key component of the phototransduction machinery in photoreceptors. Mutations in GUCY2D cause Leber congenital amaurosis type 1 (LCA1), an autosomal recessive human retinal blinding disease. The effects of RetGC1 deficiency on human rod and cone photoreceptor structure and function are currently unknown. To move LCA1 closer to clinical trials, we characterized a cohort of patients (ages 6 months to 37 years) with GUCY2D mutations. In vivo analyses of retinal architecture indicated intact rod photoreceptors in all patients but abnormalities in foveal cones. By functional phenotype, there were patients with and those without detectable cone vision. Rod vision could be retained and did not correlate with the extent of cone vision or age. In patients without cone vision, rod vision functioned unsaturated under bright ambient illumination. In vitro analyses of the mutant alleles showed that in addition to the major truncation of the essential catalytic domain in RetGC1, some missense mutations in LCA1 patients result in a severe loss of function by inactivating its catalytic activity and/or ability to interact with the activator proteins, GCAPs. The differences in rod sensitivities among patients were not explained by the biochemical properties of the mutants. However, the RetGC1 mutant alleles with remaining biochemical activity in vitro were associated with retained cone vision in vivo. We postulate a relationship between the level of RetGC1 activity and the degree of cone vision abnormality, and argue for cone function being the efficacy outcome in clinical trials of gene augmentation therapy in LCA1. PMID:23035049

  15. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
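Of the three stages named (enhancement, registration, fusion), the fusion step, a per-pixel weighted sum of already-registered frames, is the simplest to sketch. The weights and toy 2x2 frames below are illustrative assumptions, not the flight system's values.

```python
def fuse(images, weights):
    """Per-pixel weighted-sum fusion of pre-registered, same-size grayscale images.

    images: list of 2-D lists; weights: one scalar per image (should sum to 1).
    """
    assert len(images) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    rows, cols = len(images[0]), len(images[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for img, w in zip(images, weights):
        for r in range(rows):
            for c in range(cols):
                fused[r][c] += w * img[r][c]
    return fused

# Two toy 2x2 "sensor" frames, fused 70/30 (weights are illustrative).
a = [[0.0, 1.0], [1.0, 0.0]]
b = [[1.0, 0.0], [0.0, 1.0]]
result = fuse([a, b], [0.7, 0.3])
print(result)  # -> [[0.3, 0.7], [0.7, 0.3]]
```

In a real pipeline the weights could be tuned per sensor or per region; on the DSP the same arithmetic would be vectorized rather than looped.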

  16. A blind human expert echolocator shows size constancy for objects perceived by echoes.

    PubMed

    Milne, Jennifer L; Anello, Mimma; Goodale, Melvyn A; Thaler, Lore

    2015-01-01

    Some blind humans make clicking noises with their mouth and use the reflected echoes to perceive objects and surfaces. This technique can operate as a crude substitute for vision, allowing human echolocators to perceive silent, distal objects. Here, we tested if echolocation would, like vision, show size constancy. To investigate this, we asked a blind expert echolocator (EE) to echolocate objects of different physical sizes presented at different distances. The EE consistently identified the true physical size of the objects independent of distance. In contrast, blind and blindfolded sighted controls did not show size constancy, even when encouraged to use mouth clicks, claps, or other signals. These findings suggest that size constancy is not a purely visual phenomenon, but that it can operate via an auditory-based substitute for vision, such as human echolocation.

  17. A Blueprint of an International Lunar Robotic Village

    NASA Technical Reports Server (NTRS)

    Alkalai, Leon

    2012-01-01

    Human civilization is destined to look for, find, and develop a second habitable destination in our Solar System besides Earth: the Moon and Mars are the two most likely and credible places based on proximity, available local resources, and economics. Recent international missions have brought back valuable information on both the Moon and Mars. The vision is: a permanent presence on the Moon using advanced robotic systems as precursors to the future human settlement of the Moon is possible in the near term. An international effort should be initiated to create a permanent robotic village to demonstrate and validate advanced technologies and systems across international boundaries, conduct broad science, explore new regions of the Moon and Mars, develop infrastructure, human habitats, and shelters, facilitate development of commerce, and stimulate public involvement and education.

  18. Using "Competing Visions of Human Rights" in an International IB World School

    ERIC Educational Resources Information Center

    Tolley, William J.

    2013-01-01

    William Tolley, a teaching fellow with the Choices Program, is the Learning and Innovation Coach and head of history at the International School of Curitiba, Brazil (IB). He writes in this article that he has found that the "Competing Visions of Human Rights" teaching unit, developed by Brown University's Choices Program, provides a…

  19. The Common Vision: Parenting and Educating for Wholeness.

    ERIC Educational Resources Information Center

    Marshak, David

    2003-01-01

    Presents a spiritually based view of needs and potentials of children and youth, from birth through age 21, based on works of Rudolf Steiner, Sri Aurobindo Ghose, and Hazrat Inayat Khan. Focuses on their common vision of the true nature of human beings, the course of human growth, and the desired functions of child rearing and education.…

  20. 'What' and 'where' in the human brain.

    PubMed

    Ungerleider, L G; Haxby, J V

    1994-04-01

    Multiple visual areas in the cortex of nonhuman primates are organized into two hierarchically organized and functionally specialized processing pathways, a 'ventral stream' for object vision and a 'dorsal stream' for spatial vision. Recent findings from positron emission tomography activation studies have localized these pathways within the human brain, yielding insights into cortical hierarchies, specialization of function, and attentional mechanisms.

  1. Development of a volumetric projection technique for the digital evaluation of field of view.

    PubMed

    Marshall, Russell; Summerskill, Stephen; Cook, Sharon

    2013-01-01

    Current regulations for field of view requirements in road vehicles are defined by 2D areas projected on the ground plane. This paper discusses the development of a new software-based volumetric field of view projection tool and its implementation within an existing digital human modelling system. In addition, the exploitation of this new tool is highlighted through its use in a UK Department for Transport funded research project exploring the current concerns with driver vision. Focusing specifically on rearwards visibility in small and medium passenger vehicles, the volumetric approach is shown to provide a number of distinct advantages. The ability to explore multiple projections of both direct vision (through windows) and indirect vision (through mirrors) provides a greater understanding of the field of view environment afforded to the driver whilst still maintaining compatibility with the 2D projections of the regulatory standards. Field of view requirements for drivers of road vehicles are defined by simplified 2D areas projected onto the ground plane. However, driver vision is a complex 3D problem. This paper presents the development of a new software-based 3D volumetric projection technique and its implementation in the evaluation of driver vision in small- and medium-sized passenger vehicles.
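A volumetric field-of-view test asks whether a 3-D point lies inside a viewing volume rather than inside a 2-D ground-plane area. A minimal sketch using a cone-shaped volume (the actual tool projects full window and mirror apertures; the cone and all values below are simplifying assumptions):

```python
import math

def in_fov(point, eye, forward, half_angle_deg):
    """True if `point` lies inside a cone-shaped field of view.

    Cone apex at `eye`, unit axis vector `forward`, half-angle in degrees.
    A volumetric check like this replaces a 2-D ground-plane projection.
    """
    v = [p - e for p, e in zip(point, eye)]
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0:
        return True  # the eye point itself is trivially visible
    cos_angle = sum(a * b for a, b in zip(v, forward)) / norm
    return cos_angle >= math.cos(math.radians(half_angle_deg))

# Driver at the origin looking along +x with a 30-degree half-angle cone.
print(in_fov((5, 1, 0), (0, 0, 0), (1, 0, 0), 30))  # -> True
print(in_fov((1, 5, 0), (0, 0, 0), (1, 0, 0), 30))  # -> False
```

Intersecting several such direct and mirror volumes is what gives the 3-D analysis its advantage over a single projected ground-plane area.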

  2. Going Below Minimums: The Efficacy of Display Enhanced/Synthetic Vision Fusion for Go-Around Decisions during Non-Normal Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.

    2007-01-01

    The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.

  3. Virtual Reality System Offers a Wide Perspective

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Robot Systems Technology Branch engineers at Johnson Space Center created the remotely controlled Robonaut for use as an additional "set of hands" in extravehicular activities (EVAs) and to allow exploration of environments that would be too dangerous or difficult for humans. One of the problems Robonaut developers encountered was that the robot's interface offered an extremely limited field of vision. Johnson robotics engineer Darby Magruder explained that the 40-degree field-of-view (FOV) in initial robotic prototypes provided very narrow tunnel vision, which posed difficulties for Robonaut operators trying to see the robot's surroundings. Because of the narrow FOV, NASA decided to reach out to the private sector for assistance. In addition to a wider FOV, NASA also desired higher resolution in a head-mounted display (HMD) with the added ability to capture and display video.

  4. Novel Propulsion and Power Concepts for 21st Century Aviation

    NASA Technical Reports Server (NTRS)

    Sehra, Arun K.

    2003-01-01

    The air transportation system for the new millennium will require revolutionary solutions to meet public demand for improving safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st Century Aircraft is to develop propulsion systems that are intelligent, virtually inaudible (outside the airport boundaries), and have near zero harmful emissions (CO2 and NO(x)). This vision includes intelligent engines that will be capable of adapting to changing internal and external conditions to optimally accomplish the mission with minimal human intervention. The distributed vectored propulsion will replace two to four wing-mounted or fuselage-mounted engines with a large number of small, mini, or micro engines. And the electric drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. Such a system will completely eliminate the harmful emissions.

  5. Classification of Normal and Pathological Gait in Young Children Based on Foot Pressure Data.

    PubMed

    Guo, Guodong; Guffey, Keegan; Chen, Wenbin; Pergami, Paola

    2017-01-01

    Human gait recognition, an active research topic in computer vision, is generally based on data obtained from images/videos. We applied computer vision technology to classify pathology-related changes in gait in young children using a foot-pressure database collected using the GAITRite walkway system. As foot positioning changes with children's development, we also investigated the possibility of age estimation based on this data. Our results demonstrate that the data collected by the GAITRite system can be used for normal/pathological gait classification. Combining age information and normal/pathological gait classification increases the accuracy of the classifier. This novel approach could support the development of an accurate, real-time, and economic measure of gait abnormalities in children, able to provide important feedback to clinicians regarding the effect of rehabilitation interventions, and to support targeted treatment modifications.
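The classification approach can be sketched with a nearest-centroid classifier in which age is simply appended as an extra feature. The feature names and values below are hypothetical; the study's actual features come from GAITRite pressure maps, and its classifier is not specified in the abstract.

```python
def nearest_centroid(train, labels, sample):
    """Classify by Euclidean distance to per-class mean feature vectors.

    Feature vectors here are hypothetical [peak pressure, stride length, age].
    """
    classes = {}
    for vec, lab in zip(train, labels):
        classes.setdefault(lab, []).append(vec)

    def centroid(vecs):
        return [sum(col) / len(vecs) for col in zip(*vecs)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    cents = {lab: centroid(vecs) for lab, vecs in classes.items()}
    return min(cents, key=lambda lab: dist(cents[lab], sample))

train = [[0.9, 0.5, 4], [1.0, 0.6, 5],   # "normal" gaits
         [0.4, 0.2, 4], [0.5, 0.3, 5]]   # "pathological" gaits
labels = ["normal", "normal", "pathological", "pathological"]
print(nearest_centroid(train, labels, [0.95, 0.55, 4]))  # -> normal
```

Adding age as a feature is one simple way to realize the abstract's point that combining age information with the gait features improves accuracy.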

  6. Dual-Use Space Technology Transfer Conference and Exhibition. Volume 1

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Compiler)

    1994-01-01

    This document contains papers presented at the Dual-Use Space Technology Transfer Conference and Exhibition held at the Johnson Space Center February 1-3, 1994. Possible technology transfers covered during the conference were in the areas of information access; innovative microwave and optical applications; materials and structures; marketing and barriers; intelligent systems; human factors and habitation; communications and data systems; business process and technology transfer; software engineering; biotechnology and advanced bioinstrumentation; communications signal processing and analysis; new ways of doing business; medical care; applications derived from control center data systems; human performance evaluation; technology transfer methods; mathematics, modeling, and simulation; propulsion; software analysis and decision tools; systems/processes in human support technology; networks, control centers, and distributed systems; power; rapid development; perception and vision technologies; integrated vehicle health management; automation technologies; advanced avionics; and robotics technologies. More than 77 papers, 20 presentations, and 20 exhibits covering various disciplines were presented by experts from NASA, universities, and industry.

  7. VLSI chips for vision-based vehicle guidance

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1994-02-01

    Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.

  8. A multi-stage color model revisited: implications for a gene therapy cure for red-green colorblindness.

    PubMed

    Mancuso, Katherine; Mauck, Matthew C; Kuchenbecker, James A; Neitz, Maureen; Neitz, Jay

    2010-01-01

    In 1993, DeValois and DeValois proposed a 'multi-stage color model' to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson's Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined in a later opponent organization, which has been the accepted dogma in color vision. DeValois' model attempts to satisfy the long-remaining question of how the visual system separates luminance information from color, but what are the cellular mechanisms that establish the complicated neural wiring and higher-order operations required by the Multi-stage Model? During the last decade and a half, results from molecular biology have shed new light on the evolution of primate color vision, thus constraining the possibilities for the visual circuits. The evolutionary constraints allow for an extension of DeValois' model that is more explicit about the biology of color vision circuitry, and it predicts that human red-green colorblindness can be cured using a retinal gene therapy approach to add the missing photopigment, without any additional changes to the post-synaptic circuitry.
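The two-stage (Hurvich-Jameson) portion of the model is easy to state concretely: cone responses are recombined into opponent and luminance channels. The weights below are illustrative textbook choices, not the specific wiring of the DeValois multi-stage model.

```python
def opponent_stage(L, M, S):
    """Second-stage opponent signals from cone responses (illustrative weights).

    Red-green ~ L - M; blue-yellow ~ S - (L + M)/2; luminance ~ L + M.
    """
    return {"RG": L - M, "BY": S - (L + M) / 2, "LUM": L + M}

# Toy cone responses: L slightly above M, weak S.
out = opponent_stage(0.6, 0.4, 0.2)
print(out)
```

The multi-stage extension asks how cortex derives such channels and separates luminance from color; a gene-therapy cure for red-green colorblindness only needs the retina to supply a usable L-M difference signal to this machinery.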

  9. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

    Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
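The parameter-network idea can be sketched as a discretized hypothesis grid in which each unit's activity counts the 3-D points consistent with its plane z = h + m_x*x + m_y*y. This is a vote-accumulation stand-in for the network's gradient-descent relaxation, and the grid values and points below are toy assumptions.

```python
import itertools

def parameter_network(points, h_vals, mx_vals, my_vals, tol=0.05):
    """Hough-style parameter network for planes z = h + mx*x + my*y.

    Each unit is one (h, mx, my) cell; its activity counts the 3-D points
    consistent with that plane hypothesis within tolerance `tol`.
    """
    activity = {}
    for h, mx, my in itertools.product(h_vals, mx_vals, my_vals):
        activity[(h, mx, my)] = sum(
            1 for x, y, z in points if abs(h + mx * x + my * y - z) < tol)
    # The most active unit is the most confident surface hypothesis.
    return max(activity, key=activity.get)

# Points from the plane z = 1 + 0.5*x (no y slope), plus one outlier.
pts = [(0, 0, 1.0), (1, 0, 1.5), (2, 0, 2.0), (0, 1, 1.0), (3, 3, 9.0)]
grid = [0.0, 0.5, 1.0]
best = parameter_network(pts, grid, grid, grid)
print(best)  # -> (1.0, 0.5, 0.0)
```

Treating activity as confidence, as here, is the same interpretation the abstract gives to its processing units; the network version would refine these votes through weighted interactions rather than exhaustive counting.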

  10. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  11. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
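The second fusion stage matches image regions pixel-by-pixel using dynamic programming. A generic toy version of DP scanline matching (not the authors' algorithm) aligns two 1-D intensity profiles under an occlusion penalty:

```python
def dp_scanline_match(left, right, occ=0.6):
    """Dynamic-programming alignment of two 1-D scanlines (toy stereo step).

    Matching pixel i with pixel j costs |left[i] - right[j]|; skipping a
    pixel (occlusion) costs `occ`. Returns matched (i, j) index pairs;
    the disparity of a pair is i - j.
    """
    n, m = len(left), len(right)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    back = {}
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j and cost[i-1][j-1] + abs(left[i-1] - right[j-1]) < cost[i][j]:
                cost[i][j] = cost[i-1][j-1] + abs(left[i-1] - right[j-1])
                back[i, j] = "M"   # match
            if i and cost[i-1][j] + occ < cost[i][j]:
                cost[i][j] = cost[i-1][j] + occ
                back[i, j] = "L"   # left pixel occluded
            if j and cost[i][j-1] + occ < cost[i][j]:
                cost[i][j] = cost[i][j-1] + occ
                back[i, j] = "R"   # right pixel occluded
    pairs, i, j = [], n, m
    while i or j:  # backtrack the optimal path
        step = back[i, j]
        if step == "M":
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif step == "L":
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# The right scanline is the left one shifted by one pixel (disparity 1).
left = [0.1, 0.9, 0.9, 0.1, 0.1]
right = [0.9, 0.9, 0.1, 0.1, 0.9]
matches = dp_scanline_match(left, right)
print(matches)  # -> [(1, 0), (2, 1), (3, 2), (4, 3)]
```

In the paper's setting, the laser-pattern correspondences from the first fusion stage would constrain which cells of a table like this are reachable, which is what makes the bidirectional fusion cooperative.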

  12. Sharpening vision by adapting to flicker

    PubMed Central

    Arnold, Derek H.; Williams, Jeremy D.; Phipps, Natasha E.; Goodale, Melvyn A.

    2016-01-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously, spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision (allowing people to localize inputs with greater precision and to read finer-scaled text), and selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because coarse "blur" signals are reduced. PMID:27791115

  13. Exploration Architecture Options - ECLSS, EVA, TCS Implications

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Henninger, Don; Lawrence, Carl

    2010-01-01

    Many options for exploration of space have been identified and evaluated since the Vision for Space Exploration (VSE) was announced in 2004. Lunar architectures have been identified and addressed in the Lunar Surface Systems team to establish options for how to get to and then inhabit and explore the moon. The Augustine Commission evaluated human space flight for the Obama administration and identified many options for how to conduct human spaceflight in the future. This paper will evaluate the options for exploration of space for the implications of architectures on the Environmental Control and Life Support (ECLSS), ExtraVehicular Activity (EVA) and Thermal Control System (TCS) Systems. The advantages and disadvantages of each architecture and options are presented.

  14. Program Review Report on the College of Dentistry, University of Florida.

    ERIC Educational Resources Information Center

    Christiansen, Richard

    The programs offered by the College of Dentistry at the University of Florida (UF) were reviewed by an outside consultant in order to provide information on the State University System's vision of the college and its mission for Florida, the support base for the program, and current directions and anticipated fiscal and human forces that help…

  15. Visions Lost and Dreams Forgotten: Environmental Education, Systems Thinking, and Possible Futures in American Public Schools

    ERIC Educational Resources Information Center

    Cassell, John A.; Nelson, Thomas

    2010-01-01

    This article discusses the contributions to this special issue of "Teacher Education Quarterly." At the core of all the contributions is the compellingly urgent realization that humanity is facing, and must deal with, enormous ecological and social problems and challenges. This situation has created an urgent and compelling need centered…

  16. Modelling Subjectivity in Visual Perception of Orientation for Image Retrieval.

    ERIC Educational Resources Information Center

    Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.

    2003-01-01

    Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…

  17. Human microbiome science: vision for the future, Bethesda, MD, July 24 to 26, 2013

    PubMed Central

    2014-01-01

    A conference entitled ‘Human microbiome science: Vision for the future’ was organized in Bethesda, MD from July 24 to 26, 2013. The event brought together experts in the field of human microbiome research and aimed at providing a comprehensive overview of the state of microbiome research and, more importantly, at identifying and discussing gaps, challenges and opportunities in this nascent field. This report summarizes the presentations but also describes what is needed for human microbiome research to move forward and deliver medical translational applications.

  18. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  19. Expedient range enhanced 3-D robot colour vision

    NASA Astrophysics Data System (ADS)

    Jarvis, R. A.

    1983-01-01

    Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than providing humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher-level structure, eye-in-hand research, and aspects of array and video stream processing.
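
    Among the techniques listed, the hue transform is the most self-contained: hue is largely invariant to the overall strength of illumination, which is what makes it useful for segmenting colored objects. A minimal sketch using Python's standard colorsys module (an illustration of the general idea, not Jarvis's implementation):

```python
import colorsys

def hue_of(r, g, b):
    """Hue of an RGB triple (components in [0, 1]).

    Hue is unchanged when the whole triple is scaled by a constant,
    so it tolerates changes in overall illumination, unlike raw RGB."""
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h

# The same surface under half the illumination keeps its hue.
h_bright = hue_of(0.8, 0.4, 0.2)
h_dim = hue_of(0.4, 0.2, 0.1)
```

    Segmenting on hue rather than raw RGB is one way such a system can group pixels belonging to one colored object despite shading differences across its surface.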

  20. Humans and machines in space: The vision, the challenge, the payoff; AAS Goddard Memorial Symposium, 29th, Washington, DC, March 14-15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    A recent symposium produced papers in the areas of solar system exploration, man-machine interfaces, cybernetics, virtual reality, telerobotics, life support systems, and the scientific and technology spinoff from the NASA space program. A number of papers also addressed the social and economic impacts of the space program. For individual titles, see A95-87468 through A95-87479.

  1. Theoretical aspects of color vision

    NASA Technical Reports Server (NTRS)

    Wolbarsht, M. L.

    1972-01-01

    The three color receptors of Young-Helmholtz and the opponent colors type of information processing postulated by Hering are both present in the human visual system. This mixture accounts for both the phenomena of color matching or hue discrimination and such perceptual qualities of color as the division of the spectrum into color bands. The functioning of the cells in the visual system, especially within the retina, and the relation of this function to color perception are discussed.
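
    The two-stage account summarized above (Young-Helmholtz trichromatic receptors feeding Hering-style opponent channels) is often approximated computationally by a linear transform of receptor responses. A minimal sketch with illustrative weights (the specific coefficients are assumptions for illustration, not taken from this record):

```python
def to_opponent(r, g, b):
    """Convert trichromatic receptor responses (r, g, b in [0, 1])
    into three opponent channels: an achromatic luminance channel,
    a red-green channel, and a blue-yellow channel."""
    luminance = (r + g + b) / 3.0      # achromatic signal
    red_green = r - g                  # positive = red, negative = green
    blue_yellow = b - (r + g) / 2.0    # positive = blue, negative = yellow
    return luminance, red_green, blue_yellow

# A pure red stimulus drives red-green positively and blue-yellow negatively.
lum, rg, by = to_opponent(1.0, 0.0, 0.0)
```

    The signed channel outputs mirror the opponent behavior described above: a stimulus cannot appear both red and green, or both blue and yellow, at once.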

  2. NASA Project Constellation Systems Engineering Approach

    NASA Technical Reports Server (NTRS)

    Dumbacher, Daniel L.

    2005-01-01

    NASA's Office of Exploration Systems (OExS) is organized to empower the Vision for Space Exploration with transportation systems that result in achievable, affordable, and sustainable human and robotic journeys to the Moon, Mars, and beyond. In the process of delivering these capabilities, the systems engineering function is key to implementing policies, managing mission requirements, and ensuring technical integration and verification of hardware and support systems in a timely, cost-effective manner. The OExS Development Programs Division includes three main areas: (1) human and robotic technology, (2) Project Prometheus for nuclear propulsion development, and (3) Constellation Systems for space transportation systems development, including a Crew Exploration Vehicle (CEV). Constellation Systems include Earth-to-orbit, in-space, and surface transportation systems; maintenance and science instrumentation; and robotic investigators and assistants. In parallel with development of the CEV, robotic explorers will serve as trailblazers to reduce the risk and costs of future human operations on the Moon, as well as missions to other destinations, including Mars. Additional information is included in the original extended abstract.

  3. A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033

    NASA Technical Reports Server (NTRS)

    Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.

    2012-01-01

    The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas listed into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033, with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.

  4. Human Systems Integration (HSI) Case Studies from the NASA Constellation Program

    NASA Technical Reports Server (NTRS)

    Baggerman, Susan; Berdich, Debbie; Whitmore, Mihriban

    2009-01-01

    The National Aeronautics and Space Administration (NASA) Constellation Program is responsible for planning and implementing those programs necessary to send human explorers back to the moon, onward to Mars and other destinations in the solar system, and to support missions to the International Space Station. The Constellation Program has the technical management responsibility for all Constellation Projects, including both human-rated and non-human-rated vehicles such as the Crew Exploration Vehicle, EVA Systems, the Lunar Lander, Lunar Surface Systems, and the Ares I and Ares V rockets. With NASA's new Vision for Space Exploration to send humans beyond Earth orbit, it is critical to consider the human as a system that demands early and continuous user involvement, inclusion in trade-offs and analyses, and an iterative "prototype/test/redesign" process. Personnel at the NASA Johnson Space Center are involved in the Constellation Program at both the Program and Project levels as human system integrators. They ensure that the human is considered as a system, equal to hardware and software vehicle systems. Systems to deliver and support extended human habitation on the moon are extremely complex and unique, presenting new opportunities to employ Human Systems Integration (HSI) practices in the Constellation Program. The purpose of the paper is to show examples of where human systems integration work is successfully employed in the Constellation Program and related Projects, such as in the areas of habitation and early requirements and design concepts.

  5. Review of technological advancements in calibration systems for laser vision correction

    NASA Astrophysics Data System (ADS)

    Arba-Mosquera, Samuel; Vinciguerra, Paolo; Verma, Shwetabh

    2018-02-01

    Using PubMed and our internal database, we extensively reviewed the literature on the technological advancements in calibration systems, with a motive to present an account of the development history, and latest developments in calibration systems used in refractive surgery laser systems. As a second motive, we explored the clinical impact of the error introduced due to the roughness in ablation and its corresponding effect on system calibration. The inclusion criterion for this review was strict relevance to the clinical questions under research. The existing calibration methods, including various plastic models, are highly affected by various factors involved in refractive surgery, such as temperature, airflow, and hydration. Surface roughness plays an important role in accurate measurement of ablation performance on calibration materials. The ratio of ablation efficiency between the human cornea and calibration material is very critical and highly dependent on the laser beam characteristics and test conditions. Objective evaluation of the calibration data and corresponding adjustment of the laser systems at regular intervals are essential for the continuing success and further improvements in outcomes of laser vision correction procedures.

  6. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  7. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
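
    The record describes a spiking neural network whose details are not reproduced here; the underlying stereo-correspondence cue it exploits (near-coincident left/right events at a consistent disparity) can be illustrated with a deliberately simplified, non-spiking sketch. All names and parameters below are illustrative assumptions, not the authors' model:

```python
def coincidence_disparity(left_events, right_events, max_disparity, window_s):
    """Estimate a dominant disparity from two event streams.

    Events are (x, y, t) tuples. Left/right events on the same row whose
    timestamps agree to within window_s vote for the disparity they imply;
    the most-voted disparity wins. This is a plain caricature of the
    coincidence principle, not the record's neuromorphic network.
    """
    votes = {}
    for lx, ly, lt in left_events:
        for rx, ry, rt in right_events:
            if ly == ry and abs(lt - rt) <= window_s:
                d = lx - rx
                if 0 <= d <= max_disparity:
                    votes[d] = votes.get(d, 0) + 1
    return max(votes, key=votes.get) if votes else None

# Two left/right event pairs, both shifted by 4 pixels horizontally.
left = [(12, 5, 0.0010), (30, 7, 0.0040)]
right = [(8, 5, 0.0012), (26, 7, 0.0041)]
d = coincidence_disparity(left, right, max_disparity=10, window_s=0.001)
```

    Temporal coincidence is what makes event-based sensors attractive for this problem: matching candidates are pruned by time as well as by geometry.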

  8. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  9. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  10. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  11. Visions for Space Exploration: ILS Issues and Approaches

    NASA Technical Reports Server (NTRS)

    Watson, Kevin

    2005-01-01

    This viewgraph presentation reviews some of the logistics issues that the Vision for Space Exploration will entail. There is a review of the vision and the timeline for the return to the moon that will lead to the first human exploration of Mars. The lessons learned from the International Space Station (ISS) and other such missions are also reviewed.

  12. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Compared with the conventional binocular system, the linear array CCD binocular vision imaging system has a wider field of view and can accurately reconstruct the 3-D morphology of objects in continuous motion. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including calibration, capture, matching, and reconstruction. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external camera parameters are obtained by calibration in advance; the cameras then capture images of moving objects, which are matched and reconstructed in 3-D. Because the linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, this work is of significance for measuring the 3-D morphology of moving objects.
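
    The abstract does not give the reconstruction equations; for the standard rectified two-camera geometry (an assumption here, since the paper's linear-array arrangement and calibration differ), depth follows from disparity by triangulation, Z = f * B / d:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation for rectified cameras: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    two camera centers in meters; disparity_px: horizontal shift of the
    same point between the two images, in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 0.10 m baseline, 50 px disparity -> 2.0 m depth.
z = depth_from_disparity(1000.0, 0.10, 50.0)
```

    The relation also shows why calibration matters: errors in the estimated focal length or baseline propagate directly into the reconstructed depth.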

  13. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  14. New experimental diffractive-optical data on E. Land's Retinex mechanism in human color vision: Part II

    NASA Astrophysics Data System (ADS)

    Lauinger, N.

    2007-09-01

    A better understanding of the color constancy mechanism in human color vision [7] can be reached through analyses of photometric data of all illuminants and patches (Mondrians or other visible objects) involved in visual experiments. In Part I [3] and in [4, 5 and 6] the integration in the human eye of the geometrical-optical imaging hardware and the diffractive-optical hardware has been described and illustrated (Fig.1). This combined hardware represents the main topic of the NAMIROS research project (nano- and micro- 3D gratings for optical sensors) [8] promoted and coordinated by Corrsys 3D Sensors AG. The hardware relevant to (photopic) human color vision can be described as a diffractive or interference-optical correlator transforming incident light into diffractive-optical RGB data and relating local RGB onto global RGB data in the near-field behind the 'inverted' human retina. The relative differences at local/global RGB interference-optical contrasts are available to photoreceptors (cones and rods) only after this optical pre-processing.

  15. Paintings, photographs, and computer graphics are calculated appearances

    NASA Astrophysics Data System (ADS)

    McCann, John

    2012-03-01

    Painters reproduce the appearances they see, or visualize. The entire human visual system is the first part of that process, providing extensive spatial processing. Painters have used spatial techniques since the Renaissance to render HDR scenes. Silver halide photography responds to the light falling on single film pixels. Film can only mimic the retinal response of the cones at the start of the visual process. Film cannot mimic the spatial processing in humans. Digital image processing can. This talk studies three dramatic visual illusions and uses the spatial mechanisms found in human vision to interpret their appearances.

  16. An operator interface design for a telerobotic inspection system

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Tso, Kam S.; Hayati, Samad

    1993-01-01

    The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotic systems. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability, supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  17. Texton Theory Of Two-Dimensional And Three-Dimensional Vision

    NASA Astrophysics Data System (ADS)

    Julesz, Bela

    1983-04-01

    Recently, after two decades of research in visual texture discrimination and in the discrimination of briefly presented patterns, a new structure of human vision has emerged. Accordingly, human vision is mediated by two separate visual systems: a preattentive and an attentive one. The parallel preattentive visual system cannot process complex forms, yet can, almost instantaneously, without effort or scrutiny, detect differences in a few local conspicuous features, regardless of the number of elements or where they occur. These "textons" (Julesz, 1981) are elongated blobs (e.g. rectangles, ellipses, or line segments) of specific color, angular orientation, width, length, binocular and movement disparity, and flicker rate. For line segments, their terminators (ends-of-lines) and crossings often behave as textons. The preattentive system can count only the number of textons and detect their locale, but cannot perceive the positional relations between them (Julesz, 1981, 1981a). However, the preattentive system can direct focal attention to the areas where differences in texton density occur in about 50 msec steps (four times faster than eye-movements). Only in this narrow aperture of focal attention (which can be as small as a few minutes of arc) can positional differences be perceived, permitting the prodigious feats of form recognition (Bergen and Julesz, 1981). Since disparity differences in random-dot stereograms can be perceived in a 50 msec presentation (followed by masking) provided the binocular disparity is beyond a critical limit, disparity is also a texton. It can be shown (for review, see Julesz and Schumer, 1981) that the largest disparity that can yield stereopsis in random-dot stereograms is a monotonic function of the cyclopean target area. Furthermore, the cyclopean modulation function (for corrugated sinusoidal depth gratings) is very "myopic" with a cut-off around 3 c/deg. Since stereopsis is important in quality inspection, an objective determination of stereoscopic abilities using evoked potential techniques to dynamic random-dot correlograms and stereograms is also discussed (Julesz, Kropfl and Petrig, 1980; Julesz and Kropfl, 1982). These techniques also permit the early diagnosis of stereo-blindness that afflicts 2% of the human population and might lead to its prevention.

  18. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, H; Jenkins, C; Yu, S

    Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1 mm (1 pixel) precision. The program to correct for image alignment and determine leaf positions requires a runtime of 21–25 seconds for a single picket, and 44–46 seconds for a group of three pickets on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
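
    The record locates each leaf edge by fitting a logistic function to the intensity profile across the edge. A simplified stand-in (half-maximum crossing with linear sub-pixel interpolation, rather than a full logistic fit; the 0.1 mm pixel size is taken from the record's stated precision) illustrates the localization step:

```python
def leaf_edge_position(profile, pixel_mm=0.1):
    """Locate a field edge in a 1-D intensity profile.

    Finds where intensity first rises through half its maximum and
    interpolates linearly between the neighboring pixels for sub-pixel
    precision. Returns the edge position in millimeters."""
    half = max(profile) / 2.0
    for i in range(1, len(profile)):
        lo, hi = profile[i - 1], profile[i]
        if lo < half <= hi:
            frac = (half - lo) / (hi - lo)
            return (i - 1 + frac) * pixel_mm
    raise ValueError("no rising edge found in profile")

# Synthetic rising edge: half-maximum (50) lies midway between pixels 2 and 3.
edge_mm = leaf_edge_position([0, 10, 40, 60, 90, 100])
```

    Comparing each measured edge position against the planned picket position then gives the pass/fail decision against the TG-142 leaf position tolerance.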

  19. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    PubMed Central

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  20. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary and fixed wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  1. Imaginable Technologies for Human Missions to Mars

    NASA Technical Reports Server (NTRS)

    Bushnell, Dennis M.

    2007-01-01

    The thesis of the present discussion is that the simultaneous cost and inherent safety issues of human on-site exploration of Mars will require advanced-to-revolutionary technologies. The major crew safety issues as currently identified include reduced gravity, radiation, potentially extremely toxic dust, and the requisite reliability for years-long missions. Additionally, this discussion examines various technological areas which could significantly impact Human-Mars cost and safety. Cost reduction for space access is a major metric, including approaches to significantly reduce the overall up-mass. Besides fuel, propulsion and power systems, the up-mass consists of the infrastructure and supplies required to keep humans healthy and the equipment for executing exploration mission tasks. Hence, the major technological areas of interest for potential cost reductions include propulsion, in-space and on-planet power, life support systems, materials, and overall architecture, systems, and systems-of-systems approaches. This discussion is specifically offered in response to and as a contribution to goal 3 of the Presidential Exploration Vision: "Develop the Innovative Technologies Knowledge and Infrastructures both to explore and to support decisions about the destinations for human exploration".

  2. Evaluation of 5 different labeled polymer immunohistochemical detection systems.

    PubMed

    Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A

    2010-01-01

    Immunohistochemical staining is important for diagnosis and therapeutic decision making, but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA), were tested using 12 different, widely used mouse and rabbit primary antibodies, detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde-fixed, paraffin-embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies, but showed low sensitivity for rabbit antibodies. Due to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained false negatively. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time in the immunohistochemical detection systems currently in use.

  3. Real Time Target Tracking Using Dedicated Vision Hardware

    NASA Astrophysics Data System (ADS)

    Kambies, Keith; Walsh, Peter

    1988-03-01

    This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general-purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated-arm industrial robot using a camera and dedicated vision processor as the input sensor, so that the robot can locate and track a moving target. Because the vision system is inside the loop closure of the robot tracking system, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high-speed vision-based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.
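    The control structure described here, with the camera inside the robot's position loop, reduces at each frame to turning the tracked target's pixel offset into a motion command. A minimal sketch of one proportional correction step follows; the gain, names, and units are illustrative, since the RADL control law is not given in the abstract:

    ```python
    def servo_step(target_px, center_px, gain=0.1):
        """One image-based correction step: convert the target's pixel offset
        from the image center into a proportional motion command.

        Illustrative only; the actual RADL loop ran on dedicated VME vision
        hardware with concurrent processing.
        """
        dx = target_px[0] - center_px[0]
        dy = target_px[1] - center_px[1]
        return (gain * dx, gain * dy)  # command in image-plane units
    ```

    Because the vision sensor closes the loop, each call to a function like this must complete within the frame period, which is what motivates the paper's emphasis on throughput and latency.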

  4. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active: active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neurovision system is far less sophisticated than that of the biological visual pathway, it does retain some essential features, such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neurovision system is constructed from a hierarchy of several interactive computational levels, where each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
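    The converging multilayered structure can be illustrated with a simple image pyramid, in which each level halves the resolution of the one below it by local averaging. This sketch is only loosely analogous to the neurovision architecture described above; the levels here are plain averages, not the nonlinear parallel processors of the paper:

    ```python
    import numpy as np

    def build_pyramid(image, levels=4):
        """Build a converging multi-level representation: each level halves
        the resolution of the previous one by 2x2 block averaging."""
        pyramid = [np.asarray(image, dtype=float)]
        for _ in range(levels - 1):
            top = pyramid[-1]
            h, w = (top.shape[0] // 2) * 2, (top.shape[1] // 2) * 2
            top = top[:h, :w]  # crop to even dimensions before pooling
            pyramid.append((top[0::2, 0::2] + top[1::2, 0::2] +
                            top[0::2, 1::2] + top[1::2, 1::2]) / 4.0)
        return pyramid
    ```

    Coarse levels can then be scanned cheaply to select regions of interest, with finer levels processed only where needed, which is the efficiency argument behind pyramidal architectures.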

  5. A real time study of the human equilibrium using an instrumented insole with 3 pressure sensors.

    PubMed

    Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc

    2014-01-01

    The present work deals with the study of human equilibrium using an ambulatory e-health system. One point on which we focus is fall risk when equilibrium control is lost. A specific postural learning model is presented, and an ambulatory instrumented insole is developed using 3 pressure sensors per foot, in order to determine in real time the displacement and velocity of the centre of pressure (CoP). An increase in these parameters signals a loss of physiological sensation, usually of vision or of the inner ear. The results are compared to those obtained from classical, more complex systems.
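    With only 3 pressure sensors per foot, the CoP reduces to a pressure-weighted centroid of the sensor positions, and its velocity to finite differences between samples. A minimal sketch, assuming hypothetical sensor coordinates (the paper's actual insole geometry is not given in the abstract):

    ```python
    import numpy as np

    # Hypothetical sensor positions (in metres) under one foot:
    # heel, first metatarsal head, fifth metatarsal head.
    SENSOR_XY = np.array([[0.00,  0.00],
                          [0.18,  0.03],
                          [0.15, -0.04]])

    def center_of_pressure(pressures):
        """CoP as the pressure-weighted centroid of the sensor positions."""
        p = np.asarray(pressures, dtype=float)
        return (p[:, None] * SENSOR_XY).sum(axis=0) / p.sum()

    def cop_velocity(cop_samples, dt):
        """CoP velocity from successive samples via finite differences."""
        return np.diff(cop_samples, axis=0) / dt
    ```

    Monitoring the magnitude of the displacement and velocity over time is then enough to flag the increases that the abstract associates with a loss of physiological sensation.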

  6. Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)

    NASA Technical Reports Server (NTRS)

    Baroff, Lynn; Dischinger, Charlie; Fitts, David

    2009-01-01

    Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work - but the systems themselves have never been human-rated as all other NASA physical systems used in human space flight have. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems; and the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA's Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs; for surface operations as well as for in-flight monitoring and control; and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.

  7. New technologies lead to a new frontier: cognitive multiple data representation

    NASA Astrophysics Data System (ADS)

    Buffat, S.; Liege, F.; Plantier, J.; Roumes, C.

    2005-05-01

    The increasing number and complexity of operational sensors (radar, infrared, hyperspectral...) and the availability of huge amounts of data lead to more and more sophisticated information presentations. But one key element of the IMINT line cannot be improved beyond initial system specification: the operator. In order to overcome this issue, we have to better understand human visual object representation. Object recognition theories in human vision balance between matching 2D template representations with viewpoint-dependent information, and a viewpoint-invariant system based on structural description. Spatial frequency content is relevant due to early vision filtering. Orientation in depth is an important variable to challenge object constancy. Three objects, seen from three different points of view in a natural environment, made up the original images in this study. Test images were a combination of spatial-frequency-filtered original images and an additive contrast level of white noise. In the first experiment, the observer's task was a same-versus-different forced choice with spatial alternative. Test images had the same noise level in a presentation row. The discrimination threshold was determined by modifying the white noise contrast level by means of an adaptive method. In the second experiment, a repetition blindness paradigm was used to further investigate the viewpoint effect on object recognition. The results shed some light on the human visual system's processing of objects displayed under different physical descriptions. This is an important achievement because targets that do not always match the physical properties of usual visual stimuli can increase operational workload.

  8. Lockheed Martin Response to the OSP Challenge

    NASA Technical Reports Server (NTRS)

    Sullivan, Robert T.; Munkres, Randy; Megna, Thomas D.; Beckham, Joanne

    2003-01-01

    The Lockheed Martin Orbital Space Plane System provides crew transfer and rescue for the International Space Station more safely and affordably than current human space transportation systems. Through planned upgrades and spiral development, it is also capable of satisfying the Nation's evolving space transportation requirements and enabling the national vision for human space flight. The OSP System, formulated through rigorous requirements definition and decomposition, consists of spacecraft and launch vehicle flight elements, ground processing facilities and existing transportation, launch complex, range, mission control, weather, navigation, communication and tracking infrastructure. The concept of operations, including procurement, mission planning, launch preparation, launch and mission operations and vehicle maintenance, repair and turnaround, is structured to maximize flexibility and mission availability and minimize program life cycle cost. The approach to human rating and crew safety utilizes simplicity, performance margin, redundancy, abort modes and escape modes to mitigate credible hazards that cannot be designed out of the system.

  9. Computer vision challenges and technologies for agile manufacturing

    NASA Astrophysics Data System (ADS)

    Molley, Perry A.

    1996-02-01

    Sandia National Laboratories, a Department of Energy laboratory, is responsible for maintaining the safety, security, reliability, and availability of the nuclear weapons stockpile for the United States. Because of the changing national and global political climates and inevitable budget cuts, Sandia is changing the methods and processes it has traditionally used in the product realization cycle for weapon components. Because of the increasing age of the nuclear stockpile, it is certain that the reliability of these weapons will degrade with time unless eventual action is taken to repair, requalify, or renew them. Furthermore, due to the downsizing of the DOE weapons production sites and loss of technical personnel, the new product realization process is being focused on developing and deploying advanced automation technologies in order to maintain the capability for producing new components. The goal of Sandia's technology development program is to create a product realization environment that is cost effective, has improved quality, and reduced cycle time for small lot sizes. The new environment will rely less on the expertise of humans and more on intelligent systems and automation to perform the production processes. The systems will be robust in order to provide maximum flexibility and responsiveness for rapidly changing component or product mixes. An integrated enterprise will allow ready access to and use of information for effective and efficient product and process design. Concurrent engineering methods will allow a speedup of the product realization cycle, reduce costs, and dramatically lessen the dependency on creating and testing physical prototypes. Virtual manufacturing will allow production processes to be designed, integrated, and programmed off-line before a piece of hardware ever moves. The overriding goal is to be able to build a large variety of new weapons parts on short notice. 
Many of these technologies that are being developed are also applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment for automation of processes such as inspection, assembly, welding, material dispensing and other process control tasks. Although there are many academic and commercial solutions that have been developed, none have had widespread adoption considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword. The benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with features used for the recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost effective.

  10. Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle

    PubMed Central

    Leduc, Nicolas; Atallah, Vincent; Escarmant, Patrick; Vinh-Hung, Vincent

    2016-01-01

    Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in-house-made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real-time submillimetric image resolution at 4 m. Using OpenCv libraries, we optically tracked high-contrast markers set on patients' breasts. Validation of spatial and time accuracy was performed on a mechanical phantom and on human breast. Madibreast was able to track motion of markers up to a 5 cm/s speed, at a frame rate of 30 fps, with submillimetric accuracy on mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion up to 4 mm for deep-breathing patterns. This low-cost, computer-vision system for real-time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. PACS number(s): 87.55.km PMID:27685116

  11. Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle.

    PubMed

    Leduc, Nicolas; Atallah, Vincent; Escarmant, Patrick; Vinh-Hung, Vincent

    2016-09-08

    Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in-house-made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real-time submillimetric image resolution at 4 m. Using OpenCv libraries, we optically tracked high-contrast markers set on patients' breasts. Validation of spatial and time accuracy was performed on a mechanical phantom and on human breast. Madibreast was able to track motion of markers up to a 5 cm/s speed, at a frame rate of 30 fps, with submillimetric accuracy on mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion up to 4 mm for deep-breathing patterns. This low-cost, computer-vision system for real-time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. © 2016 The Authors.
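    The authors tracked high-contrast external markers with OpenCV; stripped to its core, the per-frame operation is locating each marker's centroid among bright pixels in a region of interest. A NumPy-only sketch of that step, with an illustrative intensity threshold (the paper's actual OpenCV pipeline is not detailed in the abstract):

    ```python
    import numpy as np

    def marker_centroid(roi, threshold=200):
        """Centroid of bright pixels in a region of interest, assuming one
        high-contrast marker per ROI. Returns (x, y) in pixel coordinates,
        or None if no pixel exceeds the threshold."""
        ys, xs = np.nonzero(roi >= threshold)
        if xs.size == 0:
            return None
        return xs.mean(), ys.mean()
    ```

    Running this on every frame of a 30 fps stream and converting the pixel displacement to millimetres via the optical calibration yields the marker trajectory; sub-pixel averaging over many bright pixels is what makes submillimetric accuracy plausible at this resolution.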

  12. Focal damage to macaque photoreceptors produces persistent visual loss

    PubMed Central

    Strazzeri, Jennifer M.; Hunter, Jennifer J.; Masella, Benjamin D.; Yin, Lu; Fischer, William S.; DiLoreto, David A.; Libby, Richard T.; Williams, David R.; Merigan, William H.

    2014-01-01

    Insertion of light-gated channels into inner retina neurons restores neural light responses, light evoked potentials, visual optomotor responses and visually-guided maze behavior in mice blinded by retinal degeneration. This method of vision restoration bypasses damaged outer retina, providing stimulation directly to retinal ganglion cells in inner retina. The approach is similar to that of electronic visual prostheses, but may offer some advantages, such as avoidance of complex surgery and direct targeting of many thousands of neurons. However, the promise of this technique for restoring human vision remains uncertain because rodent animal models, in which it has been largely developed, are not ideal for evaluating visual perception. On the other hand, psychophysical vision studies in macaque can be used to evaluate different approaches to vision restoration in humans. Furthermore, it has not been possible to test vision restoration in macaques, the optimal model for human-like vision, because there has been no macaque model of outer retina degeneration. In this study, we describe development of a macaque model of photoreceptor degeneration that can in future studies be used to test restoration of perception by visual prostheses. Our results show that perceptual deficits caused by focal light damage are restricted to locations at which photoreceptors are damaged, that optical coherence tomography (OCT) can be used to track such lesions, and that adaptive optics retinal imaging, which we recently used for in vivo recording of ganglion cell function, can be used in future studies to examine these lesions. PMID:24316158

  13. Eye movement-invariant representations in the human visual system.

    PubMed

    Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L

    2017-01-01

    During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
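    The ratio logic described above can be sketched directly: correlate each voxel's time series across repeats within the fixation condition, correlate it between fixation and free viewing, and compare the two. The function and variable names below are hypothetical, and the paper's exact statistic may differ:

    ```python
    import numpy as np

    def invariance_index(fix_rep1, fix_rep2, free):
        """Per-voxel contrast between repeat reliability under fixation and
        fixation-vs-free-viewing similarity. Inputs: arrays of shape
        (time, voxels) recorded while watching the same movie."""
        def corr(a, b):
            a = a - a.mean(axis=0)
            b = b - b.mean(axis=0)
            return (a * b).sum(axis=0) / np.sqrt(
                (a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
        within = corr(fix_rep1, fix_rep2)  # fixation repeat similarity
        across = corr(fix_rep1, free)      # fixation vs free viewing
        # Near 1: activity unchanged by eye movements (invariant).
        # Well below 1: activity disrupted by eye movements (sensitive).
        return across / within
    ```

    Under this sketch, the paper's finding corresponds to indices near 1 in ventral temporal voxels and much lower values in early visual voxels.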

  14. The evolution of the complex sensory and motor systems of the human brain.

    PubMed

    Kaas, Jon H

    2008-03-18

    Inferences about how the complex sensory and motor systems of the human brain evolved are based on the results of comparative studies of brain organization across a range of mammalian species, and evidence from the endocasts of fossil skulls of key extinct species. The endocasts of the skulls of early mammals indicate that they had small brains with little neocortex. Evidence from comparative studies of cortical organization from small-brained mammals of the six major branches of mammalian evolution supports the conclusion that the small neocortex of early mammals was divided into roughly 20-25 cortical areas, including primary and secondary sensory fields. In early primates, vision was the dominant sense, and cortical areas associated with vision in temporal and occipital cortex underwent a significant expansion. Comparative studies indicate that early primates had 10 or more visual areas, and somatosensory areas with expanded representations of the forepaw. Posterior parietal cortex was also expanded, with a caudal half dominated by visual inputs, and a rostral half dominated by somatosensory inputs with outputs to an array of seven or more motor and visuomotor areas of the frontal lobe. Somatosensory areas and posterior parietal cortex became further differentiated in early anthropoid primates. As larger brains evolved in early apes and in our hominin ancestors, the number of cortical areas increased to reach an estimated 200 or so in present day humans, and hemispheric specializations emerged. The large human brain grew primarily by increasing neuron number rather than increasing average neuron size.

  15. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  16. A cognitive approach to vision for a mobile robot

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The inputs from the real camera and the virtual camera are compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.

  17. Proactive learning for artificial cognitive systems

    NASA Astrophysics Data System (ADS)

    Lee, Soo-Young

    2010-04-01

    The Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference, and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning (PL) and Self-Identity (SI). The PL model provides bilateral interactions between the robot and an unknown environment (people, other robots, cyberspace). For situation awareness in an unknown environment, the system is required to receive audiovisual signals and to accumulate knowledge. If its knowledge is not sufficient, the PL model should improve it by itself through the Internet and other sources. For human-oriented decision making, the robot is also required to have self-identity and emotion. Finally, the developed models and systems will be mounted on a robot for the human-robot co-existing society. The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans at several levels of cognitive ability.

  18. Science Opportunities Enabled by NASA's Constellation System: Interim Report

    NASA Technical Reports Server (NTRS)

    2008-01-01

    In 2004 NASA initiated studies of advanced science mission concepts known as the Vision Missions and inspired by a series of NASA roadmap activities conducted in 2003. Also in 2004 NASA began implementation of the first phases of a new space exploration policy, the Vision for Space Exploration. This implementation effort included development of a new human-carrying spacecraft, known as Orion, and two new launch vehicles, the Ares I and Ares V rockets, collectively called the Constellation System. NASA asked the National Research Council (NRC) to evaluate the science opportunities enabled by the Constellation System (see Preface) and to produce an interim report on a short time schedule and a final report by November 2008. The committee notes, however, that the Constellation System and its Orion and Ares vehicles have been justified by NASA and selected in order to enable human exploration beyond low Earth orbit, and not to enable science missions. This interim report of the Committee on Science Opportunities Enabled by NASA's Constellation System evaluates the 11 Vision Mission studies presented to it and groups them into two categories: those more deserving of future study, and those less deserving of future study. Although its statement of task also refers to Earth science missions, the committee points out that the Vision Missions effort was focused on future astronomy, heliophysics, and planetary exploration and did not include any Earth science studies because, at the time, the NRC was conducting the first Earth science decadal survey, and funding Earth science studies as part of the Vision Missions effort would have interfered with that process. Consequently, no Earth science missions are evaluated in this interim report. However, the committee will evaluate any Earth science mission proposal submitted in response to its request for information issued in March 2008 (see Appendix A). 
    The committee based its evaluation of the preexisting Vision Missions studies on two criteria: whether the concepts offered the potential for a significant scientific advance, and whether or not the concepts would benefit from the Constellation System. The committee determined that all of the concepts offered the possibility of a significant scientific advance, but it cautions that such an evaluation ultimately must be made by the decadal survey process, and it emphasizes that this interim report's evaluation should not be considered to be an endorsement of the scientific merit of these proposals, which must of course be evaluated relative to other proposals. The committee determined that seven of these concepts would benefit from the Constellation System, whereas four would not, but it stresses that this conclusion does not reflect an evaluation of the scientific merit of the projects, but rather an assessment of whether or not new capabilities provided by the Constellation System could significantly affect them. Some of the mission concepts, such as the Advanced Compton Telescope, already offer a significant scientific advance and fit easily within the mass and volume constraints of existing launch vehicles. Other mission concepts, such as the Palmer Quest proposal to drill through the Mars polar cap, are not constrained by the launch vehicle, but rather by other technology limitations. The committee evaluated the mission concepts as presented to it, aware nevertheless that proposing a far larger and more ambitious mission with the same science goals might be possible given the capabilities of the Ares V launch vehicle. (Such proposals can be submitted in response to the committee's request for information to be evaluated in its final report.) See Table S.1 for a summary of the Vision Missions, including their cost estimates, technical maturity, and reasons that they might benefit from the Constellation System. 
The committee developed several findings and recommendations.

  19. To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind.

    PubMed

    Grifantini, Kristina

    2017-01-01

    Humans have been using technology to improve their vision for many decades. Eyeglasses, contact lenses, and, more recently, laser-based surgeries are commonly employed to remedy vision problems, both minor and major. But options are far fewer for those who have not seen since birth or who have reached stages of blindness in later life.

  20. Frontiers in Human Information Processing Conference

    DTIC Science & Technology

    2008-02-25

    Frontiers in Human Information Processing - Vision, Attention, Memory, and Applications: A Tribute to George Sperling, a Festschrift. We are grateful...with focus on the formal, computational, and mathematical approaches that unify the areas of vision, attention, and memory. The conference also...Information Processing Conference Final Report, AFOSR Grant # FA9550-07-1-0346. The AFOSR Grant # FA9550-07-1-0346 provided partial support for the Conference

  1. Both vision-for-perception and vision-for-action follow Weber's law at small object sizes, but violate it at larger sizes.

    PubMed

    Bruno, Nicola; Uccelli, Stefano; Viviani, Eva; de'Sperati, Claudio

    2016-10-01

    According to a previous report, the visual coding of size does not obey Weber's law when aimed at guiding a grasp (Ganel et al., 2008a). This result has been interpreted as evidence for a fundamental difference between sensory processing in vision-for-perception, which needs to compress a wide range of physical objects to a restricted range of percepts, and vision-for-action when applied to the much narrower range of graspable and reachable objects. We compared finger aperture in a motor task (precision grip) and a perceptual task (cross-modal matching, or "manual estimation", of the object's size). Crucially, we tested the whole range of graspable objects. We report that both grips and estimations clearly violate Weber's law with medium-to-large objects, but are essentially consistent with Weber's law with smaller objects. These results differ from previous characterizations of perception-action dissociations in the precision of representations of object size. Implications for current functional interpretations of the dorsal and ventral processing streams in the human visual system are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
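
    The central quantity in this comparison is the Weber fraction: the just-noticeable difference (JND) divided by the stimulus magnitude, which Weber's law predicts stays constant. A minimal sketch with made-up numbers (not the study's data) showing how the fraction would be computed and how a violation looks:

```python
def weber_fraction(size_mm, jnd_mm):
    """Weber fraction: just-noticeable difference divided by stimulus size."""
    return jnd_mm / size_mm

# Hypothetical JNDs that follow Weber's law for small objects (constant
# fraction ~0.05) but flatten out for larger ones, violating the law.
sizes = [10, 20, 30, 60, 90]          # object sizes in mm (illustrative)
jnds  = [0.5, 1.0, 1.5, 1.8, 1.9]     # measured JNDs in mm (illustrative)

fractions = [weber_fraction(s, j) for s, j in zip(sizes, jnds)]
print([round(f, 3) for f in fractions])  # → [0.05, 0.05, 0.05, 0.03, 0.021]
```

    A constant fraction over the small sizes is consistent with Weber's law; the shrinking fraction at medium-to-large sizes mirrors the violation the study reports for both grips and manual estimations.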

  2. Universal design of a microcontroller and IoT system to detect the heart rate

    NASA Astrophysics Data System (ADS)

    Uwamahoro, Raphael; Mushikiwabeza, Alexie; Minani, Gerard; Mohan Murari, Bhaskar

    2017-11-01

    Heart rate analysis provides vital information about the present condition of the human body and helps medical professionals diagnose various malfunctions of the body. The limited ability of vision-impaired and blind people to access medical devices causes a considerable loss of life. In this paper, we develop a heart rate detection system that is usable by people with normal and abnormal vision. The system is based on a non-invasive method of measuring the variation of tissue blood flow rate by means of a photo transmitter and detector at the fingertip, known as photoplethysmography (PPG). The detected signal is first passed through an active low-pass filter and then amplified by a two-stage high-gain amplifier. The amplified signal is fed into a microcontroller, which calculates the heart rate and reports the heartbeat via a sound system and a Liquid Crystal Display (LCD). To distinguish arrhythmia, normal heart rate, and abnormal working conditions of the system, recognition is provided through different sounds, LCD readings, and Light Emitting Diodes (LEDs).
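
    The last step of the signal chain described above is firmware that counts beats in the filtered, amplified PPG waveform. A minimal sketch of that step, using a simple threshold-plus-local-maximum peak counter on a synthetic signal; the function name, threshold rule, and numbers are illustrative assumptions, not the authors' implementation:

```python
import math

def estimate_heart_rate(samples, fs_hz):
    """Estimate beats per minute by counting local maxima above the
    signal's midpoint; a deliberately simple stand-in for PPG firmware."""
    threshold = (max(samples) + min(samples)) / 2
    peaks = 0
    for i in range(1, len(samples) - 1):
        if samples[i] > threshold and samples[i-1] < samples[i] >= samples[i+1]:
            peaks += 1
    duration_s = len(samples) / fs_hz
    return 60.0 * peaks / duration_s

# Synthetic 10 s PPG-like waveform sampled at 100 Hz with a 1.2 Hz beat.
fs = 100
signal = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(10 * fs)]
print(round(estimate_heart_rate(signal, fs)))  # → 72
```

    Real PPG signals carry baseline drift and motion artifacts, which is why the paper's hardware filters and amplifies before this counting stage.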

  3. Bringing UAVs to the fight: recent army autonomy research and a vision for the future

    NASA Astrophysics Data System (ADS)

    Moorthy, Jay; Higgins, Raymond; Arthur, Keith

    2008-04-01

    The Unmanned Autonomous Collaborative Operations (UACO) program was initiated in recognition of the high operational burden associated with utilizing unmanned systems by both mounted and dismounted, ground and airborne warfighters. The program was previously introduced at the 62nd Annual Forum of the American Helicopter Society in May of 2006. This paper presents the three technical approaches taken and results obtained in UACO. All three approaches were validated extensively in contractor simulations, two were validated in government simulation, one was flight tested outside the UACO program, and one was flight tested in Part 2 of UACO. Results and recommendations are discussed regarding diverse areas such as user training and human-machine interface, workload distribution, UAV flight safety, data link bandwidth, user interface constructs, adaptive algorithms, air vehicle system integration, and target recognition. Finally, a vision for UAV As A Wingman is presented.

  4. Efficient image enhancement using sparse source separation in the Retinex theory

    NASA Astrophysics Data System (ADS)

    Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik

    2017-11-01

    Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
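
    The Retinex idea the abstract builds on can be illustrated with the classical single-scale variant: reflectance is recovered as the log of the signal minus the log of an estimated illumination. The sketch below is that simpler classical form, not the sparse-source-separation method proposed in the paper, and uses a crude 1-D box blur as the illumination estimate:

```python
import math

def box_blur(row, radius):
    """Crude local-illumination estimate: 1-D box blur with edge clamping."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def single_scale_retinex(row, radius=2):
    """Retinex output: log(signal) - log(estimated illumination)."""
    illum = box_blur(row, radius)
    return [math.log(p) - math.log(l) for p, l in zip(row, illum)]

# A bright-to-dark illumination gradient falling on a constant-reflectance
# surface: away from the edges, the Retinex output is flat (zero), i.e.
# the illumination component has been removed -- color constancy.
row = [100, 90, 80, 70, 60, 50, 40, 30]
r = single_scale_retinex(row)
print([round(v, 3) for v in r[2:6]])  # → [0.0, 0.0, 0.0, 0.0]
```

    The nonzero values at the row's ends come from the clamped blur window, a toy version of the boundary and halo artifacts that more sophisticated Retinex algorithms, including the paper's, try to suppress.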

  5. Crossing the quality chasm in resource-limited settings.

    PubMed

    Maru, Duncan Smith-Rohrberg; Andrews, Jason; Schwarz, Dan; Schwarz, Ryan; Acharya, Bibhav; Ramaiya, Astha; Karelas, Gregory; Rajbhandari, Ruma; Mate, Kedar; Shilpakar, Sona

    2012-11-30

    Over the last decade, extensive scientific and policy innovations have begun to reduce the "quality chasm"--the gulf between best practices and actual implementation that exists in resource-rich medical settings. While limited data exist, this chasm is likely to be equally acute and deadly in resource-limited areas. While health systems have begun to be scaled up in impoverished areas, scale-up is just the foundation necessary to deliver effective healthcare to the poor. This perspective piece describes a vision for a global quality improvement movement in resource-limited areas. The following action items are a first step toward achieving this vision: 1) revise global health investment mechanisms to value quality; 2) enhance human resources for improving health systems quality; 3) scale up data capacity; 4) deepen community accountability and engagement initiatives; 5) implement evidence-based quality improvement programs; 6) develop an implementation science research agenda.

  6. How do plants see the world? - UV imaging with a TiO2 nanowire array by artificial photosynthesis.

    PubMed

    Kang, Ji-Hoon; Leportier, Thibault; Park, Min-Chul; Han, Sung Gyu; Song, Jin-Dong; Ju, Hyunsu; Hwang, Yun Jeong; Ju, Byeong-Kwon; Poon, Ting-Chung

    2018-05-10

    The concept of plant vision refers to the fact that plants are receptive to their visual environment, although the mechanism involved is quite distinct from the human visual system. The mechanism in plants is not well understood and has yet to be fully investigated. In this work, we have exploited the properties of TiO2 nanowires as a UV sensor to simulate the phenomenon of photosynthesis in order to come one step closer to understanding how plants see the world. To the best of our knowledge, this study is the first approach to emulate and depict plant vision. We have emulated the visual map perceived by plants with a single-pixel imaging system combined with a mechanical scanner. The image acquisition has been demonstrated for several electrolyte environments, in both transmissive and reflective configurations, in order to explore the different conditions in which plants perceive light.

  7. Ingestible wireless capsules for enhanced diagnostic inspection of gastrointestinal tract

    NASA Astrophysics Data System (ADS)

    Rasouli, Mahdi; Kencana, Andy Prima; Huynh, Van An; Ting, Eng Kiat; Lai, Joshua Chong Yue; Wong, Kai Juan; Tan, Su Lim; Phee, Soo Jay

    2011-03-01

    Wireless capsule endoscopy has become a common procedure for diagnostic inspection of the gastrointestinal tract. This method offers a less invasive alternative to traditional endoscopy by eliminating its uncomfortable procedures. Moreover, it provides the opportunity to explore inaccessible areas of the small intestine. Current capsule endoscopes, however, move by peristalsis and are not capable of detailed, on-demand inspection of desired locations. Here, we propose and develop two wireless endoscopes with maneuverable vision systems to enhance diagnosis of gastrointestinal disorders. The vision systems in these capsules are equipped with mechanical actuators to adjust the position of the camera. This may help cover larger areas of the digestive tract and investigate desired locations. Preliminary experimental results showed that the developed platform could successfully communicate with the external control unit through the human body and adjust the position of the camera to a limited degree.

  8. Multiparameter vision testing apparatus

    NASA Technical Reports Server (NTRS)

    Hunt, S. R., Jr.; Homkes, R. J.; Poteate, W. B.; Sturgis, A. C. (Inventor)

    1975-01-01

    Compact vision testing apparatus is described for testing a large number of physiological characteristics of the eyes and visual system of a human subject. The head of the subject is inserted into a viewing port at one end of a light-tight housing containing various optical assemblies. Visual acuity and other refractive characteristics and ocular muscle balance characteristics of the eyes of the subject are tested by means of a retractable phoroptor assembly carried near the viewing port and a film cassette unit carried in the rearward portion of the housing (the latter selectively providing a variety of different visual targets which are viewed through the optical system of the phoroptor assembly). The visual dark adaptation characteristics and absolute brightness threshold of the subject are tested by means of a projector assembly which selectively projects one or both of a variable intensity fixation target and a variable intensity adaptation test field onto a viewing screen located near the top of the housing.

  9. The NASA Aviation Safety Program: Overview

    NASA Technical Reports Server (NTRS)

    Shin, Jaiwon

    2000-01-01

    In 1997, the United States set a national goal to reduce the fatal accident rate for aviation by 80% within ten years based on the recommendations by the Presidential Commission on Aviation Safety and Security. Achieving this goal will require the combined efforts of government, industry, and academia in the areas of technology research and development, implementation, and operations. To respond to the national goal, the National Aeronautics and Space Administration (NASA) has developed a program that will focus resources over a five year period on performing research and developing technologies that will enable improvements in many areas of aviation safety. The NASA Aviation Safety Program (AvSP) is organized into six research areas: Aviation System Modeling and Monitoring, System Wide Accident Prevention, Single Aircraft Accident Prevention, Weather Accident Prevention, Accident Mitigation, and Synthetic Vision. Specific project areas include Turbulence Detection and Mitigation, Aviation Weather Information, Weather Information Communications, Propulsion Systems Health Management, Control Upset Management, Human Error Modeling, Maintenance Human Factors, Fire Prevention, and Synthetic Vision Systems for Commercial, Business, and General Aviation aircraft. Research will be performed at all four NASA aeronautics centers and will be closely coordinated with Federal Aviation Administration (FAA) and other government agencies, industry, academia, as well as the aviation user community. This paper provides an overview of the NASA Aviation Safety Program goals, structure, and integration with the rest of the aviation community.

  10. A Sustained Proximity Network for Multi-Mission Lunar Exploration

    NASA Technical Reports Server (NTRS)

    Soloff, Jason A.; Noreen, Gary; Deutsch, Leslie; Israel, David

    2005-01-01

    The Vision for Space Exploration calls for an aggressive sequence of robotic missions beginning in 2008 to prepare for a human return to the Moon by 2020, with the goal of establishing a sustained human presence beyond low Earth orbit. A key enabler of exploration is reliable, available communication and navigation capabilities to support both human and robotic missions. An adaptable, sustainable communication and navigation architecture has been developed by Goddard Space Flight Center and the Jet Propulsion Laboratory to support human and robotic lunar exploration through the next two decades. A key component of the architecture is scalable deployment, with the infrastructure evolving as needs emerge, allowing NASA and its partner agencies to deploy an interoperable communication and navigation system in an evolutionary way, enabling cost effective, highly adaptable systems throughout the lunar exploration program.

  11. Extracting heading and temporal range from optic flow: Human performance issues

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Perrone, John A.; Stone, Leland; Banks, Martin S.; Crowell, James A.

    1993-01-01

    Pilots are able to extract information about their vehicle motion and environmental structure from dynamic transformations in the out-the-window scene. In this presentation, we focus on the information in the optic flow which specifies vehicle heading and distance to objects in the environment, scaled to a temporal metric. In particular, we are concerned with modeling how the human operators extract the necessary information, and what factors impact their ability to utilize the critical information. In general, the psychophysical data suggest that the human visual system is fairly robust to degradations in the visual display, e.g., reduced contrast and resolution or restricted field of view. However, extraneous motion flow, i.e., introduced by sensor rotation, greatly compromises human performance. The implications of these models and data for enhanced/synthetic vision systems are discussed.

  12. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system, such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. Focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
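
    The edge-detection stage mentioned in the abstract can be sketched in miniature: compute an intensity gradient per row and flag an obstacle when the response exceeds a threshold. This is a minimal stand-in under assumed names and numbers, not the paper's pipeline (which also includes filtering, segmentation, and enhancement):

```python
def horizontal_gradient(image):
    """Central-difference gradient per row, a minimal edge-detection stage."""
    return [
        [abs(row[x + 1] - row[x - 1]) for x in range(1, len(row) - 1)]
        for row in image
    ]

def detect_obstacle(image, threshold=50):
    """Flag an obstacle when any gradient response exceeds the threshold."""
    grad = horizontal_gradient(image)
    return any(v > threshold for row in grad for v in row)

grass = [[30, 32, 31, 33, 30]]     # low-contrast grass background
cone  = [[30, 32, 220, 222, 30]]   # bright obstacle against the grass
print(detect_obstacle(grass), detect_obstacle(cone))  # → False True
```

    In practice the threshold would be tuned against field conditions, and segmentation of the thresholded edge map would separate and classify the three obstacle types the paper studies.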

  13. A physiologically-based model for simulation of color vision deficiency.

    PubMed

    Machado, Gustavo M; Oliveira, Manuel M; Fernandes, Leandro A F

    2009-01-01

    Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color and visualization-related tasks. This has a significant impact on their private and professional lives. We present a physiologically-based model for simulating color vision. Our model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. We have validated the proposed model through an experimental evaluation involving groups of color vision deficient individuals and normal color vision ones. Our model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals.
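
    In practice, models of this kind are applied as a per-severity 3x3 matrix that maps linear RGB to the RGB a color vision deficient observer would perceive. The sketch below uses the protanopia (severity 1.0) matrix as it is commonly distributed for this model; treat the exact coefficients as an assumption to be checked against the published tables:

```python
# Commonly distributed severity-1.0 protanopia simulation matrix for this
# model (verify against the published data before relying on it).
PROTANOPIA = [
    [ 0.152286, 1.052583, -0.204868],
    [ 0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
]

def simulate_cvd(rgb, matrix):
    """Apply a CVD simulation matrix to one linear-RGB triple, clamped to [0, 1]."""
    return tuple(
        min(1.0, max(0.0, sum(matrix[i][j] * rgb[j] for j in range(3))))
        for i in range(3)
    )

# Pure red collapses to a dim, desaturated tone under simulated protanopia.
print(simulate_cvd((1.0, 0.0, 0.0), PROTANOPIA))  # → (0.152286, 0.114503, 0.0)
```

    Note that each matrix row sums to roughly 1, so neutral grays map to themselves, which is the behavior one expects from a simulation that only redistributes chromatic information.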

  14. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compression the digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  15. Artificial Intelligence/Robotics Applications to Navy Aircraft Maintenance.

    DTIC Science & Technology

    1984-06-01

    other automatic machinery such as presses, molding machines, and numerically-controlled machine tools, just as people do...Robotics Technologies... Relevant AI Technologies: 1. Expert Systems, 2. Automatic Planning, 3. Natural Language, 4. Machine Vision...building machines that imitate human behavior. Artificial intelligence is concerned with the functions of the brain, whereas robotics include, in

  16. The dorsal "action" pathway.

    PubMed

    Gallivan, Jason P; Goodale, Melvyn A

    2018-01-01

    In 1992, Goodale and Milner proposed a division of labor in the visual pathways of the primate cerebral cortex. According to their account, the ventral pathway, which projects to occipitotemporal cortex, constructs our visual percepts, while the dorsal pathway, which projects to posterior parietal cortex, mediates the visual control of action. Although the framing of the two-visual-system hypothesis has not been without controversy, it is clear that vision for action and vision for perception have distinct computational requirements, and significant support for the proposed neuroanatomic division has continued to emerge over the last two decades from human neuropsychology, neuroimaging, behavioral psychophysics, and monkey neurophysiology. In this chapter, we review much of this evidence, with a particular focus on recent findings from human neuroimaging and monkey neurophysiology, demonstrating a specialized role for parietal cortex in visually guided behavior. But even though the available evidence suggests that dedicated circuits mediate action and perception, in order to produce adaptive goal-directed behavior there must be a close coupling and seamless integration of information processing across these two systems. We discuss such ventral-dorsal-stream interactions and argue that the two pathways play different, yet complementary, roles in the production of skilled behavior. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Safe traffic : Vision Zero on the move

    DOT National Transportation Integrated Search

    2006-03-01

    Vision Zero is composed of several basic elements, each of which affects safety in road traffic. These concern ethics, human capability and tolerance, responsibility, scientific facts and a realisation that the different components in the ...

  18. Detection of eviscerated poultry spleen enlargement by machine vision

    NASA Astrophysics Data System (ADS)

    Tao, Yang; Shao, June J.; Skeeles, John K.; Chen, Yud-Ren

    1999-01-01

    The size of a poultry spleen is an indication of whether the bird is wholesome or has a virus-related disease. This study explored the possibility of detecting poultry spleen enlargement with a computer imaging system to assist human inspectors in food safety inspections. Images of 45-day-old hybrid turkey internal viscera were taken using fluorescent and UV lighting systems. Image processing algorithms including linear transformation, morphological operations, and statistical analyses were developed to distinguish the spleen from its surroundings and then to detect abnormal spleens. Experimental results demonstrated that the imaging method could effectively distinguish spleens from other organs and intestines. Based on a total sample of 57 birds, the classification rates were 92% on a self-test set and 95% on an independent test set for the correct detection of normal and abnormal birds. The methodology indicated the feasibility of using automated machine vision systems in the future to inspect internal organs and check the wholesomeness of poultry carcasses.

  19. Image processing system and method for recognizing and removing shadows from the image of a monitored scene

    DOEpatents

    Osbourn, Gordon C.

    1996-01-01

    The shadow contrast sensitivity of the human vision system is simulated by configuring information obtained from an image sensor so that the information may be evaluated with multiple pixel widths in order to produce a machine vision system able to distinguish between shadow edges and abrupt object edges. A second difference of the image intensity for each line of the image is developed and this second difference is used to screen out high frequency noise contributions from the final edge detection signals. These edge detection signals are constructed from first differences of the image intensity where the screening conditions are satisfied. The positional coincidence of oppositely signed maxima in the first difference signal taken from the right and the second difference signal taken from the left is used to detect the presence of an object edge. Alternatively, the effective number of responding operators (ENRO) may be utilized to determine the presence of object edges.
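
    The core discrimination the patent describes rests on first and second differences of image intensity along a scan line: an abrupt object edge produces a large, localized second difference, while a gradual shadow penumbra does not. A simplified 1-D sketch of that screening idea (it omits the patent's multi-width operators, left/right maxima coincidence test, and ENRO alternative):

```python
def first_difference(line):
    """Discrete first difference of pixel intensities along one scan line."""
    return [line[i + 1] - line[i] for i in range(len(line) - 1)]

def second_difference(line):
    """Discrete second difference: the first difference applied twice."""
    return first_difference(first_difference(line))

# An abrupt object edge versus a gradual (linear) shadow penumbra.
object_edge = [10, 10, 10, 80, 80, 80]   # step change in intensity
shadow_ramp = [10, 24, 38, 52, 66, 80]   # linear ramp, same total change

print(max(abs(v) for v in second_difference(object_edge)))  # → 70
print(max(abs(v) for v in second_difference(shadow_ramp)))  # → 0
```

    Both lines carry the same overall intensity change, so a first-difference detector alone would fire on both; the second difference is what lets the system screen out the shadow ramp, matching the role it plays in the patented method.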

  20. A computer vision system for diagnosing scoliosis using moiré images.

    PubMed

    Batouche, M; Benlamri, R; Kholladi, M K

    1996-07-01

    For young people, scoliosis deformities are an evolving process which must be detected and treated as early as possible. The moiré technique is simple, inexpensive, non-aggressive, and especially convenient for detecting spinal deformations. Doctors make their diagnosis by analysing the symmetry of the fringes obtained by such techniques. In this paper, we present a computer vision system to help diagnose spinal deformations using noisy moiré images of the human back. The approach adopted in this paper consists of extracting fringe contours from moiré images, then localizing some anatomical features (the spinal column, lumbar hollow, and shoulder blades) which are crucial for the 3D surface generation carried out using Mota's relaxation operator. Finally, rules furnished by doctors are used to derive the kind of spinal deformation and to yield the diagnosis. The proposed system has been tested on a set of noisy moiré images, and the experimental results have shown its robustness and reliability for the recognition of most scoliosis deformities.

  1. EnVision+, a new dextran polymer-based signal enhancement technique for in situ hybridization (ISH).

    PubMed

    Wiedorn, K H; Goldmann, T; Henne, C; Kühl, H; Vollmer, E

    2001-09-01

    Seventy paraffin-embedded cervical biopsy specimens and condylomata were tested for the presence of human papillomavirus (HPV) by conventional in situ hybridization (ISH) and ISH with subsequent signal amplification. Signal amplification was performed either by a commercial biotinyl-tyramide-based detection system [GenPoint (GP)] or by the novel two-layer dextran polymer visualization system EnVision+ (EV), in which both EV-horseradish peroxidase (EV-HRP) and EV-alkaline phosphatase (EV-AP) were applied. We could demonstrate for the first time, that EV in combination with preceding ISH results in a considerable increase in signal intensity and sensitivity without loss of specificity compared to conventional ISH. Compared to GP, EV revealed a somewhat lower sensitivity, as measured by determination of the integrated optical density (IOD) of the positively stained cells. However, EV is easier to perform, requires a shorter assay time, and does not raise the background problems that may be encountered with biotinyl-tyramide-based amplification systems. (J Histochem Cytochem 49:1067-1071, 2001)

  2. Measuring the learning capacity of organisations: development and factor analysis of the Questionnaire for Learning Organizations.

    PubMed

    Oudejans, S C C; Schippers, G M; Schramade, M H; Koeter, M W J; van den Brink, W

    2011-04-01

    To investigate internal consistency and factor structure of a questionnaire measuring learning capacity based on Senge's theory of the five disciplines of a learning organisation: Personal Mastery, Mental Models, Shared Vision, Team Learning, and Systems Thinking. Cross-sectional study. Substance-abuse treatment centres (SATCs) in The Netherlands. A total of 293 SATC employees from outpatient and inpatient treatment departments, financial and human resources departments. Psychometric properties of the Questionnaire for Learning Organizations (QLO), including factor structure, internal consistency, and interscale correlations. A five-factor model representing the five disciplines of Senge showed good fit. The scales for Personal Mastery, Shared Vision and Team Learning had good internal consistency, but the scales for Systems Thinking and Mental Models had low internal consistency. The proposed five-factor structure was confirmed in the QLO, which makes it a promising instrument to assess learning capacity in teams. The Systems Thinking and the Mental Models scales have to be revised. Future research should be aimed at testing criterion and discriminatory validity.

  3. A methodology for coupling a visual enhancement device to human visual attention

    NASA Astrophysics Data System (ADS)

    Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman

    2009-02-01

    The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.

  4. Color vision test for dichromatic and trichromatic macaque monkeys.

    PubMed

    Koida, Kowa; Yokoi, Isao; Okazawa, Gouki; Mikami, Akichika; Widayati, Kanthi Arum; Miyachi, Shigehiro; Komatsu, Hidehiko

    2013-11-01

    Dichromacy is a color vision defect in which one of the three cone photoreceptors is absent. Individuals with dichromacy are called dichromats (or sometimes "color-blind"), and their color discrimination performance has contributed significantly to our understanding of color vision. Macaque monkeys, which normally have trichromatic color vision that is nearly identical to humans', have been used extensively in neurophysiological studies of color vision. In the present study we employed two tests, a pseudoisochromatic color discrimination test and a monochromatic light detection test, to compare the color vision of genetically identified dichromatic macaques (Macaca fascicularis) with that of normal trichromatic macaques. In the color discrimination test, dichromats could not discriminate colors along the protanopic confusion line, though trichromats could. In the light detection test, the relative thresholds for longer wavelength light were higher in the dichromats than the trichromats, indicating dichromats to be less sensitive to longer wavelength light. Because the dichromatic macaque is very rare, the present study provides valuable new information on the color vision behavior of dichromatic macaques, which may be a useful animal model of human dichromacy. The behavioral tests used in the present study have previously been used to characterize the color behaviors of trichromatic as well as dichromatic New World monkeys. The present results show that comparative studies of color vision employing similar tests may be feasible for examining the difference in color behaviors between trichromatic and dichromatic individuals, although the genetic mechanisms of trichromacy/dichromacy are quite different between New World monkeys and macaques.

  5. Dan Goldin Presentation: Pathway to the Future

    NASA Technical Reports Server (NTRS)

    1999-01-01

    In the "Path to the Future" presentation held at NASA's Langley Center on March 31, 1999, NASA's Administrator Daniel S. Goldin outlined the future direction and strategies of NASA in relation to the general space exploration enterprise. NASA's Vision, Future System Characteristics, Evolutions of Engineering, and Revolutionary Changes are the four main topics of the presentation. In part one, the Administrator talks in detail about NASA's vision in relation to the NASA Strategic Activities, which are Space Science, Earth Science, Human Exploration, and Aeronautics & Space Transportation. Topics discussed in this section include: space science for the 21st century, flying in the Mars atmosphere (the Mars plane), exploring new worlds, interplanetary internets, Earth observation and measurements, a distributed information-system-in-the-sky, science enabling understanding and application, the space station, microgravity, science and exploration strategies, a human Mars mission, the advanced space transportation program, general aviation revitalization, and reusable launch vehicles. In part two, he briefly talks about the future system characteristics. He discusses major system characteristics like resiliency, self-sufficiency, high distribution, ultra-efficiency, and autonomy, and the necessity to overcome distance, time, and extreme-environment barriers. Part three of Mr. Goldin's talk deals with engineering evolution, mainly evolution in Computer Aided Design (CAD)/Computer Aided Engineering (CAE) systems. These systems include computer aided drafting, computerized solid models, virtual product development (VPD) systems, networked VPD systems, and knowledge-enriched networked VPD systems. In part four, the last part, the Administrator talks about the need for revolutionary changes in the communication and networking areas of a system.
According to the administrator, the four major areas that need cultural changes in the creativity process are human-centered computing, an infrastructure for distributed collaboration, rapid synthesis and simulation tools, and life-cycle integration and validation. Mr. Goldin concludes his presentation with the following maxim "Collaborate, Integrate, Innovate or Stagnate and Evaporate." He also answers some questions after the presentation.

  6. Automatic image orientation detection via confidence-based integration of low-level and semantic cues.

    PubMed

    Luo, Jiebo; Boutell, Matthew

    2005-05-01

    Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, which is impressive given the findings of a recently conducted psychophysical study. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
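The confidence-based integration described above can be illustrated with a toy sketch. The weighting scheme, the four-orientation hypothesis space, and all numbers below are hypothetical illustrations, not the authors' actual Bayesian model:

```python
def fuse_orientation(low_level, semantic, w):
    """Confidence-weighted fusion of two cue distributions over the four
    candidate orientations (0, 90, 180, 270 degrees). w is the confidence
    placed in the semantic cue; the result is renormalized to sum to 1."""
    fused = [(l ** (1.0 - w)) * (s ** w) for l, s in zip(low_level, semantic)]
    z = sum(fused)
    return [f / z for f in fused]

low = [0.4, 0.2, 0.3, 0.1]   # e.g. from spatial color/texture statistics
sem = [0.7, 0.1, 0.1, 0.1]   # e.g. from a face or sky detector
post = fuse_orientation(low, sem, w=0.6)
best = post.index(max(post))  # index 0 -> image already upright
```

When the semantic detector fires confidently (large `w`), it dominates the fused posterior; when it is unreliable, the low-level cues carry the decision.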

  7. A vision-based fall detection algorithm of human in indoor environment

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Guo, Yongcai

    2017-02-01

    Elderly care is becoming more and more prominent in China, as the population is aging fast and the aging population is large. Falls, one of the biggest challenges in elderly guardianship systems, have a serious impact on both the physical and mental health of the aged. Based on feature descriptors such as the aspect ratio of the human silhouette, the velocity of the mass center, the moving distance of the head, and the angle of the final posture, a novel vision-based fall detection method is proposed in this paper. A fast median method of background modeling with three frames is also suggested. Compared with the conventional bounding-box and ellipse methods, the novel fall detection technique is applicable not only to recognizing falls that end in lying down but also to detecting falls that end in kneeling or sitting down. In addition, extensive experimental results showed that the method achieves good recognition accuracy without adding time cost.
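The three-frame median background model mentioned above can be sketched as a per-pixel median that rejects a moving object as an outlier. The helper names and the threshold are illustrative assumptions, not the authors' code:

```python
def median3(a, b, c):
    # median of three values: sort the triple and take the middle element
    return sorted((a, b, c))[1]

def median_background(frames):
    """Per-pixel median over three consecutive grayscale frames
    (given as 2-D lists of intensities)."""
    f0, f1, f2 = frames
    return [[median3(f0[r][c], f1[r][c], f2[r][c])
             for c in range(len(f0[0]))] for r in range(len(f0))]

def foreground_mask(frame, background, thresh=25):
    """Binary mask: 1 where the frame deviates from the background."""
    return [[1 if abs(p - b) > thresh else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

f0 = [[10, 10]]
f1 = [[10, 200]]   # 200: a moving object passes through this pixel
f2 = [[12, 10]]
bg = median_background([f0, f1, f2])   # the 200 outlier is rejected
mask = foreground_mask(f1, bg)         # flags only the moving pixel
```

The median makes the background estimate robust to a moving person occupying a pixel in only one of the three frames.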

  8. The International Space Station: Stepping-stone to Exploration

    NASA Technical Reports Server (NTRS)

    Gerstenmaier, William H.; Kelly, Brian K.

    2005-01-01

    As the Space Shuttle returns to flight this year, major reconfiguration and assembly of the International Space Station continues as the United States and our five International Partners resume building and continue operating this impressive Earth-orbiting research facility. In his January 14, 2004, speech announcing a new vision for America's space program, President Bush reaffirmed the United States' commitment to completing construction of the ISS by 2010. The ongoing research aboard the Station on the long-term effects of space travel on human physiology will greatly benefit human crews venturing through the vast voids of space for months at a time. The continual operation of the ISS leads to new knowledge about the design, development, and operation of systems and hardware that will be utilized in the development of new deep-space vehicles needed to fulfill the Vision for Exploration. This paper will provide an overview of the ISS Program, including a review of the events of the past year, as well as plans for next year and the future.

  9. Monovision techniques for telerobots

    NASA Technical Reports Server (NTRS)

    Goode, P. W.; Carnils, K.

    1987-01-01

    The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.

  10. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi laser/multi detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated Rover motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, driving linkages, and motors and transmissions, are treated separately.

  11. Science and the Constellation Systems Program Office

    NASA Technical Reports Server (NTRS)

    Mendell, Wendell

    2007-01-01

    An underlying tension has existed throughout the history of NASA between the human spaceflight programs and the external scientific constituencies of the robotic exploration programs. The large human space projects have been perceived as squandering resources that might otherwise be utilized for scientific discoveries. In particular, the history of the relationship of science to the International Space Station Program has not been a happy one. The leadership of the Constellation Program Office, created in NASA in October 2005, asked me to serve on the Program Manager's staff as a liaison to the science community. Through the creation of my position, the Program Manager wanted to communicate and elucidate decisions inside the program to the scientific community and, conversely, ensure that the community had a voice at the highest levels within the program. Almost all of my technical contributions at NASA, dating back to the Apollo Program, have been within the auspices of what is now known as the Science Mission Directorate. However, working at the Johnson Space Center, where human spaceflight is the principal activity, has given me a good deal of incidental contact and some more direct exposure through management positions to the structures and culture of human spaceflight programs. I entered the Constellation family somewhat naive but not uninformed. In addition to my background in NASA science, I have also written extensively over the past 25 years on the topic of human exploration of the Moon and Mars. (See, for example, Mendell, 1985). I have found that my scientific colleagues generally have little understanding of the structure and processes of a NASA program office; and many of them do not recognize the name, Constellation. In many respects, the international ILEWG community is better informed. 
Nevertheless, some NASA decision processes on the role of science, particularly with respect to the formulation of a lunar surface architecture, are not well known, even in ILEWG. At the recent annual Lunar and Planetary Science Conference, I reviewed the evolution of the program as a function of Agency leadership and the constraints put on NASA by the President in his 2004 announcement. I plan to continue my long-time ILEWG tradition of reporting a personal view of the state of development of human exploration of the solar system, this time coming from within the program office tasked to implement the vision for the United States. The current NASA implementation of the Vision for Space Exploration is consistent with certain classical scenarios that have been discussed extensively in the literature. I will discuss the role of science within the Vision, both from official policy and from a de facto interaction. While science goals are not officially driving the implementation of the Vision, the tools of scientific exploration are integral to defining the extraterrestrial design environments. In this respect the sharing of results from international missions to the Moon can make significant contributions to the success of the future human activities.

  12. Robonaut: A Robotic Astronaut Assistant

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.; Diftler, Myron A.

    2001-01-01

    NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three-degree-of-freedom (DOF) articulated waist and two seven-DOF arms, giving it an impressive workspace for interacting with its environment. Its two five-fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.

  13. Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition

    NASA Astrophysics Data System (ADS)

    Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso

    2005-04-01

    Human movement analysis is generally performed with marker-based systems, which allow reconstructing, with high accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, have some drawbacks that can be overcome by video systems applying markerless techniques. In this paper, a computer vision technique specifically designed for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a Principal Component Analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction in computational cost with no significant reduction in tracking accuracy.
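How PCA can circumscribe a region of interest from tracked body points can be sketched with a closed-form 2-D principal axis. This is a generic illustration under assumed names; the paper's actual PCA stage is not specified at this level of detail:

```python
import math

def principal_axis(points):
    """Dominant principal component of 2-D body points via closed-form
    eigen-analysis of the 2x2 covariance matrix: returns the centroid
    and the angle of the leading axis."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # angle of the leading eigenvector of [[sxx, sxy], [sxy, syy]]
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (mx, my), theta

# points roughly along the x-axis -> axis angle near zero
points = [(0, 0.0), (1, 0.1), (2, -0.1), (3, 0.0)]
centre, theta = principal_axis(points)
```

A bounding region aligned with this axis (centroid plus spread along each eigenvector) then circumscribes the body silhouette for subsequent tracking.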

  14. Vision, healing brush, and fiber bundles

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor

    2005-03-01

    The Healing Brush is a tool introduced for the first time in Adobe Photoshop (2002) that removes defects in images by seamless cloning (gradient domain fusion). The Healing Brush algorithms are built on a new mathematical approach that uses fibre bundles and connections to model the representation of images in the visual system. Our mathematical results are derived from first principles of human vision, related to adaptation transforms of von Kries type and Retinex theory. In this paper we present the new result of healing in an arbitrary color space. In addition to supporting image repair and seamless cloning, our approach also produces the exact solution to the problem of high dynamic range compression and can be applied to other image processing algorithms.
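Seamless cloning by gradient-domain fusion amounts to solving a discrete Poisson equation: inside the masked region the result adopts the source's gradients while matching the target on the boundary. The Jacobi-style toy solver below is a generic sketch of that idea, not Adobe's Healing Brush implementation:

```python
def poisson_clone(target, source, mask, iters=200):
    """Gradient-domain fusion on small 2-D intensity grids (lists of
    lists): Jacobi sweeps of the discrete Poisson equation, with the
    source's Laplacian as the guidance field and the target fixed
    outside the mask."""
    h, w = len(target), len(target[0])
    out = [row[:] for row in target]
    for _ in range(iters):
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                if not mask[r][c]:
                    continue
                # Laplacian of the source supplies the guidance field
                lap = (4 * source[r][c] - source[r - 1][c] - source[r + 1][c]
                       - source[r][c - 1] - source[r][c + 1])
                out[r][c] = (out[r - 1][c] + out[r + 1][c]
                             + out[r][c - 1] + out[r][c + 1] + lap) / 4.0
    return out

# a "defect" pixel (0) surrounded by 10s, healed with a flat source:
target = [[10, 10, 10], [10, 0, 10], [10, 10, 10]]
source = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
result = poisson_clone(target, source, mask)
```

Because the flat source has zero Laplacian, the healed pixel smoothly interpolates its neighbors rather than copying the source's absolute intensity, which is the essence of seamless (as opposed to literal) cloning.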

  15. The New HIT: Human Health Information Technology.

    PubMed

    Leung, Tiffany I; Goldstein, Mary K; Musen, Mark A; Cronkite, Ruth; Chen, Jonathan H; Gottlieb, Assaf; Leitersdorf, Eran

    2017-01-01

    Humanism in medicine is defined as health care providers' attitudes and actions that demonstrate respect for patients' values and concerns in relation to their social, psychological and spiritual life domains. Specifically, humanistic clinical medicine involves showing respect for the patient, building a personal connection, and eliciting and addressing a patient's emotional response to illness. Health information technology (IT) often interferes with humanistic clinical practice, potentially disabling these core aspects of the therapeutic patient-physician relationship. Health IT has evolved rapidly in recent years - and the imperative to maintain humanism in practice has never been greater. In this vision paper, we aim to discuss why preserving humanism is imperative in the design and implementation of health IT systems.

  16. Building a Shared Definitional Model of Long Duration Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Arias, Diana; Orr, Martin; Whitmire, Alexandra; Leveton, Lauren; Sandoval, Luis

    2012-01-01

    Objective: To establish the need for a shared definitional model of long duration human spaceflight that would provide a framework and vision to facilitate communication, research, and practice. In 1956, on the eve of human space travel, Hubertus Strughold first proposed a "simple classification of the present and future stages of manned flight" that identified key factors, risks, and developmental stages for the evolutionary journey ahead. As we look to new destinations, we need a current, shared working definitional model of long duration human space flight to help guide our path. Here we describe our preliminary findings and outline potential approaches for the future development of a definition and broader classification system.

  17. Establishing a shared vision in your organization. Winning strategies to empower your team members.

    PubMed

    Rinke, W J

    1989-01-01

    Today's health-care climate demands that you manage your human resources more effectively. Meeting the dual challenges of providing more with less requires that you tap the vast hidden resources that reside in every one of your team members. Harnessing these untapped energies requires that all of your employees clearly understand the purpose, direction, and the desired future state of your laboratory. Once this image is widely shared, your team members will know their roles in the organization and the contributions they can make to attaining the organization's vision. This shared vision empowers people and enhances their self-esteem as they recognize they are accomplishing a worthy goal. You can create and install a shared vision in your laboratory by adhering to a five-step process. The result will be a unity of purpose that will release the untapped human resources in your organization so that you can do more with less.

  18. B. F. Skinner's Utopian Vision: Behind and Beyond Walden Two

    PubMed Central

    Altus, Deborah E; Morris, Edward K

    2009-01-01

    This paper addresses B. F. Skinner's utopian vision for enhancing social justice and human well-being in his 1948 novel, Walden Two. In the first part, we situate the book in its historical, intellectual, and social context of the utopian genre, address critiques of the book's premises and practices, and discuss the fate of intentional communities patterned on the book. The central point here is that Skinner's utopian vision was not any of Walden Two's practices, except one: the use of empirical methods to search for and discover practices that worked. In the second part, we describe practices in Skinner's book that advance social justice and human well-being under the themes of health, wealth, and wisdom, and then show how the subsequent literature in applied behavior analysis supports Skinner's prescience. Applied behavior analysis is a measure of the success of Skinner's utopian vision: to experiment. PMID:22478531

  19. Intelligent Vision On The SM9O Mini-Computer Basis And Applications

    NASA Astrophysics Data System (ADS)

    Hawryszkiw, J.

    1985-02-01

    A distinction has to be made between image processing and vision. Image processing finds its roots in the strong tradition of linear signal processing and promotes geometrical transform techniques such as filtering, compression, and restoration. Its purpose is to transform an image so that a human observer can easily extract from it information significant to him, for example edges after a gradient operator, or a specific direction after a directional filtering operation. Image processing consists, in fact, of a set of local or global space-time transforms; the interpretation of the final image is done by the human observer. The purpose of vision is to extract the semantic content of the image. The machine can then understand that content and run a process of decision, which turns into an action. Thus, intelligent vision depends on image processing, pattern recognition, and artificial intelligence.

  20. Contact Lenses for Color Blindness.

    PubMed

    Badawy, Abdel-Rahman; Hassan, Muhammad Umair; Elsherif, Mohamed; Ahmed, Zubair; Yetisen, Ali K; Butt, Haider

    2018-06-01

    Color vision deficiency (color blindness) is an inherited genetic ocular disorder. While no cure for this disorder currently exists, several methods can be used to increase the color perception of those affected. One such method is the use of color filtering glasses which are based on Bragg filters. While these glasses are effective, they are high cost, bulky, and incompatible with other vision correction eyeglasses. In this work, a rhodamine derivative is incorporated in commercial contact lenses to filter out the specific wavelength bands (≈545-575 nm) to correct color vision blindness. The biocompatibility assessment of the dyed contact lenses in human corneal fibroblasts and human corneal epithelial cells shows no toxicity and cell viability remains at 99% after 72 h. This study demonstrates the potential of the dyed contact lenses in wavelength filtering and color vision deficiency management. © 2018 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Non-visual crypsis: a review of the empirical evidence for camouflage to senses other than vision.

    PubMed

    Ruxton, Graeme D

    2009-02-27

    I review the evidence that organisms have adaptations that confer difficulty of detection by predators and parasites that seek their targets primarily using sensory systems other than vision. In other words, I will answer the question of whether crypsis is a concept that can usefully be applied to non-visual sensory perception. Probably because vision is such an important sensory system in humans, research in this field is sparse. Thus, at present we have very few examples of chemical camouflage, and even these contain some ambiguity in deciding whether they are best seen as examples of background matching or mimicry. There are many examples of organisms that are adaptively silent at times or in locations when or where predation risk is higher or in response to detection of a predator. By contrast, evidence that the form (rather than use) of vocalizations and other sound-based signals has been influenced by issues of reducing detectability to unintended receivers is suggestive rather than conclusive. There is again suggestive but not completely conclusive evidence for crypsis against electro-sensing predators. Lastly, mechanoreception is highly understudied in this regard, but there are scattered reports that strongly suggest that some species can be thought of as being adapted to be cryptic in this modality. Hence, I conclude that crypsis is a concept that can usefully be applied to senses other than vision, and that this is a field very much worthy of more investigation.

  2. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    NASA Astrophysics Data System (ADS)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor, and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor to determine a detector's location would also limit the possible locations, but it does not allow for room dependence (facility-dependent deviation) in generating a detector pseudo-location to be used for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm from its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. 
The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average calibration-difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, these results indicate that detector-dependent adjustments are required.
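The calibration-difference metric is simply the Euclidean distance between the algorithm-predicted and hand-measured 3-D detector locations; a minimal sketch (the function name is an assumption):

```python
import math

def calibration_difference(predicted, measured):
    """Euclidean distance, in the units of the inputs, between the
    algorithm-predicted and hand-measured 3-D detector locations."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)))

# a prediction offset 0.2 m along one axis gives a 0.2 m difference
d = calibration_difference((1.0, 2.0, 0.5), (1.2, 2.0, 0.5))
```

Averaging this quantity over the 27 static measurements yields the per-sensor figures (20 cm, 35 cm, 22 cm, ...) reported in the abstract.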

  3. Human pheromones and sexual attraction.

    PubMed

    Grammer, Karl; Fink, Bernhard; Neave, Nick

    2005-02-01

    Olfactory communication is very common amongst animals, and since the discovery of an accessory olfactory system in humans, possible human olfactory communication has gained considerable scientific interest. The importance of the human sense of smell has by far been underestimated in the past. Humans and other primates have been regarded as primarily 'optical animals' with highly developed powers of vision but a relatively undeveloped sense of smell. In recent years this assumption has undergone major revision. Several studies indicate that humans indeed seem to use olfactory communication and are even able to produce and perceive certain pheromones; recent studies have found that pheromones may play an important role in the behavioural and reproduction biology of humans. In this article we review the present evidence of the effect of human pheromones and discuss the role of olfactory cues in human sexual behaviour.

  4. Roughness based perceptual analysis towards digital skin imaging system with haptic feedback.

    PubMed

    Kim, K

    2016-08-01

    To examine psoriasis or atopic eczema, analyzing skin roughness by palpation is essential to precisely diagnose skin diseases. However, optical-sensor-based skin imaging systems do not allow dermatologists to touch skin images. To solve this problem, a new haptic rendering technology that can accurately display skin roughness must be developed. In addition, the rendering algorithm must be able to filter spatial noise created during 2D-to-3D image conversion without losing the original roughness of the skin image. In this study, a perceptual approach to designing a noise filter that removes spatial noise while recovering maximal roughness is introduced, based on an understanding of human sensitivity to surface roughness. A visuohaptic rendering system that lets a user see and touch digital skin surface roughness has been developed, including a geometric roughness estimation method for a meshed surface. Following this, a psychophysical experiment was designed and conducted with 12 human subjects to measure human perception with the developed visual and haptic interfaces when examining surface roughness. The psychophysical experiment showed that touch is more sensitive at lower surface roughness, and vice versa. Human perception with both senses, vision and touch, becomes less sensitive to surface distortions as roughness increases. When interacting through both channels, the visual and haptic interfaces, the ability to detect abnormalities in roughness is greatly improved by sensory integration with the developed visuohaptic rendering system. The result can be used as a guideline to design a noise filter that perceptually removes spatial noise while recovering maximal roughness values from a digital skin image obtained by optical sensors. In addition, the result confirms that the developed visuohaptic rendering system can help dermatologists and skin care professionals examine skin conditions using vision and touch at the same time. 
© 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

    This report (RX-TY-TR-2011-0096-01) summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.

  6. Materials Challenges in Space Exploration

    NASA Technical Reports Server (NTRS)

    Vickers, John; Shah, Sandeep

    2005-01-01

    The new vision of space exploration encompasses a broad range of human and robotic missions to the Moon, Mars and beyond. Extended human space travel requires high reliability and high performance systems for propulsion, vehicle structures, thermal and radiation protection, crew habitats and health monitoring. Advanced materials and processing technologies are necessary to meet the exploration mission requirements. Materials and processing technologies must be sufficiently mature before they can be inserted into a development program leading to an exploration mission. Exploration will be more affordable by in-situ utilization of materials on the Moon and Mars.

  7. Position and Orientation Tracking in a Ubiquitous Monitoring System for Parkinson Disease Patients With Freezing of Gait Symptom

    PubMed Central

    Català, Andreu; Rodríguez Martín, Daniel; van der Aa, Nico; Chen, Wei; Rauterberg, Matthias

    2013-01-01

    Background Freezing of gait (FoG) is one of the most disturbing and least understood symptoms in Parkinson disease (PD). Although the majority of existing assistive systems assume accurate detections of FoG episodes, the detection itself is still an open problem. The specificity of FoG is its dependency on the context of a patient, such as the current location or activity. Knowing the patient's context might improve FoG detection. One of the main technical challenges that needs to be solved in order to start using contextual information for FoG detection is accurate estimation of the patient's position and orientation toward key elements of his or her indoor environment. Objective The objectives of this paper are to (1) present the concept of the monitoring system, based on wearable and ambient sensors, which is designed to detect FoG using the spatial context of the user, (2) establish a set of requirements for the application of position and orientation tracking in FoG detection, (3) evaluate the accuracy of the position estimation for the tracking system, and (4) evaluate two different methods for human orientation estimation. Methods We developed a prototype system to localize humans and track their orientation, as an important prerequisite for a context-based FoG monitoring system. To set up the system for experiments with real PD patients, the accuracy of the position and orientation tracking was assessed under laboratory conditions in 12 participants. To collect the data, the participants were asked to wear a smartphone, with and without known orientation around the waist, while walking over a predefined path in the marked area captured by two Kinect cameras with non-overlapping fields of view. Results We used the root mean square error (RMSE) as the main performance measure. The vision-based position tracking algorithm achieved RMSE = 0.16 m in position estimation for upright standing people. 
The experimental results for the proposed human orientation estimation methods demonstrated the adaptivity and robustness to changes in the smartphone attachment position, when the fusion of both vision and inertial information was used. Conclusions The system achieves satisfactory accuracy on indoor position tracking for the use in the FoG detection application with spatial context. The combination of inertial and vision information has the potential for correct patient heading estimation even when the inertial wearable sensor device is put into an a priori unknown position. PMID:25098265
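The RMSE performance measure used for the position tracking can be sketched as follows; this is a generic illustration with assumed variable names, not the authors' evaluation code:

```python
import math

def rmse(estimates, ground_truth):
    """Root mean square Euclidean error between estimated and true
    2-D floor positions (lists of (x, y) tuples, in meters)."""
    n = len(estimates)
    se = sum((ex - gx) ** 2 + (ey - gy) ** 2
             for (ex, ey), (gx, gy) in zip(estimates, ground_truth))
    return math.sqrt(se / n)

# every estimate offset by 0.16 m along x -> overall RMSE of 0.16 m
est = [(0.16, 0.0), (1.16, 1.0)]
gt = [(0.0, 0.0), (1.0, 1.0)]
err = rmse(est, gt)
```

Because the error is squared before averaging, occasional large tracking failures dominate the score, which is why RMSE is a conservative choice for assessing position estimates.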

  8. Position and orientation tracking in a ubiquitous monitoring system for Parkinson disease patients with freezing of gait symptom.

    PubMed

    Takač, Boris; Català, Andreu; Rodríguez Martín, Daniel; van der Aa, Nico; Chen, Wei; Rauterberg, Matthias

    2013-07-15

    Freezing of gait (FoG) is one of the most disturbing and least understood symptoms in Parkinson disease (PD). Although the majority of existing assistive systems assume accurate detections of FoG episodes, the detection itself is still an open problem. The specificity of FoG is its dependency on the context of a patient, such as the current location or activity. Knowing the patient's context might improve FoG detection. One of the main technical challenges that needs to be solved in order to start using contextual information for FoG detection is accurate estimation of the patient's position and orientation toward key elements of his or her indoor environment. The objectives of this paper are to (1) present the concept of the monitoring system, based on wearable and ambient sensors, which is designed to detect FoG using the spatial context of the user, (2) establish a set of requirements for the application of position and orientation tracking in FoG detection, (3) evaluate the accuracy of the position estimation for the tracking system, and (4) evaluate two different methods for human orientation estimation. We developed a prototype system to localize humans and track their orientation, as an important prerequisite for a context-based FoG monitoring system. To set up the system for experiments with real PD patients, the accuracy of the position and orientation tracking was assessed under laboratory conditions in 12 participants. To collect the data, the participants were asked to wear a smartphone, with and without known orientation around the waist, while walking over a predefined path in the marked area captured by two Kinect cameras with non-overlapping fields of view. We used the root mean square error (RMSE) as the main performance measure. The vision-based position tracking algorithm achieved RMSE = 0.16 m in position estimation for upright standing people. 
The experimental results for the proposed human orientation estimation methods demonstrated the adaptivity and robustness to changes in the smartphone attachment position, when the fusion of both vision and inertial information was used. The system achieves satisfactory accuracy on indoor position tracking for the use in the FoG detection application with spatial context. The combination of inertial and vision information has the potential for correct patient heading estimation even when the inertial wearable sensor device is put into an a priori unknown position.
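
    As a quick illustration of the figure of merit used above, RMSE over a tracked 2-D trajectory can be computed as follows (a minimal sketch with made-up coordinates, not the authors' code):

```python
import math

def position_rmse(estimated, ground_truth):
    """Root mean square error between estimated and ground-truth 2-D positions (metres)."""
    if len(estimated) != len(ground_truth):
        raise ValueError("trajectories must have the same length")
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

# A single estimate offset by (0.3, 0.4) m from the truth gives RMSE ~ 0.5 m.
print(position_rmse([(1.3, 2.4)], [(1.0, 2.0)]))
```

In practice the error would be averaged over every frame of the walking path, exactly as the 0.16 m figure above aggregates the whole laboratory session.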

  9. Exploration Architecture Options - ECLSS, EVA, TCS Implications

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Henninger, Don; Lawrence, Carl

    2009-01-01

    Many options for exploration of the Moon and Mars have been identified and evaluated since the Vision for Space Exploration (VSE) was announced in 2004. Lunar architectures have been identified and addressed by the Lunar Surface Systems team to establish options for how to get to, and then inhabit and explore, the Moon. The Augustine Commission evaluated human space flight for the Obama administration and identified many options for how to conduct human spaceflight in the future. This paper evaluates the lunar and Mars exploration options, and those of the Augustine human spaceflight commission, for the implications of each architecture on the Environmental Control and Life Support System (ECLSS), Extravehicular Activity (EVA), and Thermal Control System (TCS). The advantages and disadvantages of each architecture and option are presented.

  10. Energy Storage: Batteries and Fuel Cells for Exploration

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle A.; Miller, Thomas B.; Hoberecht, Mark A.; Baumann, Eric D.

    2007-01-01

    NASA's Vision for Exploration requires safe, human-rated, energy storage technologies with high energy density, high specific energy and the ability to perform in a variety of unique environments. The Exploration Technology Development Program is currently supporting the development of battery and fuel cell systems that address these critical technology areas. Specific technology efforts that advance these systems and optimize their operation in various space environments are addressed in this overview of the Energy Storage Technology Development Project. These technologies will support a new generation of more affordable, more reliable, and more effective space systems.

  11. Adaptive control for eye-gaze input system

    NASA Astrophysics Data System (ADS)

    Zhao, Qijie; Tu, Dawei; Yin, Hairong

    2004-01-01

    The characteristics of a vision-based human-computer interaction system are analyzed, together with its present practical applications and limiting factors, and information-processing methods are put forward. To make the communication flexible and spontaneous, an algorithm for adaptive compensation of the user's head movement was designed, and event-based methods and an object-oriented computer language were used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms meet the needs of the HCI.

  12. Human vision is determined based on information theory.

    PubMed

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-03

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
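
    For context, the intensity-only optimum that the paper argues is insufficient can be reproduced numerically: scanning the Planck spectral radiance for a Sun-like temperature (about 5778 K) puts the peak near 501 nm, between the photopic (555 nm) and scotopic (508 nm) values quoted above. This is an illustrative sketch of the blackbody intensity peak only, not the paper's entropy calculation:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(lam_m, temp_k):
    """Spectral radiance B(lambda, T) of a black body, W sr^-1 m^-3."""
    x = H * C / (lam_m * KB * temp_k)
    return (2.0 * H * C ** 2) / (lam_m ** 5 * (math.exp(x) - 1.0))

def peak_wavelength_nm(temp_k):
    """Locate the radiance peak by scanning 200-1200 nm in 0.1 nm steps."""
    lams = [200e-9 + i * 0.1e-9 for i in range(10001)]
    return max(lams, key=lambda lam: planck_radiance(lam, temp_k)) * 1e9

print(peak_wavelength_nm(5778.0))  # ~501.5 nm, matching Wien's law b/T
```

The entropy-weighted optimum of the paper shifts this value toward the measured absorption peaks once atmospheric transmission is included.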

  13. Human vision is determined based on information theory

    NASA Astrophysics Data System (ADS)

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  14. Human vision is determined based on information theory

    PubMed Central

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-01-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition. PMID:27808236

  15. Development of a Machine-Vision System for Recording of Force Calibration Data

    NASA Astrophysics Data System (ADS)

    Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat

    This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system was used to capture images of the readings of the instruments during calibration. The measurement images were then transformed and translated to numerical data using an optical character recognition (OCR) technique. These numerical data, along with the raw images, were automatically saved as the calibration database files; with this new system, human recording errors are eliminated. Verification experiments were performed by using the system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN); NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate the test forces. The experiments fell into three categories: (1) dynamic condition (recording during load changes), (2) static condition (recording during fixed load), and (3) full calibration in accordance with ISO 376:2011. In the dynamic-condition experiment, >94% of the captured images showed no overlapping digits; in the static-condition experiment, >98% of the images showed no overlapping. All measurement images without overlapping were translated to numbers by the developed program with 100% accuracy, and the full calibration experiments also gave 100% accurate results. Moreover, should any result be translated incorrectly, the raw calibration image can be traced back to check and correct it. This machine-vision-based system and program should therefore be appropriate for recording force calibration data.
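
    The guard against overlapping or illegible digits described above can be sketched as a validation step between the OCR output and the database. The names `parse_reading` and `record`, and the regex for a well-formed reading, are assumptions for illustration, not the authors' implementation:

```python
import re

READING = re.compile(r"^[+-]?\d+(\.\d+)?$")  # a clean decimal reading, e.g. "10.0025"

def parse_reading(ocr_text):
    """Return the numeric value if the OCR string is a clean decimal number,
    or None so the raw image can be flagged for manual inspection."""
    text = ocr_text.strip()
    if READING.match(text):
        return float(text)
    return None  # overlapping/garbled digits -> keep only the raw image

def record(image_path, ocr_text):
    """Store both the raw image reference and (when valid) the numeric value."""
    return {"image": image_path, "value": parse_reading(ocr_text)}

print(record("frame_0421.png", "10.0025"))   # clean read: value recorded
print(record("frame_0422.png", "1O.OO25"))   # garbled read: value is None
```

Keeping the raw image alongside the parsed value is what makes the trace-back-and-correct step in the abstract possible.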

  16. Vehicle-based vision sensors for intelligent highway systems

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1989-09-01

    This paper describes a vision system, based on an ASIC (Application-Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
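
    The triangulation underlying such a distance measurement reduces, in the pinhole model, to Z = f * B / d. The sketch below uses hypothetical focal length, baselines, and disparities; the third camera's wider pair serves as the kind of consistency check that motivates a trinocular rig:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: Z = f * B / d (metres)."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: target at infinity or bad correspondence")
    return focal_px * baseline_m / disparity_px

def consistent(z1, z2, tol=0.05):
    """Two camera pairs should agree on depth; disagreement flags a bad match."""
    return abs(z1 - z2) <= tol * max(z1, z2)

z_ab = depth_from_disparity(700.0, 0.12, 10.0)   # narrow pair: ~8.4 m
z_ac = depth_from_disparity(700.0, 0.24, 20.0)   # wide pair: same target, ~8.4 m
print(z_ab, consistent(z_ab, z_ac))
```

A wider baseline doubles the disparity for the same depth, which is why the wide pair gives finer depth resolution while the narrow pair keeps nearby targets matchable.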

  17. Accelerating the Development of 21st-Century Toxicology: Outcome of a Human Toxicology Project Consortium Workshop

    PubMed Central

    Stephens, Martin L.; Barrow, Craig; Andersen, Melvin E.; Boekelheide, Kim; Carmichael, Paul L.; Holsapple, Michael P.; Lafranconi, Mark

    2012-01-01

    The U.S. National Research Council (NRC) report on “Toxicity Testing in the 21st Century” calls for a fundamental shift in the way that chemicals are tested for human health effects and evaluated in risk assessments. The new approach would move toward in vitro methods, typically using human cells in a high-throughput context. The in vitro methods would be designed to detect significant perturbations to “toxicity pathways,” i.e., key biological pathways that, when sufficiently perturbed, lead to adverse health outcomes. To explore progress on the report’s implementation, the Human Toxicology Project Consortium hosted a workshop on 9–10 November 2010 in Washington, DC. The Consortium is a coalition of several corporations, a research institute, and a non-governmental organization dedicated to accelerating the implementation of 21st-century toxicology as aligned with the NRC vision. The goal of the workshop was to identify practical and scientific ways to accelerate implementation of the NRC vision. The workshop format consisted of plenary presentations, breakout group discussions, and concluding commentaries. The program faculty was drawn from industry, academia, government, and public interest organizations. Most presentations summarized ongoing efforts to modernize toxicology testing and approaches, each with some overlap with the NRC vision. In light of these efforts, the workshop identified recommendations for accelerating implementation of the NRC vision, including greater strategic coordination and planning across projects (facilitated by a steering group), the development of projects that test the proof of concept for implementation of the NRC vision, and greater outreach and communication across stakeholder communities. PMID:21948868

  18. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  19. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  20. Wireless sensor systems for sense/decide/act/communicate.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Nina M.; Cushner, Adam; Baker, James A.

    2003-12-01

    After 9/11, the United States was suddenly pushed into challenging situations that it could no longer observe as a simple spectator. The War on Terrorism (WoT) was ignited, and no one knows when this war will end. While the government is exploring many existing and potential technologies, the area of wireless sensor networks (WSN) has emerged as a foundation for establishing future national security. Unlike other technologies, WSN could provide the virtual-presence capabilities needed for precision awareness and response in military, intelligence, and homeland security applications. The Advanced Concepts Group (ACG) vision of a Sense/Decide/Act/Communicate (SDAC) sensor system is an instantiation of the WSN concept that takes a 'system of systems' view. Each sensing node will exhibit the ability to: sense the environment around it, decide as a collective what the situation of the environment is, act in an intelligent and coordinated manner in response to this situational determination, and communicate its actions to the other nodes and to a human command. This LDRD report provides a review of the research and development done to bring the SDAC vision closer to reality.

  1. Automated intelligent video surveillance system for ships

    NASA Astrophysics Data System (ADS)

    Wei, Hai; Nguyen, Hieu; Ramu, Prakash; Raju, Chaitanya; Liu, Xiaoqing; Yadegar, Jacob

    2009-05-01

    To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems able to detect, identify, track, and alert the crew to small watercraft that might pursue malicious intentions, while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but is also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has shown accurate and effective threat-detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save manpower while increasing the accuracy of detection and identification of asymmetric attacks for ship protection.

  2. Information theory analysis of sensor-array imaging systems for computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than the levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband, to minimize aliasing at the cost of blurring, and the SNR is very high, to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density, by up to about 30 percent at high SNRs.
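
    The claim that most information sits in the intensity transitions rather than the levels can be illustrated with a toy scanline: the Shannon entropy of the raw samples is spread across the levels, while the first-difference signal is almost all zeros, concentrating the surprise at the few edge samples. This is a hand-rolled illustration, not the paper's analysis:

```python
import math
from collections import Counter

def entropy_bits(samples):
    """Shannon entropy of the empirical distribution, in bits per sample."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy scanline: long flat runs separated by two sharp edges.
scan = [0] * 20 + [7] * 20 + [3] * 20
transitions = [b - a for a, b in zip(scan, scan[1:])]

print(entropy_bits(scan))         # three equally likely levels -> ~1.58 bits/sample
print(entropy_bits(transitions))  # mostly zeros -> far fewer bits/sample
```

Differencing moves nearly all the code cost onto the edge samples, which is the sense in which "the transitions carry the information" in such imagery.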

  3. Neural network expert system for X-ray analysis of welded joints

    NASA Astrophysics Data System (ADS)

    Kozlov, V. V.; Lapik, N. V.; Popova, N. V.

    2018-03-01

    The use of intelligent technologies for automated analysis of product quality is one of the main trends in modern machine building. At the same time, methods based on artificial neural networks, which form the basis for building automated intelligent diagnostic systems, are developing rapidly across many spheres of human activity. Machine vision technologies make it possible to detect the presence of certain regularities in the analyzed image, including defects of welded joints, from radiography data.

  4. 3 CFR 8635 - Proclamation 8635 of March 4, 2011. Save Your Vision Week, 2011

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Americans to take action to safeguard their eyesight. Vision is important to our everyday activities, and we... of Health and Human Services’ science-based agenda to prevent disease and promote health, our country...

  5. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
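
    A minimal caricature of the LIF/OLI split might look as follows; the per-pixel mean-and-tolerance model and all names are assumptions for illustration, not the SMV implementation:

```python
def learn_inspection_features(good_samples):
    """LIF stage: learn per-pixel mean and tolerance from known-good boards.
    The +2.0 margin is an arbitrary illustrative slack on the observed spread."""
    n = len(good_samples)
    width = len(good_samples[0])
    means = [sum(s[i] for s in good_samples) / n for i in range(width)]
    tols = [max(abs(s[i] - means[i]) for s in good_samples) + 2.0 for i in range(width)]
    return means, tols

def online_inspect(sample, means, tols):
    """OLI stage: return indices where the sample leaves the learned tolerance band."""
    return [i for i, v in enumerate(sample) if abs(v - means[i]) > tols[i]]

means, tols = learn_inspection_features([[10, 50, 10], [12, 52, 10], [11, 51, 10]])
print(online_inspect([11, 51, 40], means, tols))  # pixel 2 is out of band -> [2]
```

The same two-stage shape holds whether the reference statistics come from good boards (as in the VFD system) or are derived from CAD data (as in the PCB system).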

  6. Understanding, creating, and managing complex techno-socio-economic systems: Challenges and perspectives

    NASA Astrophysics Data System (ADS)

    Helbing, D.; Balietti, S.; Bishop, S.; Lukowicz, P.

    2011-05-01

    This contribution reflects on the comments of Peter Allen [1], Bikas K. Chakrabarti [2], Péter Érdi [3], Juval Portugali [4], Sorin Solomon [5], and Stefan Thurner [6] on three White Papers (WP) of the EU Support Action Visioneer (www.visioneer.ethz.ch). These White Papers are entitled "From Social Data Mining to Forecasting Socio-Economic Crises" (WP 1) [7], "From Social Simulation to Integrative System Design" (WP 2) [8], and "How to Create an Innovation Accelerator" (WP 3) [9]. In our reflections, the need for and feasibility of a "Knowledge Accelerator" is further substantiated by fundamental considerations and recent events around the globe.

    The Visioneer White Papers propose research to be carried out that will improve our understanding of complex techno-socio-economic systems and their interaction with the environment. Thereby, they aim to stimulate multi-disciplinary collaborations between ICT, the social sciences, and complexity science. Moreover, they suggest combining the potential of massive real-time data, theoretical models, large-scale computer simulations, and participatory online platforms. By doing so, it would become possible to explore various futures and to expand the limits of human imagination when it comes to the assessment of the often counter-intuitive behavior of these complex techno-socio-economic-environmental systems. In this contribution, we also highlight the importance of a pluralistic modeling approach and, in particular, the need for a fruitful interaction between quantitative and qualitative research approaches.

    In an appendix we briefly summarize the concept of the FuturICT flagship project, which will build on and go beyond the proposals made by the Visioneer White Papers. EU flagships are ambitious multi-disciplinary high-risk projects with a duration of at least 10 years and an envisaged overall budget of 1 billion EUR [10]. The goal of the FuturICT flagship initiative is to understand and manage complex, global, socially interactive systems, with a focus on sustainability and resilience.

  7. Characterization of the Structure and Function of the Normal Human Fovea Using Adaptive Optics Scanning Laser Ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Putnam, Nicole Marie

    In order to study the limits of spatial vision in normal human subjects, it is important to look at and near the fovea. The fovea is the specialized part of the retina, the light-sensitive multi-layered neural tissue that lines the inner surface of the human eye, where the cone photoreceptors are smallest (approximately 2.5 microns, or 0.5 arcmin) and cone density reaches a peak. In addition, there is a 1:1 mapping from the photoreceptors to the brain in this central region of the retina. As a result, the best spatial sampling is achieved in the fovea, and it is the retinal location used for acuity and spatial vision tasks. However, vision is typically limited by the blur induced by the normal optics of the eye, and clinical tests of foveal vision and foveal imaging are both limited by this blur. As a result, it is unclear what the perceptual benefit of the extremely high cone density is. Cutting-edge imaging technology, specifically Adaptive Optics Scanning Laser Ophthalmoscopy (AOSLO), can be utilized to remove this blur, zoom in, and as a result visualize individual cone photoreceptors throughout the central fovea. This imaging, combined with simultaneous image stabilization and targeted stimulus delivery, expands our understanding of both the anatomical structure of the fovea on a microscopic scale and the placement of stimuli within this retinal area during visual tasks. The final step is to investigate the role of temporal variables in spatial vision tasks, since the eye is in constant motion even during steady fixation; to learn more about the fovea, it becomes important to study the effect of this motion on spatial vision tasks. This dissertation steps through many of these considerations, starting with a model of the foveal cone mosaic imaged with AOSLO. We then use this high-resolution imaging to compare anatomical and functional markers of the center of the normal human fovea. Finally, we investigate the role of natural and manipulated fixational eye movements in foveal vision, looking specifically at a motion detection task, contrast sensitivity, and image fading.

  8. Optical See-Through Head Mounted Display Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise

    NASA Technical Reports Server (NTRS)

    Axholt, Magnus; Skoglund, Martin; Peterson, Stephen D.; Cooper, Matthew D.; Schoen, Thomas B.; Gustafsson, Fredrik; Ynnerman, Anders; Ellis, Stephen R.

    2010-01-01

    Augmented Reality (AR) is a technique by which computer-generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch, and hearing can all be augmented, but most commonly AR refers to overlaying human vision with information otherwise not readily available to the user. A correct calibration is important at the application level, ensuring, e.g., that data labels are presented at correct locations, but also at the system level, enabling display techniques such as stereoscopy to function properly [SOURCE]. Calibration methodology is thus a research area vital to AR. While great achievements have already been made, some properties of current calibration methods for augmented vision do not translate from their traditional use in automated camera calibration to use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation (DLT) camera calibration to investigate how user-introduced head orientation noise affects the parameter estimation during a calibration procedure of an optical see-through head-mounted display.
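
    The full simulation estimates the eleven DLT parameters; as a toy stand-in, the same Monte Carlo logic can be shown on a 1-D least-squares fit, injecting Gaussian "alignment" noise and measuring the scatter of the recovered parameter. All numbers here are illustrative, not the paper's setup:

```python
import math
import random

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def slope_scatter(noise_sigma, trials=500, seed=1):
    """Monte Carlo: std of the recovered slope when observations carry Gaussian noise."""
    rng = random.Random(seed)
    xs = [float(i) for i in range(10)]
    slopes = []
    for _ in range(trials):
        ys = [2.0 * x + 1.0 + rng.gauss(0.0, noise_sigma) for x in xs]
        slopes.append(fit_line(xs, ys)[0])
    m = sum(slopes) / trials
    return math.sqrt(sum((s - m) ** 2 for s in slopes) / trials)

print(slope_scatter(0.0))                         # no user noise: exact recovery
print(slope_scatter(0.5) < slope_scatter(2.0))    # scatter grows with alignment noise
```

The paper's question is the multi-parameter analogue: how fast the estimated camera parameters degrade as head-orientation noise grows during the alignment procedure.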

  9. Understanding human visual systems and its impact on our intelligent instruments

    NASA Astrophysics Data System (ADS)

    Strojnik Scholl, Marija; Páez, Gonzalo; Scholl, Michelle K.

    2013-09-01

    We review the evolution of machine vision and comment on the cross-fertilization from the neural sciences onto the flourishing fields of neural processing, parallel processing, and associative memory in the optical sciences and computing. Then we examine how the intensive efforts in mapping the human brain have been influenced by concepts in computer science, control theory, and electronic circuits. We discuss two neural paths that employ input from the vision sense to determine navigational options and object recognition: the ventral temporal pathway for object recognition (what?) and the dorsal parietal pathway for navigation (where?). We describe the reflexive and conscious decision centers in the cerebral cortex involved in visual attention and gaze control; interestingly, these require a return path through the midbrain for ocular muscle control. We find that cognitive psychologists currently study the human brain using low-spatial-resolution fMRI with a temporal response on the order of a second, while in recent years life scientists have concentrated on insect brains to study neural processes. We discuss how reflexive and conscious gaze-control decisions are made in the frontal eye field and the inferior parietal lobe, which constitute the fronto-parietal attention network. We note that ethical and experiential learning impacts our conscious decisions.

  10. Characterization and Operation of Liquid Crystal Adaptive Optics Phoropter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awwal, A; Bauman, B; Gavel, D

    2003-02-05

    Adaptive optics (AO), a mature technology developed for astronomy to compensate for the effects of atmospheric turbulence, can also be used to correct the aberrations of the eye. The classic phoropter is used by ophthalmologists and optometrists to estimate and correct the lower-order aberrations of the eye, defocus and astigmatism, in order to derive a vision correction prescription for their patients. An adaptive optics phoropter measures and corrects the aberrations in the human eye using adaptive optics techniques, which are capable of dealing with both the standard low-order aberrations and higher-order aberrations, including coma and spherical aberration. High-order aberrations have been shown to degrade visual performance for clinical subjects in initial investigations. An adaptive optics phoropter has been designed and constructed based on a Shack-Hartmann sensor to measure the aberrations of the eye, and a liquid crystal spatial light modulator to compensate for them. This system should produce near diffraction-limited optical image quality at the retina, which will enable investigation of the psychophysical limits of human vision. This paper describes the characterization and operation of the AO phoropter with results from human subject testing.
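
    As a 1-D caricature of how a Shack-Hartmann measurement becomes a wavefront, the local slopes reported by the lenslets can be integrated zonally. Real reconstructors solve a 2-D least-squares or modal (e.g. Zernike) fit; this sketch is illustrative only:

```python
def reconstruct_wavefront(slopes, spacing):
    """Zonal 1-D reconstruction: cumulative trapezoidal integration of the
    local slopes measured at each lenslet, anchored at W(0) = 0."""
    w = [0.0]
    for s0, s1 in zip(slopes, slopes[1:]):
        w.append(w[-1] + 0.5 * (s0 + s1) * spacing)
    return w

# A defocus-like quadratic wavefront W(x) = x**2 has slope 2x; trapezoidal
# integration of the sampled slopes recovers the parabola exactly.
spacing = 0.1
xs = [i * spacing for i in range(11)]
slopes = [2.0 * x for x in xs]
w = reconstruct_wavefront(slopes, spacing)
print(all(abs(wi - x * x) < 1e-9 for wi, x in zip(w, xs)))
```

In the phoropter, the reconstructed wavefront (rather than this 1-D profile) drives the liquid crystal spatial light modulator that applies the compensating phase.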

  11. How virtual reality works: illusions of vision in "real" and virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.

    1995-04-01

    Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.

  12. Exploration Architecture Options - ECLSS, TCS, EVA Implications

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Henninger, Don

    2011-01-01

    Many options for exploration of space have been identified and evaluated since the Vision for Space Exploration (VSE) was announced in 2004. The Augustine Commission evaluated human space flight for the Obama administration, then the Human Exploration Framework Teams (HEFT and HEFT2) evaluated potential exploration missions and the infrastructure and technology needs for those missions. Lunar architectures have been identified and addressed by the Lunar Surface Systems team to establish options for how to get to, and then inhabit and explore, the moon. This paper evaluates these exploration architecture options for their implications for the Environmental Control and Life Support (ECLSS), Thermal Control (TCS), and Extravehicular Activity (EVA) Systems.

  13. Lightweight Nonmetallic Thermal Protection Materials Technology

    NASA Technical Reports Server (NTRS)

    Valentine, Peter G.; Lawrence, Timothy W.; Gubert, Michael K.; Milos, Frank S.; Levine, Stanley R.; Ohlhorst, Craig W.; Koenig, John R.

    2005-01-01

    To fulfill President George W. Bush's "Vision for Space Exploration" (2004) - successful human and robotic missions to and from other solar system bodies in order to explore their atmospheres and surfaces - the National Aeronautics and Space Administration (NASA) must reduce the trip time, cost, and vehicle weight so that the payload and scientific experiments' capabilities can be maximized. The new project described in this paper will generate thermal protection system (TPS) products that will enable greater fidelity in mission/vehicle design trade studies, support risk reduction for material selections, assist in the optimization of vehicle weights, and provide materials and processes templates for use in the development of human-rated TPS qualification and certification plans.

  14. Conference on Intelligent Robotics in Field, Factory, Service and Space (CIRFFSS 1994), Volume 2

    NASA Technical Reports Server (NTRS)

    Erickson, Jon D. (Editor)

    1994-01-01

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed the following topics: (1) vision systems integration and architecture; (2) selective perception and human robot interaction; (3) robotic systems technology; (4) military and other field applications; (5) dual-use precommercial robotic technology; (6) building operations; (7) planetary exploration applications; (8) planning; (9) new directions in robotics; and (10) commercialization.

  15. Application of aircraft navigation sensors to enhanced vision systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.

    1993-01-01

    In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.
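One application named above, image registration from navigation data, can be sketched with a simple pinhole-camera projection: given aircraft position and attitude from GPS/INS, a surveyed ground point (e.g. a runway threshold) is projected into sensor pixel coordinates so that sensor imagery and a database view can be aligned. The NED axis convention, the boresight assumption, and all parameter values below are illustrative, not from the paper.

```python
import numpy as np

def rotation_ned_to_body(roll, pitch, yaw):
    """Direction-cosine matrix from local NED axes to aircraft body axes
    (standard aerospace Z-Y-X Euler sequence, angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    return Rx @ Ry @ Rz

def project_to_sensor(p_world_ned, p_aircraft_ned, roll, pitch, yaw,
                      f_px, cx, cy):
    """Project a ground point into pixel coordinates of a forward-looking
    sensor boresighted with the body x-axis (x forward, y right, z down)."""
    d_body = rotation_ned_to_body(roll, pitch, yaw) @ (p_world_ned - p_aircraft_ned)
    fwd, right, down = d_body
    if fwd <= 0:
        return None                      # point is behind the sensor
    return (cx + f_px * right / fwd, cy + f_px * down / fwd)

# Aircraft 1000 m short of the runway threshold, 50 m up, wings level
px = project_to_sensor(np.array([1000.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, -50.0]),
                       0.0, 0.0, 0.0, f_px=800.0, cx=320.0, cy=240.0)
print(px)
```

Navigation error propagates directly through this projection, which is why the FAA accuracy requirements discussed in the presentation constrain how well a sensor image can be registered to a database.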

  16. Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.

    PubMed

    Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer

    2015-11-01

    Counting objects is a fundamental image processing primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible to humans (under certain intuitively reasonable assumptions that we justify in our paper experimentally); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms, or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
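The decomposition-plus-fallback idea can be illustrated with a toy sketch: run a cheap vision counter on each image segment, and pay for a (here simulated) crowd answer only on segments where the vision counter reports low confidence. The function names, confidence model, and threshold are illustrative assumptions, not the paper's actual JellyBean algorithms.

```python
def hybrid_count(segments, vision_counter, ask_crowd, conf_threshold=0.8):
    """Count objects across image segments, falling back to the crowd
    only where the vision estimate is not trusted."""
    total, questions = 0, 0
    for seg in segments:
        count, confidence = vision_counter(seg)
        if confidence >= conf_threshold:
            total += count               # trust the vision estimate
        else:
            total += ask_crowd(seg)      # pay for a human answer
            questions += 1
    return total, questions

# Simulated segments: (true_count, vision_estimate, vision_confidence).
# Decomposition keeps per-segment counts small, where crowds are accurate.
segments = [(3, 3, 0.95), (12, 9, 0.40), (5, 5, 0.90), (20, 14, 0.30)]
result = hybrid_count(
    segments,
    vision_counter=lambda s: (s[1], s[2]),
    ask_crowd=lambda s: s[0],            # crowd assumed accurate on small cells
)
print(result)
```

In this toy run only the two low-confidence segments generate human questions, which is the cost/accuracy trade the paper's algorithms optimize.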

  17. A Feasibility Study of View-independent Gait Identification

    DTIC Science & Technology

    2012-03-01

    For walking, the footprint records for single pixels form clusters that are well separated in space and time.

  18. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users

    PubMed Central

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2014-01-01

    Background Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population in order to evaluate the impact of new vision restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional vision ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. Methods The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional vision tasks for observation of performance, and a case narrative summary. Results were analyzed to determine whether the interview questions and functional vision tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, twenty-six subjects were assessed with the FLORA. Seven different evaluators administered the assessment. Results All 14 interview questions were asked. All 35 functional vision tasks were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options -- impossible (33%), difficult (23%), moderate (24%) and easy (19%) -- were used by the evaluators. Evaluators also judged the amount of vision the subjects were observed using to complete the various tasks; on average, subjects used vision for 75% of tasks with the System ON and 29% with the System OFF. Conclusion The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as a functional vision and well-being assessment tool. PMID:25675964
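The floor/ceiling analysis above amounts to tallying the four rating categories and checking whether an extreme category dominates. A minimal sketch, using the study's reported percentages as a synthetic rating list and an illustrative 50% cutoff that is not from the paper:

```python
from collections import Counter

def rating_distribution(ratings):
    """Fraction of task ratings in each FLORA category, plus simple
    floor/ceiling flags: an instrument shows a floor (or ceiling) effect
    when the hardest (or easiest) category dominates responses."""
    counts = Counter(ratings)
    n = len(ratings)
    dist = {c: counts.get(c, 0) / n
            for c in ("impossible", "difficult", "moderate", "easy")}
    floor_effect = dist["impossible"] > 0.5
    ceiling_effect = dist["easy"] > 0.5
    return dist, floor_effect, ceiling_effect

# Synthetic ratings matching the reported 33/23/24/19 percent split
ratings = (["impossible"] * 33 + ["difficult"] * 23 +
           ["moderate"] * 24 + ["easy"] * 19)
dist, floor_effect, ceiling_effect = rating_distribution(ratings)
print(dist, floor_effect, ceiling_effect)
```

With responses spread across all four categories, neither flag trips, consistent with the study's conclusion that the FLORA avoids floor and ceiling effects.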

  19. Contextualising and Analysing Planetary Rover Image Products through the Web-Based PRoGIS

    NASA Astrophysics Data System (ADS)

    Morley, Jeremy; Sprinks, James; Muller, Jan-Peter; Tao, Yu; Paar, Gerhard; Huber, Ben; Bauer, Arnold; Willner, Konrad; Traxler, Christoph; Garov, Andrey; Karachevtseva, Irina

    2014-05-01

    The international planetary science community has launched, landed and operated dozens of human and robotic missions to the planets and the Moon. They have collected various surface imagery that has only been partially utilized for further scientific purposes. The FP7 project PRoViDE (Planetary Robotics Vision Data Exploitation) is assembling a major portion of the imaging data gathered so far from planetary surface missions into a unique database, bringing them into a spatial context and providing access to a complete set of 3D vision products. Processing is complemented by a multi-resolution visualization engine that combines various levels of detail for a seamless and immersive real-time access to dynamically rendered 3D scenes. PRoViDE aims to (1) complete relevant 3D vision processing of planetary surface missions, such as Surveyor, Viking, Pathfinder, MER, MSL, Phoenix, Huygens, and Lunar ground-level imagery from Apollo, Russian Lunokhod and selected Luna missions, (2) provide highest resolution & accuracy remote sensing (orbital) vision data processing results for these sites to embed the robotic imagery and its products into spatial planetary context, (3) collect 3D vision processing and remote sensing products within a single coherent spatial database, (4) realise seamless fusion between orbital and ground vision data, (5) demonstrate the potential of planetary surface vision data by maximising image quality visualisation in a 3D publishing platform, (6) collect and formulate use cases for novel scientific application scenarios exploiting the newly introduced spatial relationships and presentation, (7) demonstrate the concepts for MSL, (8) realize on-line dissemination of key data & its presentation by a web-based GIS and rendering tool named PRoGIS (Planetary Robotics GIS).
PRoGIS is designed to give access to rover image archives in geographical context, using projected image view cones, obtained from existing meta-data and updated according to processing results, as a means to interact with and explore the archive. However, PRoGIS is more than a source data explorer. It is linked to the PRoVIP (Planetary Robotics Vision Image Processing) system which includes photogrammetric processing tools to extract terrain models, compose panoramas, and explore and exploit multi-view stereo (where features on the surface have been imaged from different rover stops). We have started with the Opportunity MER rover as our test mission but the system is being designed to be multi-mission, taking advantage in particular of UCL MSSL's PDS mirror, and we intend to at least deal with both MER rovers and MSL. For the remainder of PRoViDE, until the end of 2015, the further intent is to handle lunar and other Martian rover & descent camera data. The presentation discusses the challenges of integrating rover and orbital derived data into a single geographical framework, especially reconstructing view cones; our human-computer interaction intentions in creating an interface to the rover data that is accessible to planetary scientists; how we handle multi-mission data in the database; and a demonstration of the resulting system & its processing capabilities. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE.
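The projected view cones that PRoGIS uses for interaction can be sketched as simple 2D map polygons built from each image's camera position, pointing azimuth, and horizontal field of view. The azimuth convention (clockwise from north) and all parameter values below are assumptions for illustration, not PRoGIS's actual metadata model.

```python
import math

def view_cone(x, y, azimuth_deg, fov_deg, max_range, n_arc=8):
    """2D view-cone polygon for one rover image: apex at the camera
    position, fanning out over the horizontal field of view, with the
    far edge approximated by n_arc chords."""
    half = math.radians(fov_deg) / 2.0
    az = math.radians(azimuth_deg)
    pts = [(x, y)]                        # cone apex at the camera
    for i in range(n_arc + 1):
        a = az - half + i * (2 * half / n_arc)
        pts.append((x + max_range * math.sin(a),
                    y + max_range * math.cos(a)))
    return pts

# A 45-degree cone looking due east with a 10 m useful range
poly = view_cone(0.0, 0.0, 90.0, 45.0, 10.0)
print(len(poly), poly[0])
```

Polygons like this, one per archived image, are what a web GIS can render and hit-test so that clicking a map location retrieves every image whose cone covers it.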

  20. Night vision: changing the way we drive

    NASA Astrophysics Data System (ADS)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.
