Sample records for simulation vision model

  1. A physiologically-based model for simulation of color vision deficiency.

    PubMed

    Machado, Gustavo M; Oliveira, Manuel M; Fernandes, Leandro A F

    2009-01-01

    Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color and visualization-related tasks. This has a significant impact on their private and professional lives. We present a physiologically-based model for simulating color vision. Our model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. We have validated the proposed model through an experimental evaluation involving groups of color vision deficient individuals and individuals with normal color vision. Our model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals.
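
In derived tools, the model above is typically applied as a 3x3 matrix multiplication in linear RGB. A minimal sketch, assuming the protanopia (severity 1.0) matrix commonly redistributed for Machado et al. (2009) — treat the coefficients as an assumption and verify them against the published data:

```python
# Matrix-based CVD simulation in linear RGB. The 3x3 matrix below is the
# protanopia (severity 1.0) matrix commonly redistributed for Machado et al.
# (2009); the coefficients are an assumption to be checked against the paper.

PROTANOPIA = [
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
]

def srgb_to_linear(c):
    """Undo the sRGB transfer function (c in [0, 1])."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Apply the sRGB transfer function, clamping to [0, 1]."""
    c = min(max(c, 0.0), 1.0)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def simulate_cvd(rgb, matrix=PROTANOPIA):
    """Simulate the dichromatic appearance of one sRGB pixel (values in [0, 1])."""
    lin = [srgb_to_linear(c) for c in rgb]
    out = [sum(m * c for m, c in zip(row, lin)) for row in matrix]
    return tuple(linear_to_srgb(c) for c in out)

# For a protanope, saturated red and green both collapse toward similar hues:
red_sim = simulate_cvd((1.0, 0.0, 0.0))
green_sim = simulate_cvd((0.0, 1.0, 0.0))
```

Each matrix row sums to approximately 1, so achromatic colors are preserved; intermediate severities of anomalous trichromacy are usually handled by interpolating between the identity matrix and a full-severity matrix.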

  2. Robotic space simulation integration of vision algorithms into an orbital operations simulation

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1987-01-01

    In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.

  3. Vision rehabilitation in the case of blindness.

    PubMed

    Veraart, Claude; Duret, Florence; Brelén, Marten; Oozeer, Medhy; Delbeke, Jean

    2004-09-01

    This article examines the various vision rehabilitation procedures that are available for early and late blindness. Depending on the pathology involved, several vision rehabilitation procedures exist, or are in development. Visual aids are available for low vision individuals, as are sensory aids for blind persons. Most noninvasive sensory substitution prostheses as well as implanted visual prostheses in development are reviewed. Issues dealing with vision rehabilitation are also discussed, such as problems of biocompatibility, electrical safety, psychosocial aspects, and ethics. Basic studies devoted to vision rehabilitation such as simulation in mathematical models and simulation of artificial vision are also presented. Finally, the importance of accurate rehabilitation assessment is addressed, and tentative market figures are given.

  4. Sensitivity of diabetic retinopathy associated vision loss to screening interval in an agent-based/discrete event simulation model.

    PubMed

    Day, T Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann

    2014-04-01

    To examine the effect of changes to screening interval on the incidence of vision loss in a simulated cohort of Veterans with diabetic retinopathy (DR). This simulation allows us to examine potential interventions without putting patients at risk. Simulated randomized controlled trial. We develop a hybrid agent-based/discrete event simulation which combines a population of simulated Veterans--using abstracted data from a retrospective cohort of real-world diabetic Veterans--with a discrete event simulation (DES) eye clinic at which they seek treatment for DR. We compare vision loss under varying screening policies in a simulated population of 5000 Veterans over 50 independent ten-year simulation runs for each group. Diabetic retinopathy associated vision loss increased as the screening interval was extended from one to five years (p<0.0001). This increase was concentrated in the third year of the screening interval (p<0.01). There was no increase in vision loss associated with increasing the screening interval from one year to two years (p=0.98). Increasing the screening interval for diabetic patients who have not yet developed diabetic retinopathy from 1 to 2 years appears safe, while increasing the interval to 3 years heightens the risk of vision loss. Published by Elsevier Ltd.
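
The paper's hybrid agent-based/DES model is far richer, but the screening-interval mechanism it probes can be illustrated with a toy Monte Carlo cohort. All numbers below (stage count, yearly progression probability, treatment rule) are hypothetical placeholders, not the study's parameters:

```python
# Toy Monte Carlo sketch of the screening-interval question. This is NOT the
# paper's model: stages, probabilities, and the treatment rule are invented
# purely to show how a longer interval lets disease pass the treatable window.
import random

P_PROGRESS = 0.15    # hypothetical yearly chance that DR advances one stage
TREATABLE_STAGE = 3  # screening at this stage still prevents vision loss
BLIND_STAGE = 4      # progressing here untreated counts as vision loss

def simulate_cohort(n_patients, interval_years, years=10, seed=1):
    """Return the fraction of patients with vision loss over the horizon."""
    rng = random.Random(seed)
    vision_loss = 0
    for _ in range(n_patients):
        stage, treated = 0, False
        for year in range(1, years + 1):
            if not treated and rng.random() < P_PROGRESS:
                stage += 1
            if year % interval_years == 0 and TREATABLE_STAGE <= stage < BLIND_STAGE:
                treated = True  # screening caught the disease in time
            if stage >= BLIND_STAGE and not treated:
                vision_loss += 1
                break
    return vision_loss / n_patients

annual = simulate_cohort(5000, 1)     # yearly screening
five_year = simulate_cohort(5000, 5)  # screening every five years
```

With yearly screening, disease advancing at most one stage per year is always caught at the treatable stage; with a five-year interval, some simulated patients progress past it between screens.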

  5. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely by analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
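
At the core of any such ray tracer is refraction at each ocular interface. A minimal sketch of Snell's law in vector form — the indices and geometry here are illustrative, not the paper's anatomical eye model:

```python
# Core operation of an eye-optics ray tracer: refraction at an interface via
# Snell's law in vector form. Indices and geometry are illustrative only.
import math

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    against the incoming ray), from medium n1 into medium n2.
    Returns the refracted unit direction, or None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    coef = eta * cos_i - math.sqrt(k)
    return tuple(eta * di + coef * ni for di, ni in zip(d, n))

# A ray hitting a cornea-like interface (air n=1.0 into an aqueous-like
# medium n=1.336) at 30 degrees incidence bends toward the normal:
d_in = (math.sin(math.radians(30)), 0.0, -math.cos(math.radians(30)))
d_out = refract(d_in, (0.0, 0.0, 1.0), 1.0, 1.336)
```

A full simulator repeats this at every surface (cornea, lens front and back) for millions of rays, which is why the paper moves the computation to the GPU.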

  6. Defense Simulation Internet: next generation information highway.

    PubMed

    Lilienthal, M G

    1995-06-01

    The Department of Defense has been engaged in the Defense Modeling and Simulation Initiative (DMSI) to provide advanced distributed simulation to warfighters in geographically distributed localities. Lessons learned from the Defense Simulation Internet (DSI) concerning architecture, standards, protocols, interoperability, information sharing, and distributed databases are equally applicable to telemedicine. Much of the vision and objectives of the DMSI are easily translated into a vision for worldwide telemedicine.

  7. Application of digital human modeling and simulation for vision analysis of pilots in a jet aircraft: a case study.

    PubMed

    Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati

    2012-01-01

    Ergonomic evaluation of visual demands becomes crucial for operators/users when rapid decision making is needed under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of pilots' vision in a jet aircraft in a virtual environment, to demonstrate how the vision analysis tools of digital human modeling software can be used effectively for such studies. Three dynamic digital pilot models, representative of the smallest, average, and largest Indian pilot populations, were generated from an anthropometric database and interfaced with a digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools like view cones, eye view windows, blind spot areas, obscuration zones, and reflection zones were employed during evaluation of the visual fields. A vision analysis tool was also used to study kinematic changes of the pilot's body joints during a simulated gazing activity. From the present study, it can be concluded that the vision analysis tools of digital human modeling software are very effective for evaluating the position and alignment of different displays and controls in a workstation, based upon their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.

  8. Multiscale Issues and Simulation-Based Science and Engineering for Materials-by-Design

    DTIC Science & Technology

    2010-05-15

    ...planning and execution of programs to achieve the vision of "materials-by-design". A key part of this effort has been to examine modeling at the mesoscale. Subject terms: Modelling & Simulation, Materials Design.

  9. Hierarchical Modelling Of Mobile, Seeing Robots

    NASA Astrophysics Data System (ADS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-03-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  10. Hierarchical modelling of mobile, seeing robots

    NASA Technical Reports Server (NTRS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-01-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  11. Enhanced modeling and simulation of EO/IR sensor systems

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Miller, Brian; May, Christopher

    2015-05-01

    The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed and modeled in NV-IPM, and then seamlessly input into wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. This measurement-to-high-fidelity-modeling-and-simulation process can then be repeated throughout the entire life cycle of an EO/IR sensor as needed, including LRIP, full-rate production, and even after depot-level maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.

  12. Modeling and Simulation of Microelectrode-Retina Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckerman, M

    2002-11-30

    The goal of the retinal prosthesis project is the development of an implantable microelectrode array that can be used to supply visually-driven electrical input to cells in the retina, bypassing nonfunctional rod and cone cells, thereby restoring vision to blind individuals. This goal will be achieved through the study of the fundamentals of electrical engineering, vision research, and biomedical engineering, with the aim of acquiring the knowledge needed to engineer a high-density microelectrode-tissue hybrid sensor that will restore vision to millions of blind persons. The modeling and simulation task within this project is intended to address the question of how best to stimulate, and communicate with, cells in the retina using implanted microelectrodes.

  13. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise, and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  14. [A simulation study with finite element model on the unequal loss of peripheral vision caused by acceleration].

    PubMed

    Geng, Xiaoqi; Liu, Xiaoyu; Liu, Songyang; Xu, Yan; Zhao, Xianliang; Wang, Jie; Fan, Yubo

    2017-04-01

    An unequal loss of peripheral vision may happen under sustained high multi-axis acceleration, leading to a great potential flight safety hazard. In the present research, the finite element method was used to study the mechanism of unequal loss of peripheral vision. Firstly, a 3D geometric model of the skull was developed based on adult computed tomography (CT) images. The model of both eyes was created by mirroring the previous right-eye model. Then, the double-eye model was matched to the skull model, and fat was filled between the eyeballs and the skull. Acceleration loads in the head-to-foot (Gz), right-to-left (Gy), chest-to-back (Gx), and multi-axis directions were applied to the current model to simulate the dynamic response of the retina by explicit dynamics solution. The results showed that the relative strain between the two eyes differed by 25.7% under the multi-axis acceleration load. Moreover, the strain distributions showed a significant difference among acceleration loads in different directions. This indicates that a finite element model of both eyes is an effective means to study the mechanism of unequal loss of peripheral vision under sustained high multi-axis acceleration.

  15. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  16. Visual acuity estimation from simulated images

    NASA Astrophysics Data System (ADS)

    Duncan, William J.

    Simulated images can provide insight into the performance of optical systems, especially those with complicated features. Many modern solutions for presbyopia and cataracts feature sophisticated power geometries or diffractive elements. Some intraocular lenses (IOLs) arrive at multifocality through the use of a diffractive surface, and multifocal contact lenses have a radially varying power profile. These types of elements induce simultaneous vision and affect vision very differently than a monofocal ophthalmic appliance does. With myriad multifocal ophthalmics available on the market, it is difficult to compare or assess performance in ways that affect wearers of such appliances. Here we present software and algorithmic metrics that can be used to qualitatively and quantitatively compare ophthalmic element performance, with specific examples of bifocal intraocular lenses (IOLs) and multifocal contact lenses. We anticipate that this study, its methods, and its results will serve as a starting point for more complex models of vision and visual acuity in settings where modeling is advantageous. Generating simulated images of real-scene scenarios is useful for patients in assessing vision quality with a certain appliance. Visual acuity estimation can serve as an important tool for the manufacturing and design of ophthalmic appliances.

  17. Modeling and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanham, R.; Vogt, W.G.; Mickle, M.H.

    1986-01-01

    This book presents the papers given at a conference on computerized simulation. Topics considered at the conference included expert systems, modeling in electric power systems, power systems operating strategies, energy analysis, a linear programming approach to optimum load shedding in transmission systems, econometrics, simulation in natural gas engineering, solar energy studies, artificial intelligence, vision systems, hydrology, multiprocessors, and flow models.

  18. Individual vision and peak distribution in collective actions

    NASA Astrophysics Data System (ADS)

    Lu, Peng

    2017-06-01

    People make decisions on whether they should participate as participants or not as free riders in collective actions with heterogeneous visions. In addition to utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e. the revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The outcomes of simulations indicate that vision heterogeneity reduces the values of peaks, and the relative variance of peaks is stable. Under normal distributions of vision heterogeneity and other factors, the peaks of participants are normally distributed as well. Therefore, it is necessary to predict the distribution traits of peaks based on the distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with parameters of both mean and standard deviation, which provides confidence intervals and robust predictions of peaks. Finally, we validate the peak model via the Yuyuan Incident, a real case in China (2014); the model works well in explaining the dynamics and predicting the peak of the real case.

  19. Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects

    NASA Technical Reports Server (NTRS)

    Montes, Leticia; Bowers, David; Lumia, Ron

    1998-01-01

    This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, un-modeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision system and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment which provides remote simulation and control of equipment.

  20. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on these principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. This implies an important paradigm shift in our understanding of the brain, from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region; it uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplained results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  1. Accounting for standard errors of vision-specific latent trait in regression models.

    PubMed

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of the Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and multiple linear regression model for the assessment of the association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; in our simulation study its effectiveness was compared with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration SEs of the estimated Rasch-scaled scores. The two models on real data differed for effect size estimations and the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (average of 5-fold decrease in bias) with comparable power and precision in estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Patient-reported data, using Rasch analysis techniques, do not take into account the SE of the latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
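
The bias the authors describe is the classical errors-in-variables attenuation: regressing on a noisy latent-trait estimate shrinks the slope toward zero by the reliability ratio. A toy simulation (hypothetical numbers, ordinary least squares in place of the paper's HB machinery) shows the effect:

```python
# Errors-in-variables attenuation: regressing on a noisy estimate of a latent
# trait shrinks the slope by var(theta) / (var(theta) + SE^2). Hypothetical
# numbers; OLS stands in for the paper's Hierarchical Bayesian joint model.
import random

def fit_slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

rng = random.Random(0)
true_slope, se_latent = 0.8, 1.0
theta = [rng.gauss(0, 1) for _ in range(20000)]           # true latent trait
y = [true_slope * t + rng.gauss(0, 0.5) for t in theta]   # outcome
theta_hat = [t + rng.gauss(0, se_latent) for t in theta]  # estimate with SE

naive = fit_slope(theta_hat, y)  # two-stage analysis, SE of trait ignored
# With var(theta)=1 and SE=1, the expected naive slope is 0.8 * 0.5 = 0.4.
```

A joint model that carries the SE of each trait estimate through to the regression removes this attenuation, which is the gain the abstract reports.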

  2. Perceptual learning in a non-human primate model of artificial vision

    PubMed Central

    Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.

    2016-01-01

    Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058

  3. The Hunter-Killer Model, Version 2.0. User’s Manual.

    DTIC Science & Technology

    1986-12-01

    Contract No. DAAK21-85-C-0058. Prepared for the Center for Night Vision and Electro-Optics (DELNV-V), Fort Belvoir, Virginia 22060. Inquiries concerning the Hunter-Killer Model or the Hunter-Killer Database System should be addressed to the Night Vision and Electro-Optics Center. The model is designed and constructed to study the performance of electro-optic sensor systems in a combat scenario; it simulates a two-sided battle.

  4. Landmark navigation and autonomous landing approach with obstacle detection for aircraft

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.

    1997-06-01

    A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors like gyros, accelerometers, an artificial horizon, aerodynamic measuring devices, and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined, as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark should be focused on by the vision system, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g. due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.

  5. A lightweight, inexpensive robotic system for insect vision.

    PubMed

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
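
The optic-flow comparison the abstract mentions rests on the brightness-constancy principle. A generic 1D gradient-based estimator sketches the idea — this is not the authors' implementation, and real systems use 2D variants such as Lucas-Kanade:

```python
# Generic 1D gradient-based optic-flow estimator. Brightness constancy gives
# Ix*u + It = 0; we solve for a single shift u by least squares over pixels.
import math

def flow_1d(frame0, frame1):
    """Estimate a single horizontal shift (pixels per frame) between frames."""
    num = den = 0.0
    for x in range(1, len(frame0) - 1):
        ix = (frame0[x + 1] - frame0[x - 1]) / 2.0  # spatial gradient
        it = frame1[x] - frame0[x]                  # temporal gradient
        num += ix * it
        den += ix * ix
    return -num / den

# A smooth signal shifted right by one pixel should give u close to +1:
f0 = [math.sin(0.2 * x) for x in range(200)]
f1 = [math.sin(0.2 * (x - 1)) for x in range(200)]
u = flow_1d(f0, f1)
```

In an insect-vision pipeline, such flow estimates computed across the visual field feed behaviors like corridor centering and landing.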

  6. VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern

    2009-08-01

    The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating “what if” scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intendedmore » as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., “reactor types” not individual reactors and “separation types” not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separation of disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU are designated as wastes. 
VISION comprises several Microsoft Excel input files, a Powersim Studio core, and several Microsoft Excel output files. All must be co-located in the same folder on a PC to function. We use Microsoft Excel 2003 and have not tested VISION with Microsoft Excel 2007. The VISION team uses both Powersim Studio 2005 and 2009, and the model should work with either.
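The material flows described above can be sketched as a simple stock-and-flow time-step model. This is an illustrative reduction, not VISION itself (which is a Powersim Studio system-dynamics model with far more structure); all stocks, rates, and the waste split below are hypothetical placeholders.

```python
# Minimal stock-and-flow sketch of the fuel-cycle flows described above.
# All rates, initial stocks, and split fractions are hypothetical.

def step(stocks, rates):
    """Advance the simplified fuel cycle by one time step (e.g. one year)."""
    s = dict(stocks)
    # Natural uranium -> enrichment -> enriched U (to fabrication) + DU (to storage).
    enriched = min(s["natural_u"], rates["enrichment"])
    s["natural_u"] -= enriched
    s["enriched_u"] += enriched * rates["product_fraction"]
    s["depleted_u"] += enriched * (1.0 - rates["product_fraction"])
    # Fuel fabrication feeds reactors; discharged fuel enters a storage buffer.
    fabricated = min(s["enriched_u"], rates["fabrication"])
    s["enriched_u"] -= fabricated
    s["in_reactor"] += fabricated
    discharged = min(s["in_reactor"], rates["discharge"])
    s["in_reactor"] -= discharged
    s["used_fuel_storage"] += discharged
    # Used fuel pulled into separations is partitioned into products and waste.
    separated = min(s["used_fuel_storage"], rates["separation"])
    s["used_fuel_storage"] -= separated
    s["recycled_products"] += separated * 0.95   # fuel products + recovered U
    s["waste"] += separated * 0.05               # hypothetical waste fraction
    return s

stocks = {"natural_u": 1000.0, "enriched_u": 0.0, "depleted_u": 0.0,
          "in_reactor": 0.0, "used_fuel_storage": 0.0,
          "recycled_products": 0.0, "waste": 0.0}
rates = {"enrichment": 100.0, "product_fraction": 0.15,
         "fabrication": 10.0, "discharge": 5.0, "separation": 2.0}

for _ in range(10):
    stocks = step(stocks, rates)
```

Because every flow moves mass from one stock to another, total mass is conserved, which is a useful sanity check for any such system-level simulation.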

  7. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper reviews the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data form, while other systems feed raw data directly into the networks. We present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Prospects for applications of some human vision models and neural network models are analyzed.

  8. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).

  9. Low, slow, small target recognition based on spatial vision network

    NASA Astrophysics Data System (ADS)

    Cheng, Zhao; Guo, Pei; Qi, Xin

    2018-03-01

Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitored area, this approach requires many cameras, which produces overlapping coverage and drives up cost. To reduce monitoring cost and address the difficult problem of finding, identifying, and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation results demonstrate that the proposed method performs well.

  10. An early underwater artificial vision model in ocean investigations via independent component analysis.

    PubMed

    Nian, Rui; Liu, Fang; He, Bo

    2013-07-16

Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework is established to explore and understand the functional roles of the higher-order statistical structures of the visual stimulus in an underwater artificial vision system. The model is inspired by characteristics of the early human vision system such as modality, redundancy reduction, sparseness, and independence, and its multiple layers capture, respectively, Gabor-like basis functions, shape contours, and complicated textures. The simulation results show the effectiveness and consistency of the proposed approach on underwater images collected by autonomous underwater vehicles (AUVs).

  11. An Early Underwater Artificial Vision Model in Ocean Investigations via Independent Component Analysis

    PubMed Central

    Nian, Rui; Liu, Fang; He, Bo

    2013-01-01

Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework is established to explore and understand the functional roles of the higher-order statistical structures of the visual stimulus in an underwater artificial vision system. The model is inspired by characteristics of the early human vision system such as modality, redundancy reduction, sparseness, and independence, and its multiple layers capture, respectively, Gabor-like basis functions, shape contours, and complicated textures. The simulation results show the effectiveness and consistency of the proposed approach on underwater images collected by autonomous underwater vehicles (AUVs). PMID:23863855

  12. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

The concept and design of a dynamic test platform for development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as the structure, driving linkages, and motors and transmissions.

  13. Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    NASA Technical Reports Server (NTRS)

    Liu, Xuan; Furrer, David; Kosters, Jared; Holmes, Jack

    2018-01-01

Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, called Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations). It constitutes a community consensus document, as it results from the input of over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS); 2) a community-wide survey; and 3) the establishment of nine expert panels (one per KE), each consisting on average of 10 non-team members from academia, government, and industry, which reviewed and updated content and prioritized gaps and actions.
The study envisions the development of a cyber-physical-social ecosystem composed of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focused on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive and biomedical) can benefit as well from the proposed framework with only minor modifications. Finally, it is TTT's hope that this vision provides the strategic guidance both public and private research and development decision makers need to make the proposed 2040 vision state a reality and thereby significantly advance the United States' global competitiveness.

  14. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  15. Physics-based simulations of aerial attacks by peregrine falcons reveal that stooping at high speed maximizes catch success against agile prey.

    PubMed

    Mills, Robin; Hildenbrandt, Hanno; Taylor, Graham K; Hemelrijk, Charlotte K

    2018-04-01

    The peregrine falcon Falco peregrinus is renowned for attacking its prey from high altitude in a fast controlled dive called a stoop. Many other raptors employ a similar mode of attack, but the functional benefits of stooping remain obscure. Here we investigate whether, when, and why stooping promotes catch success, using a three-dimensional, agent-based modeling approach to simulate attacks of falcons on aerial prey. We simulate avian flapping and gliding flight using an analytical quasi-steady model of the aerodynamic forces and moments, parametrized by empirical measurements of flight morphology. The model-birds' flight control inputs are commanded by their guidance system, comprising a phenomenological model of its vision, guidance, and control. To intercept its prey, model-falcons use the same guidance law as missiles (pure proportional navigation); this assumption is corroborated by empirical data on peregrine falcons hunting lures. We parametrically vary the falcon's starting position relative to its prey, together with the feedback gain of its guidance loop, under differing assumptions regarding its errors and delay in vision and control, and for three different patterns of prey motion. We find that, when the prey maneuvers erratically, high-altitude stoops increase catch success compared to low-altitude attacks, but only if the falcon's guidance law is appropriately tuned, and only given a high degree of precision in vision and control. Remarkably, the optimal tuning of the guidance law in our simulations coincides closely with what has been observed empirically in peregrines. High-altitude stoops are shown to be beneficial because their high airspeed enables production of higher aerodynamic forces for maneuvering, and facilitates higher roll agility as the wings are tucked, each of which is essential to catching maneuvering prey at realistic response delays.
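The guidance law named above, pure proportional navigation (PN), commands lateral acceleration proportional to the closing velocity times the rotation rate of the line of sight (LOS). A minimal 2D point-mass sketch, with illustrative speeds, gain, and geometry rather than the paper's falcon parameters:

```python
import math

# 2D point-mass sketch of pure proportional navigation: commanded lateral
# acceleration a = N * Vc * (LOS rate). Speeds, gain, and the crude
# acceleration clamp are illustrative, not the paper's falcon model.

def simulate_pn(N=3.0, dt=0.01, steps=5000):
    px, py, pvx, pvy = 0.0, 0.0, 60.0, 0.0        # pursuer (faster than target)
    tx, ty, tvx, tvy = 300.0, 200.0, 20.0, 0.0    # target flies straight
    prev_los = math.atan2(ty - py, tx - px)
    rmin = math.hypot(tx - px, ty - py)
    for _ in range(steps):
        rx, ry = tx - px, ty - py
        r = math.hypot(rx, ry)
        rmin = min(rmin, r)
        if r < 2.0:                                # capture radius
            return True, rmin
        los = math.atan2(ry, rx)
        los_rate = (los - prev_los) / dt
        prev_los = los
        # Closing velocity: negative range rate along the LOS.
        vc = -((rx * (tvx - pvx) + ry * (tvy - pvy)) / r)
        a = max(-200.0, min(200.0, N * vc * los_rate))  # crude actuator limit
        # Apply acceleration perpendicular to the pursuer's velocity.
        v = math.hypot(pvx, pvy)
        nx, ny = -pvy / v, pvx / v
        pvx += a * nx * dt
        pvy += a * ny * dt
        px += pvx * dt
        py += pvy * dt
        tx += tvx * dt
        ty += tvy * dt
    return False, rmin

caught, rmin = simulate_pn()
```

PN steers so as to null the LOS rotation rate, driving the engagement onto a collision triangle; the navigation gain N plays the role of the "feedback gain of the guidance loop" varied in the paper.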

  16. Laparoscopic lens fogging: solving a common surgical problem in standard and robotic laparoscopes via a scientific model.

    PubMed

    Manning, Todd G; Papa, Nathan; Perera, Marlon; McGrath, Shannon; Christidis, Daniel; Khan, Munad; O'Beirne, Richard; Campbell, Nicholas; Bolton, Damien; Lawrentschuk, Nathan

    2018-03-01

Laparoscopic lens fogging (LLF) hampers vision and impedes operative efficiency. Attempts to reduce LLF have led to the development of various anti-fogging fluids and warming devices, but limited literature exists directly comparing these techniques. We constructed a model peritoneum to simulate LLF and to compare the efficacy of various anti-fogging techniques. Intraperitoneal space was simulated using a suction bag suspended within an 8 L container of water. LLF was induced by varying the temperature and humidity within the model peritoneum. Various anti-fogging techniques were assessed, including scope warmers, FRED™, Resoclear™, chlorhexidine, betadine, and immersion in heated saline. These products were trialled with and without the use of a disposable scope warmer. Vision scores were evaluated by the same investigator for all tests and rated according to a predetermined scale. Fogging was assessed for each product or technique 30 times and a mean vision rating was recorded. All products tested imparted some benefit, but FRED™ performed better than all other techniques. Betadine and Resoclear™ performed no better than the use of a scope warmer alone. Immersion in saline prior to insertion resulted in decreased vision ratings. The robotic scope did not result in LLF within the model. In standard laparoscopes, the most effective preventative measure was FRED™ utilised on a pre-warmed scope. The robotic laparoscope performed superiorly regarding LLF compared to the standard laparoscope.

  17. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  18. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  19. Vision system and three-dimensional modeling techniques for quantification of the morphology of irregular particles

    NASA Astrophysics Data System (ADS)

    Smith, Lyndon N.; Smith, Melvyn L.

    2000-10-01

    Particulate materials undergo processing in many industries, and therefore there are significant commercial motivators for attaining improvements in the flow and packing behavior of powders. This can be achieved by modeling the effects of particle size, friction, and most importantly, particle shape or morphology. The method presented here for simulating powders employs a random number generator to construct a model of a random particle by combining a sphere with a number of smaller spheres. The resulting 3D model particle has a nodular type of morphology, which is similar to that exhibited by the atomized powders that are used in the bulk of powder metallurgy (PM) manufacture. The irregularity of the model particles is dependent upon vision system data gathered from microscopic analysis of real powder particles. A methodology is proposed whereby randomly generated model particles of various sized and irregularities can be combined in a random packing simulation. The proposed Monte Carlo technique would allow incorporation of the effects of gravity, wall friction, and inter-particle friction. The improvements in simulation realism that this method is expected to provide would prove useful for controlling powder production, and for predicting die fill behavior during the production of PM parts.
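The particle-construction step described above can be sketched directly: a base sphere with smaller spheres attached at random surface points gives the nodular morphology. The base radius, nodule count, and size range below are illustrative stand-ins for the distributions that would be fitted from vision-system measurements of real powder.

```python
import math
import random

# Sketch of the random nodular-particle construction described above.
# Radii and counts are illustrative placeholders.

def random_unit_vector(rng):
    """Uniform direction on the unit sphere (Archimedes' method)."""
    z = rng.uniform(-1.0, 1.0)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(theta), s * math.sin(theta), z)

def make_particle(base_radius=1.0, n_nodules=8, nodule_scale=(0.2, 0.4), seed=0):
    """Return a list of (center, radius) spheres forming one model particle."""
    rng = random.Random(seed)
    spheres = [((0.0, 0.0, 0.0), base_radius)]
    for _ in range(n_nodules):
        r = rng.uniform(*nodule_scale) * base_radius
        ux, uy, uz = random_unit_vector(rng)
        # Center each nodule on the base sphere's surface so the spheres
        # overlap and form a single connected body.
        spheres.append(((base_radius * ux, base_radius * uy, base_radius * uz), r))
    return spheres

def max_extent(spheres):
    """Radius of the smallest origin-centered sphere containing the particle."""
    return max(math.hypot(*c) + r for c, r in spheres)

particle = make_particle()
```

A packing simulation of the kind proposed would then drop many such particles into a die volume and resolve sphere-sphere contacts, which is cheap because each particle is just a union of spheres.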

  20. An Undergraduate Laboratory Activity on Molecular Dynamics Simulations

    ERIC Educational Resources Information Center

    Spitznagel, Benjamin; Pritchett, Paige R.; Messina, Troy C.; Goadrich, Mark; Rodriguez, Juan

    2016-01-01

    Vision and Change [AAAS, 2011] outlines a blueprint for modernizing biology education by addressing conceptual understanding of key concepts, such as the relationship between structure and function. The document also highlights skills necessary for student success in 21st century Biology, such as the use of modeling and simulation. Here we…

  1. Quasi-eccentricity error modeling and compensation in vision metrology

    NASA Astrophysics Data System (ADS)

    Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin

    2018-04-01

Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main factors of measurement error which needs to be compensated in high-accuracy measurement. In this study, the impact of the lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error turns into a quasi-eccentricity error in the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point to the true projection of the circle center. Both simulation and real experiments confirm the effectiveness of the proposed method in several vision applications.
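The underlying eccentricity error is easy to demonstrate numerically: project a tilted circle through an ideal pinhole camera (no lens distortion, unlike the paper's non-linear model) and compare the centroid of the projected boundary, a crude stand-in for a fitted ellipse center, with the true projection of the circle center. All geometry values below are illustrative.

```python
import math

# Numerical illustration of the eccentricity error: under perspective
# projection the apparent center of a projected circle is displaced from
# the true projection of the circle's center. Idealized pinhole camera;
# focal length, circle pose, and radius are illustrative.

def project(p, f=800.0):
    """Pinhole projection of a 3D point (x, y, z) onto the image plane."""
    x, y, z = p
    return (f * x / z, f * y / z)

def circle_point(center, radius, tilt, t):
    """Point at angle t on a circle tilted about the x-axis by `tilt`."""
    cx, cy, cz = center
    u = radius * math.cos(t)
    v = radius * math.sin(t)
    return (cx + u, cy + v * math.cos(tilt), cz + v * math.sin(tilt))

center = (0.1, 0.1, 2.0)                 # circle center in camera frame (m)
radius, tilt = 0.05, math.radians(40.0)

boundary = [project(circle_point(center, radius, tilt, 2 * math.pi * k / 360))
            for k in range(360)]
apparent_center = (sum(u for u, _ in boundary) / len(boundary),
                   sum(v for _, v in boundary) / len(boundary))
true_center = project(center)

eccentricity_error = math.hypot(apparent_center[0] - true_center[0],
                                apparent_center[1] - true_center[1])
```

The displacement is a fraction of a pixel here, which is negligible for coarse detection but significant at the accuracy levels targeted by vision metrology, hence the iterative compensation proposed in the paper.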

  2. Collaboration between human and nonhuman players in Night Vision Tactical Trainer-Shadow

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Gallogly, James J.

    2016-05-01

The Night Vision Tactical Trainer - Shadow (NVTT-S) is a U.S. Army-developed training tool designed to improve critical Manned-Unmanned Teaming (MUMT) communication skills for payload operators in Unmanned Aerial Sensor (UAS) crews. The trainer is composed of several Government Off-The-Shelf (GOTS) simulation components and takes the trainee through a series of escalating engagements using tactically relevant, realistically complex scenarios involving a variety of manned, unmanned, aerial, and ground-based assets. The trainee is the only human player in the game and must collaborate, from a web-based mock operating station, with various non-human players via spoken natural language over simulated radio in order to execute the training missions successfully. Non-human players are modeled in two complementary layers: OneSAF provides basic background behaviors for entities, while NVTT provides higher-level models that control entity actions based on intent extracted from the trainee's spoken natural dialog with game entities. Dialog structure is modeled on Army standards for communication and verbal protocols. This paper presents an architecture that integrates the U.S. Army's Night Vision Image Generator (NVIG), One Semi-Automated Forces (OneSAF), a flight dynamics model, and Commercial Off-The-Shelf (COTS) speech recognition and text-to-speech products to effect an environment with sufficient entity counts and fidelity to enable meaningful teaching and reinforcement of critical communication skills. It further demonstrates the model dynamics and synchronization mechanisms employed to execute purpose-built training scenarios, and to achieve ad-hoc collaboration on-the-fly between human and non-human players in the simulated environment.

  3. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

Building on the object-information processing mechanism of the frog's eye, this paper discusses a bionic detection technology suitable for object information processing based on frog vision. First, a bionic detection theory imitating frog vision is established: a parallel processing mechanism comprising capture and preprocessing of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a particular color and shape; experiments indicate that such objects can be detected even against an interfering background. A moving-object detection electronic model imitating biological vision, based on the frog's eye, is then established. In this system the video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing stage, video information can be captured, processed, and displayed simultaneously; information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can cover a larger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm indicate that this system can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology that imitates biological vision.

  4. Coupling sensing to crop models for closed-loop plant production in advanced life support systems

    NASA Astrophysics Data System (ADS)

    Cavazzoni, James; Ling, Peter P.

    1999-01-01

We present a conceptual framework for coupling sensing to crop models for closed-loop analysis of plant production for NASA's program in advanced life support. Crop status may be monitored through non-destructive observations, while models may be independently applied to crop production planning and decision support. To achieve coupling, environmental variables and observations are linked to model inputs and outputs, and monitoring results are compared with model predictions of plant growth and development. The information thus provided may be useful in diagnosing problems with the plant growth system, or as feedback to the model for evaluation of plant scheduling and potential yield. In this paper, we demonstrate this coupling using machine vision sensing of canopy height and top projected canopy area, and the CROPGRO crop growth model. Model simulations and scenarios are used for illustration. We also compare model predictions of the machine vision variables with data from soybean experiments conducted at the New Jersey Agricultural Experiment Station Horticulture Greenhouse Facility, Rutgers University. Model simulations produce reasonable agreement with the available data, supporting our illustration.

  5. Development of a Spot-Application Tool for Rapid, High-Resolution Simulation of Wave-Driven Nearshore Hydrodynamics

    DTIC Science & Technology

    2013-09-30

flow models, such as Delft3D, with our developed Boussinesq-type model. The vision of this project is to develop an operational tool for the...situ measurements or large-scale wave models. This information will be used to drive the offshore wave boundary condition. • Execute the Boussinesq ...model to match with the Boussinesq-type theory would be one which can simulate sheared and stratified currents due to large-scale (non-wave) forcings

  6. A multiscale Markov random field model in wavelet domain for image segmentation

    NASA Astrophysics Data System (ADS)

    Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan

    2017-07-01

The human vision system performs feature detection, learning, and selective attention, organized hierarchically with bidirectional connections among neural populations. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some image-processing functions of the vision system. For an input scene, the model provides sparse representations using wavelet transforms and extracts topological organization using the MRF. The hierarchy of the vision system is simulated using a pyramid framework. There are two information flows in the model: a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled by just two pyramidal parameters, and some Gestalt laws are integrated implicitly. Equipped with these biologically inspired properties, the model can accomplish different image segmentation tasks, such as edge detection and region segmentation.
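The MRF energy minimization at the core of such a model can be illustrated with a toy two-label segmentation solved by iterated conditional modes (ICM); the wavelet pyramid and feedback loop of the paper's model are omitted, and the image, class means, and smoothness weight below are illustrative.

```python
import random

# Toy two-label MRF segmentation solved with ICM on a small noisy
# synthetic image: each pixel minimizes a data term plus a Potts
# smoothness term over its 4-neighbours. Parameters are illustrative.

W, H = 16, 16
rng = random.Random(1)

# Ground truth: left half label 0 (dark), right half label 1 (bright).
truth = [[0 if x < W // 2 else 1 for x in range(W)] for y in range(H)]
image = [[truth[y][x] + rng.gauss(0.0, 0.35) for x in range(W)] for y in range(H)]

MEANS = (0.0, 1.0)   # assumed class means (data term)
BETA = 1.5           # weight of the pairwise Potts smoothness term

def energy_at(labels, x, y, lab):
    """Local energy of assigning label `lab` to pixel (x, y)."""
    data = (image[y][x] - MEANS[lab]) ** 2
    smooth = 0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H and labels[ny][nx] != lab:
            smooth += 1
    return data + BETA * smooth

# Initialize by thresholding, then run a few ICM sweeps.
labels = [[0 if image[y][x] < 0.5 else 1 for x in range(W)] for y in range(H)]
for _ in range(5):
    for y in range(H):
        for x in range(W):
            labels[y][x] = min((0, 1), key=lambda l: energy_at(labels, x, y, l))

errors = sum(labels[y][x] != truth[y][x] for y in range(H) for x in range(W))
```

The smoothness term removes isolated misclassified pixels that a pure per-pixel threshold leaves behind, which is exactly the kind of topological regularization the MRF contributes in the paper's model.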

  7. Juno Mission Simulation

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Weidner, Richard J.

    2008-01-01

The Juno spacecraft is planned to launch in August of 2012 and would arrive at Jupiter four years later. The spacecraft would spend more than one year orbiting the planet and investigating the existence of an ice-rock core; determining the amount of global water and ammonia present in the atmosphere; studying convection and deep-wind profiles in the atmosphere; investigating the origin of the Jovian magnetic field; and exploring the polar magnetosphere. Juno mission management is responsible for mission and navigation design, mission operation planning, and ground-data-system development. In order to ensure successful mission management from initial checkout to final de-orbit, it is critical to share a common vision of the entire mission operation phases with the rest of the project teams. Two major challenges are 1) how to develop a shared vision that can be appreciated by all of the project teams of diverse disciplines and expertise, and 2) how to continuously evolve a shared vision as the project lifecycle progresses from formulation phase to operation phase. The Juno mission simulation team addresses these challenges by developing agile and progressive mission models, operation simulations, and real-time visualization products. This paper presents mission simulation visualization network (MSVN) technology that has enabled a comprehensive mission simulation suite (MSVN-Juno) for the Juno project.

  8. A modeled economic analysis of a digital tele-ophthalmology system as used by three federal health care agencies for detecting proliferative diabetic retinopathy.

    PubMed

    Whited, John D; Datta, Santanu K; Aiello, Lloyd M; Aiello, Lloyd P; Cavallerano, Jerry D; Conlin, Paul R; Horton, Mark B; Vigersky, Robert A; Poropatich, Ronald K; Challa, Pratap; Darkins, Adam W; Bursell, Sven-Erik

    2005-12-01

The objective of this study was to compare, using a 12-month time frame, the cost-effectiveness of a non-mydriatic digital tele-ophthalmology system (Joslin Vision Network) versus traditional clinic-based ophthalmoscopy examinations with pupil dilation to detect proliferative diabetic retinopathy and its consequences. Decision analysis techniques, including Monte Carlo simulation, were used to model the use of the Joslin Vision Network versus conventional clinic-based ophthalmoscopy among the entire diabetic populations served by the Indian Health Service, the Department of Veterans Affairs, and the active duty Department of Defense. The economic perspective analyzed was that of each federal agency. Data sources for costs and outcomes included the published literature, epidemiologic data, administrative data, market prices, and expert opinion. Outcome measures included the number of true positive cases of proliferative diabetic retinopathy detected, the number of patients treated with panretinal laser photocoagulation, and the number of cases of severe vision loss averted. In the base-case analyses, the Joslin Vision Network was the dominant strategy in all but two of the nine modeled scenarios, meaning that it was both less costly and more effective. In the active duty Department of Defense population, the Joslin Vision Network would be more effective but cost an extra $1,618 per additional patient treated with panretinal laser photocoagulation and an additional $13,748 per severe vision loss event averted. Based on our economic model, the Joslin Vision Network has the potential to be more effective than clinic-based ophthalmoscopy for detecting proliferative diabetic retinopathy and averting cases of severe vision loss, and may do so at lower cost.
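The Monte Carlo decision-analysis structure used in such studies can be sketched as follows. Every number here (prevalence, attendance rates, sensitivities, costs) is a made-up placeholder, not a value from the study; the sketch only shows how one strategy can emerge as "dominant" (cheaper and more effective) from this kind of simulation.

```python
import random

# Schematic Monte Carlo comparison of two screening strategies by total
# cost and true positives detected. All probabilities and costs are
# hypothetical placeholders; "tele" and "clinic" are generic stand-ins.

def simulate(strategy, n_patients=20000, seed=42):
    rng = random.Random(seed)
    prevalence = 0.05                              # hypothetical disease prevalence
    params = {"tele":   {"attend": 0.80, "sens": 0.90, "cost": 40.0},
              "clinic": {"attend": 0.50, "sens": 0.95, "cost": 90.0}}[strategy]
    cost = 0.0
    true_positives = 0
    for _ in range(n_patients):
        has_disease = rng.random() < prevalence
        if rng.random() < params["attend"]:        # patient actually gets screened
            cost += params["cost"]
            if has_disease and rng.random() < params["sens"]:
                true_positives += 1
    return cost, true_positives

tele_cost, tele_tp = simulate("tele")
clinic_cost, clinic_tp = simulate("clinic")
```

With these placeholder inputs the cheaper strategy also reaches more patients, so it detects more cases at lower total cost, the same dominance pattern reported for most of the study's scenarios.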

  9. Design and evaluation of an autonomous, obstacle avoiding, flight control system using visual sensors

    NASA Astrophysics Data System (ADS)

    Crawford, Bobby Grant

    In an effort to field smaller and cheaper Uninhabited Aerial Vehicles (UAVs), the Army has expressed an interest in an ability of the vehicle to autonomously detect and avoid obstacles. Current systems are not suitable for small aircraft. NASA Langley Research Center has developed a vision sensing system that uses small semiconductor cameras. The feasibility of using this sensor for the purpose of autonomous obstacle avoidance by a UAV is the focus of the research presented in this document. The vision sensor characteristics are modeled and incorporated into guidance and control algorithms designed to generate flight commands based on obstacle information received from the sensor. The system is evaluated by simulating the response to these flight commands using a six degree-of-freedom, non-linear simulation of a small, fixed wing UAV. The simulation is written using the MATLAB application and runs on a PC. Simulations were conducted to test the longitudinal and lateral capabilities of the flight control for a range of airspeeds, camera characteristics, and wind speeds. Results indicate that the control system is suitable for obstacle avoiding flight control using the simulated vision system. In addition, a method for designing and evaluating the performance of such a system has been developed that allows the user to easily change component characteristics and evaluate new systems through simulation.

  10. Toothguide Trainer tests with color vision deficiency simulation monitor.

    PubMed

    Borbély, Judit; Varsányi, Balázs; Fejérdy, Pál; Hermann, Péter; Jakstat, Holger A

    2010-01-01

    The aim of this study was to evaluate whether simulated severe red and green color vision deficiency (CVD) influenced color matching results and to investigate whether training with the Toothguide Trainer (TT) computer program enabled better color matching results. A total of 31 color-normal dental students participated in the study. Every participant had to pass the Ishihara Test; participants with a red/green color vision deficiency were excluded. A lecture on tooth color matching was given, and individual training with TT was performed. To measure individual tooth color matching results in normal and color-deficient display modes, the TT final exam was displayed on a calibrated monitor that served as a hardware-based method of simulating protanopy and deuteranopy. Data from the TT final exams were collected in normal and in severe red and green CVD-simulating monitor display modes. Color difference values for each participant in each display mode were computed (∑ΔE(ab)(*)), and the respective means and standard deviations were calculated. Student's t-test was used for statistical evaluation. Participants made larger ΔE(ab)(*) errors in the severe color vision deficient display modes than in the normal monitor mode. TT tests showed a significant (p<0.05) difference in the tooth color matching results of the severe green color vision deficiency simulation mode compared to the normal vision mode. Students' shade matching results were significantly better after training (p=0.009). The computer-simulated severe color vision deficiency modes resulted in significantly worse color matching quality compared to normal color vision mode, and the Toothguide Trainer computer program improved color matching results.

  11. Constructing an Educational Mars Simulation

    NASA Technical Reports Server (NTRS)

    Henke, Stephen A.

    2004-01-01

    On January 14, 2004, President George Bush announced his plan to propel the space program into a new era of space exploration and discovery. His vision encompasses a robotics program to explore our solar system, a return to the moon, the human exploration of Mars, and the promotion of international cooperation in these endeavors. We at NASA now have the task of realizing this vision within a very real timeframe. I have been chosen to begin phase 1 of making this vision a reality: creating an Educational Mars Simulation of human exploration of Mars to stimulate interest and involvement in the project from investors and the community. GRC's Computer Services Division (CSD), in collaboration with the Office of Education Programs, will be designing models, constructing terrain, and programming this simulation to create a realistic portrayal of human exploration on Mars. With recent and past technological breakthroughs in computing, my primary goal can be accomplished with the aid of only 3-4 software packages. Lightwave 3D is the modeling package we have selected for the creation of our digital objects, including a Mars pressurized rover, the rover cockpit, landscape/terrain, and a habitat. Once the models are completed they need to be textured, so Photoshop and Macromedia Fireworks are handy for bringing these objects to life. Before importing all of this data directly into a simulation environment, it is necessary to first render a stunning animation of the desired final product. This animation will represent what we hope to capture in the simulation, and it will include all of the accessories like ray tracing, fog effects, shadows, anti-aliasing, particle effects, volumetric lighting, and lens flares. Adobe Premiere will most likely be used for video editing and for adding ambient noise and music. Lastly, V-Tree is the real-time 3D graphics engine which will facilitate our realistic simulation.
Additional information is included in the original extended abstract.

  12. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    NASA Astrophysics Data System (ADS)

    Gai, Qiyang

    2018-01-01

    Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper adopts a method that combines the epipolar constraint with an ant colony algorithm. The epipolar constraint is used to reduce the search range, and an ant colony algorithm then optimizes the stereo matching feature search function within the reduced range. Through the establishment of an analysis model of the stereo matching optimization process of the ant colony algorithm, a globally optimal solution for stereo matching in 3D reconstruction based on binocular vision is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching search range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
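    As a minimal illustration of how the epipolar constraint collapses the 2D correspondence search to a single scanline, here is a Python sketch using sum-of-absolute-differences block matching on rectified images. The ant colony optimization layer of the paper is omitted, and the tiny images are hypothetical:

```python
def match_along_epipolar(left, right, row, col, half=1, max_disp=4):
    """For a pixel in the rectified left image, search only the same row of the
    right image (the epipolar line), scoring candidate disparities by the sum of
    absolute differences (SAD) over a small window. Returns the best disparity."""
    def window(img, r, c):
        return [img[r + dr][c + dc] for dr in range(-half, half + 1)
                                    for dc in range(-half, half + 1)]
    ref = window(left, row, col)
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if col - d - half < 0:
            break  # candidate window would fall off the image
        cost = sum(abs(a - b) for a, b in zip(ref, window(right, row, col - d)))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Demo: a 3x3 bright patch shifted two pixels between the rectified views
left  = [[9 if 1 <= r <= 3 and 4 <= c <= 6 else 0 for c in range(8)] for r in range(5)]
right = [[9 if 1 <= r <= 3 and 2 <= c <= 4 else 0 for c in range(8)] for r in range(5)]
print(match_along_epipolar(left, right, 2, 5))  # → 2
```

    In the paper's approach, the exhaustive scan over candidate disparities would be replaced by the ant colony search, but the epipolar restriction of the search range is the same.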

  13. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The ability of human brain to emulate similar graph/network models is found. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. Brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology naturally present in such structures. Mid-level vision processes like perceptual grouping, separation of figure from ground, are special kinds of network transformations. They convert primary image structure into the set of more abstract ones, which represent objects and visual scene, making them easy for analysis by higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similar to frames and agents. Computational intelligence methods transform images into model-based knowledge representation. Based on such principles, an Image/Video Understanding system can convert images into the knowledge models, and resolve uncertainty and ambiguity. This allows creating intelligent computer vision systems for design and manufacturing.

  14. Physics-based simulations of aerial attacks by peregrine falcons reveal that stooping at high speed maximizes catch success against agile prey

    PubMed Central

    Hildenbrandt, Hanno

    2018-01-01

    The peregrine falcon Falco peregrinus is renowned for attacking its prey from high altitude in a fast controlled dive called a stoop. Many other raptors employ a similar mode of attack, but the functional benefits of stooping remain obscure. Here we investigate whether, when, and why stooping promotes catch success, using a three-dimensional, agent-based modeling approach to simulate attacks of falcons on aerial prey. We simulate avian flapping and gliding flight using an analytical quasi-steady model of the aerodynamic forces and moments, parametrized by empirical measurements of flight morphology. The model-birds’ flight control inputs are commanded by their guidance system, comprising a phenomenological model of its vision, guidance, and control. To intercept its prey, model-falcons use the same guidance law as missiles (pure proportional navigation); this assumption is corroborated by empirical data on peregrine falcons hunting lures. We parametrically vary the falcon’s starting position relative to its prey, together with the feedback gain of its guidance loop, under differing assumptions regarding its errors and delay in vision and control, and for three different patterns of prey motion. We find that, when the prey maneuvers erratically, high-altitude stoops increase catch success compared to low-altitude attacks, but only if the falcon’s guidance law is appropriately tuned, and only given a high degree of precision in vision and control. Remarkably, the optimal tuning of the guidance law in our simulations coincides closely with what has been observed empirically in peregrines. High-altitude stoops are shown to be beneficial because their high airspeed enables production of higher aerodynamic forces for maneuvering, and facilitates higher roll agility as the wings are tucked, each of which is essential to catching maneuvering prey at realistic response delays. PMID:29649207
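    Pure proportional navigation, the guidance law the model-falcons share with missiles, turns the pursuer at a rate proportional to the rotation rate of the line of sight. A minimal 2D sketch, with constant speeds, hypothetical positions and gain, and no aerodynamics:

```python
import math

def pn_intercept(px, py, pvx, pvy, tx, ty, tvx, tvy, N=3.0, dt=0.01, steps=2000):
    """2D pure proportional navigation: rotate the pursuer's velocity at N times
    the line-of-sight (LOS) rotation rate; speed stays constant. Returns the
    closest approach distance over the simulated flight."""
    los_prev = math.atan2(ty - py, tx - px)
    closest = math.hypot(tx - px, ty - py)
    for _ in range(steps):
        px += pvx * dt; py += pvy * dt
        tx += tvx * dt; ty += tvy * dt
        los = math.atan2(ty - py, tx - px)
        turn = N * (los - los_prev)      # N * LOS-rate * dt
        los_prev = los
        c, s = math.cos(turn), math.sin(turn)
        pvx, pvy = c * pvx - s * pvy, s * pvx + c * pvy
        closest = min(closest, math.hypot(tx - px, ty - py))
    return closest

# Pursuer starts at the origin flying +x at 300 m/s; prey crosses ahead at 100 m/s.
print(pn_intercept(0, 0, 300, 0, 2000, 500, 0, 100))         # guided: near miss ~0
print(pn_intercept(0, 0, 300, 0, 2000, 500, 0, 100, N=0.0))  # unguided: large miss
```

    The paper's simulations additionally tune the feedback gain and inject sensory errors and delays; this sketch only shows the bare guidance law.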

  15. Detecting Motion from a Moving Platform; Phase 1: Biomimetic Vision Sensor

    DTIC Science & Technology

    2011-11-01

    The optical design software, Zemax, was used to explore various optical configurations that led to the optical front-ends of the hardware prototypes... and a Truly Curved Surface. 4.2 Modeling and Simulation: simulations were performed using both Zemax and MATLAB. In particular, the various... tradeoffs for light propagation through the front-end optics were investigated by simulating with Zemax, then building the physical optics for the best

  16. The Advanced Human Eye Model (AHEM): a personal binocular eye modeling system inclusive of refraction, diffraction, and scatter.

    PubMed

    Donnelly, William

    2008-11-01

    To present a commercially available software tool for creating eye models to assist the development of ophthalmic optics and instrumentation, simulate ailments or surgery-induced changes, explore vision research questions, and provide assistance to clinicians in planning treatment or analyzing clinical outcomes. A commercially available eye modeling system was developed, the Advanced Human Eye Model (AHEM). Two mainstream optical software engines, ZEMAX (ZEMAX Development Corp) and ASAP (Breault Research Organization), were each used to construct a similar software eye model, and the results were compared. The method of using the AHEM is described and various eye modeling scenarios are created. These scenarios consist of retinal imaging of targets and sources; optimization capability; spectacle, contact lens, and intraocular lens insertion and correction; Zernike surface deformation of the cornea; cataract simulation and scattering; a gradient-index lens; a binocular mode; a retinal implant; system import/export; and ray path exploration. Agreement between the two different optical software engines supported the validity of the AHEM's approach. Metrics and graphical data are generated from the various modeling scenarios according to their input specifications. The AHEM is a user-friendly, commercially available software tool from Breault Research Organization that can assist the design of ophthalmic optics and instrumentation, simulate ailments or refractive surgery-induced changes, answer vision research questions, and assist clinicians in planning treatment or analyzing clinical outcomes.

  17. The biomechanical significance of pulley on binocular vision.

    PubMed

    Guo, Hongmei; Gao, Zhipeng; Chen, Weiyi

    2016-12-28

    Pulleys have been reported to be the functional origins of the rectus extraocular muscles (EOMs). However, the biomechanical significance of pulleys for binocular vision has not been reported. Three eye movement models, i.e., a non-pulley model, a passive-pulley model, and an active-pulley model, are used to simulate horizontal movement of the eyes from the primary position to the left in the range of 1°-30°. The resultant forces of the six EOMs along two orthogonal directions (the x-axis and y-axis defined in this paper) in the horizontal plane are calculated using the three models. The resultant force along the y-axis of the left eye for the non-pulley model is significantly larger than those of the two pulley models. The difference in force between the left eye and the right eye is larger in the non-pulley model than in the two pulley models along both the x-axis and the y-axis. The pulley models thus present a greater biomechanical advantage for horizontal binocular vision than the non-pulley model. Combined with previous imaging evidence of pulleys, the results show that the pulley model coincides well with real physiological conditions.

  18. USC orthogonal multiprocessor for image processing with neural networks

    NASA Astrophysics Data System (ADS)

    Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid

    1990-07-01

    This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.

  19. How detrimental is eye movement during photorefractive keratectomy to the patient's postoperative vision?

    NASA Astrophysics Data System (ADS)

    Taylor, Natalie M.; van Saarloos, Paul P.; Eikelboom, Robert H.

    2000-06-01

    This study aimed to gauge the effect of patient eye movement during photorefractive keratectomy (PRK) on postoperative vision. A computer simulation of both the PRK procedure and the visual outcome was performed. The PRK simulation incorporated the pattern of movement of the laser beam performing a given correction, the beam characteristics, an initial corneal profile, and an eye movement scenario, and generated the corrected corneal profile. The regrowth of the epithelium was simulated by selecting the smoothing filter which, when applied to a corrected cornea with no patient eye movement, produced ray tracing results similar to those of the original corneal model. Ray tracing of several objects, such as letters of various contrasts and sizes, was performed to assess the quality of the postoperative vision. Eye movement scenarios included no eye movement, constant decentration, and normally distributed random eye movement of varying magnitudes. Random eye movement of even small amounts, such as 50 microns, reduces the contrast sensitivity of the image. Constant decentration decenters the projected image on the retina and, in extreme cases, can lead to astigmatism. Eye movements of the magnitude expected during laser refractive surgery have minimal effect on the final visual outcome.
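    One deterministic way to sketch the effect of normally distributed eye movement is to convolve an idealized parabolic ablation profile with the Gaussian displacement distribution and measure the deviation from the intended shape. The geometry, units, and profile here are illustrative assumptions, not the paper's simulation:

```python
import math

def smear(profile, sigma):
    """Expected delivered profile when each laser pulse is displaced by zero-mean
    Gaussian eye movement: convolve the ideal profile with the displacement pdf
    (weights renormalized at the edges of the grid)."""
    if sigma == 0:
        return profile[:]
    n = len(profile)
    out = []
    for i in range(n):
        w = [math.exp(-((i - j) ** 2) / (2 * sigma ** 2)) for j in range(n)]
        out.append(sum(wj * pj for wj, pj in zip(w, profile)) / sum(w))
    return out

def rms_error(jitter_um, spacing_um=10.0, radius_um=500.0):
    """RMS deviation of the blurred ablation profile from an intended parabolic
    (Munnerlyn-style) profile, in normalized depth units."""
    n = int(2 * radius_um / spacing_um) + 1
    c = n // 2
    ideal = [max(0.0, 1.0 - ((i - c) / c) ** 2) for i in range(n)]
    blurred = smear(ideal, jitter_um / spacing_um)
    return (sum((a - b) ** 2 for a, b in zip(blurred, ideal)) / n) ** 0.5

print(rms_error(0.0))                      # → 0.0: no eye movement, exact profile
print(rms_error(50.0) < rms_error(100.0))  # → True: more jitter, larger deviation
```

    This reproduces the qualitative finding that random decentration degrades the delivered profile, without modeling the epithelium smoothing or ray tracing steps.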

  20. Theory research of seam recognition and welding torch pose control based on machine vision

    NASA Astrophysics Data System (ADS)

    Long, Qiang; Zhai, Peng; Liu, Miao; He, Kai; Wang, Chunyang

    2017-03-01

    At present, the requirements for welding automation have become higher, so a method of extracting welding information with a vision sensor is proposed in this paper, and a simulation with MATLAB has been conducted. In addition, in order to improve the quality of robotic automatic welding, an information retrieval method for welding torch pose control by visual sensor is developed. Considering the demands of welding technology and engineering practice, the relative coordinate systems and variables are strictly defined, a mathematical model of the welding pose is established, and its feasibility is verified using MATLAB simulation. These works lay a foundation for the development of a welding off-line programming system with high precision and quality.

  1. A Computational Model for Aperture Control in Reach-to-Grasp Movement Based on Predictive Variability

    PubMed Central

    Takemura, Naohiro; Fukui, Takao; Inui, Toshio

    2015-01-01

    In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational and neural network models for reach-to-grasp movement explain the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control. PMID:26696874
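    The core hypothesis above, an aperture margin that compensates for both motor variability and sensory uncertainty, can be written down in a few lines. The object size, noise levels, and the scaling constant `k` below are hypothetical:

```python
def peak_grip_aperture(object_size, sigma_motor, sigma_visual, k=2.0):
    """Sketch of the hypothesis: the aperture targets the object size plus a
    safety margin that scales with the combined motor and sensory uncertainty
    (independent noise sources, so standard deviations add in quadrature)."""
    margin = k * (sigma_motor ** 2 + sigma_visual ** 2) ** 0.5
    return object_size + margin

online   = peak_grip_aperture(40.0, 3.0, 1.0)  # online vision: low visual noise
occluded = peak_grip_aperture(40.0, 3.0, 4.0)  # occluded: visual uncertainty grows
print(occluded > online)  # → True: occlusion predicts a larger peak grip aperture
```

    In the paper's model the motor term is not a constant but the aperture variability predicted online by a Kalman filter, which is why the occlusion effect is strongest early in the movement.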

  2. LED light design method for high contrast and uniform illumination imaging in machine vision.

    PubMed

    Wu, Xiaojun; Gao, Guangming

    2018-03-01

    In machine vision, illumination is critical in determining the complexity of the inspection algorithms. Proper lighting can yield clear, sharp images with the highest contrast and low noise between the object of interest and the background, which is conducive to the target being located, measured, or inspected. In contrast to the empirical trial-and-error convention of selecting off-the-shelf LED lights in machine vision, an optimization algorithm for LED light design is proposed in this paper. It is composed of contrast optimization modeling and a uniform illumination technology for non-normal incidence (UINI). The contrast optimization model is built based on surface reflection characteristics, e.g., the roughness, the refractive index, and the light direction, to maximize the contrast between the features of interest and the background. The UINI keeps the lighting optimized by the contrast optimization model uniform. The simulation and experimental results demonstrate that the optimization algorithm is effective and suitable for producing images with the highest contrast and uniformity, which is instructive for the design of LED illumination systems in machine vision.
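    A toy version of the contrast-optimization idea: scan candidate light directions and keep the one that maximizes Michelson contrast between object and background under a simple Lambertian-plus-specular reflectance assumption. The materials and the reflectance model are illustrative, not the paper's:

```python
import math

def michelson(i_obj, i_bg):
    """Michelson contrast between two image intensities."""
    return abs(i_obj - i_bg) / (i_obj + i_bg)

def radiance(angle_deg, albedo, shininess, spec):
    """Toy reflectance: Lambertian term plus a specular lobe peaking when the
    light is at 45 degrees (viewer assumed at the mirror direction)."""
    a = math.radians(angle_deg)
    diffuse = albedo * max(0.0, math.cos(a))
    specular = spec * math.cos(a - math.radians(45.0)) ** shininess
    return diffuse + max(0.0, specular)

def best_light_angle(obj, bg, angles=range(0, 90, 5)):
    """Pick the incidence angle maximizing object/background Michelson contrast."""
    return max(angles, key=lambda t: michelson(radiance(t, *obj), radiance(t, *bg)))

# Hypothetical materials: a glossy metal feature on a matte, diffuse background.
metal = (0.2, 64, 1.0)   # low albedo, sharp specular lobe
matte = (0.4, 1, 0.0)    # diffuse only
print(best_light_angle(metal, matte))  # → 45: the specular lobe dominates there
```

    The paper optimizes over a physical reflectance model of the real surfaces rather than this toy one, but the objective, contrast between features of interest and background, has the same shape.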

  3. Assistive peripheral phosphene arrays deliver advantages in obstacle avoidance in simulated end-stage retinitis pigmentosa: a virtual-reality study

    NASA Astrophysics Data System (ADS)

    Zapf, Marc Patrick H.; Boon, Mei-Ying; Lovell, Nigel H.; Suaning, Gregg J.

    2016-04-01

    Objective. The prospective efficacy of peripheral retinal prostheses for guiding orientation and mobility in the absence of residual vision, as compared to an implant for the central visual field (VF), was evaluated using simulated prosthetic vision (SPV). Approach. Sighted volunteers wearing a head-mounted display performed an obstacle circumvention task under SPV. Mobility and orientation performance with three layouts of prosthetic vision were compared: peripheral prosthetic vision of higher visual acuity (VA) but limited VF, of wider VF but limited VA, as well as centrally restricted prosthetic vision. Learning curves using these layouts were compared fitting an exponential model to the mobility and orientation measures. Main results. Using peripheral layouts, performance was superior to the central layout. Walking speed with both higher-acuity and wider-angle layouts was 5.6% higher, and mobility errors reduced by 46.4% and 48.6%, respectively, as compared to the central layout. The wider-angle layout yielded the least number of collisions, 63% less than the higher-acuity and 73% less than the central layout. Using peripheral layouts, the number of visual-scanning related head movements was 54.3% (higher-acuity) and 60.7% (wider-angle) lower, as compared to the central layout, and the ratio of time standing versus time walking was 51.9% and 61.5% lower, respectively. Learning curves did not differ between layouts, except for time standing versus time walking, where both peripheral layouts achieved significantly lower asymptotic values compared to the central layout. Significance. Beyond complementing residual vision for an improved performance, peripheral prosthetic vision can effectively guide mobility in the later stages of retinitis pigmentosa (RP) without residual vision. Further, the temporal dynamics of learning peripheral and central prosthetic vision are similar. 
Therefore, development of a peripheral retinal prosthesis and early implantation to alleviate VF constriction in RP should be considered to extend the target group and the time of benefit for potential retinal prosthesis implantees.

  4. Learning prosthetic vision: a virtual-reality study.

    PubMed

    Chen, Spencer C; Hallum, Luke E; Lovell, Nigel H; Suaning, Gregg J

    2005-09-01

    Acceptance of prosthetic vision will be heavily dependent on the ability of recipients to form useful information from such vision. Training strategies to accelerate learning and maximize visual comprehension would need to be designed in the light of the factors affecting human learning under prosthetic vision. Some of these potential factors were examined in a visual acuity study using the Landolt C optotype under virtual-reality simulation of prosthetic vision. Fifteen normally sighted subjects were tested for 10-20 sessions. Potential learning factors were tested at p < 0.05 with regression models. Learning was most evident across-sessions, though 17% of sessions did express significant within-session trends. Learning was highly concentrated toward a critical range of optotype sizes, and subjects were less capable in identifying the closed optotype (a Landolt C with no gap, forming a closed annulus). Training for implant recipients should target these critical sizes and the closed optotype to extend the limit of visual comprehension. Although there was no evidence that image processing affected overall learning, subjects showed varying personal preferences.

  5. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1973-01-01

    A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model incorporates changes and improvements made to a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path can contain arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model, and two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
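    One simple way to compute the visual occlusion of one object by another, in the spirit of the occlusion algorithms mentioned above, is a 2D segment-circle intersection test. The coordinates and radii below are hypothetical:

```python
import math

def occluded(eye, target, obstacle_centre, radius):
    """True if the line of sight from eye to target passes through a circular
    obstacle (a minimal 2D stand-in for object-occlusion computation)."""
    ex, ey = eye; tx, ty = target; cx, cy = obstacle_centre
    dx, dy = tx - ex, ty - ey
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(cx - ex, cy - ey) < radius
    # project the obstacle centre onto the sight line, clamped to the segment
    t = max(0.0, min(1.0, ((cx - ex) * dx + (cy - ey) * dy) / seg_len2))
    px, py = ex + t * dx, ey + t * dy
    return math.hypot(cx - px, cy - py) < radius

print(occluded((0, 0), (10, 0), (5, 0.5), 1.0))  # → True: rock sits on the sight line
print(occluded((0, 0), (10, 0), (5, 3.0), 1.0))  # → False: rock is well off to the side
```

    Running this test against every scattered object on the plane gives the set of objects hidden from the robot's current viewpoint.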

  6. Catheter Insertion Reference Trajectory Construction Method Using Photoelastic Stress Analysis for Quantification of Respect for Tissue During Endovascular Surgery Simulation

    NASA Astrophysics Data System (ADS)

    Tercero, Carlos; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto; Takahashi, Ikuo

    2011-10-01

    There is a need to develop quantitative evaluation for simulator-based training in medicine. Photoelastic stress analysis can be used in human tissue modeling materials; this enables the development of simulators that measure respect for tissue. To apply this to endovascular surgery, we first present a model of a saccular aneurysm in which stress variation during micro-coil deployment is measured; then, relying on a bi-planar vision system, we measure a catheter trajectory and compare it to a reference trajectory that accounts for respect for tissue. New photoelastic tissue modeling materials will expand the applications of this technology to other medical training domains.

  7. A digital retina-like low-level vision processor.

    PubMed

    Mertoguno, S; Bourbakis, N G

    2003-01-01

    This correspondence presents the basic design and the simulation of a low-level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k×m hexagonal identical autonomous cells that simultaneously execute certain low-level vision tasks. Thus, the hardware design and the transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples, are provided in this paper.
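    The smoothing layer of such a hexagonal array processor can be sketched with each processing element averaging its own value with its six neighbours. The axial-coordinate convention and the tiny grid below are illustrative assumptions:

```python
# Axial-coordinate offsets of the six neighbours of a hexagonal cell
HEX_NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def hex_smooth(values):
    """One smoothing step on a hexagonal array: every cell (a processing element
    in the retina-like layer) replaces its value with the mean of itself and
    whichever of its six neighbours exist in the grid."""
    out = {}
    for (q, r), v in values.items():
        neigh = [values[(q + dq, r + dr)]
                 for dq, dr in HEX_NEIGHBOURS if (q + dq, r + dr) in values]
        out[(q, r)] = (v + sum(neigh)) / (1 + len(neigh))
    return out

# A bright central cell surrounded by a dark ring of its six neighbours
grid = {(0, 0): 7.0}
grid.update({d: 0.0 for d in HEX_NEIGHBOURS})
print(hex_smooth(grid)[(0, 0)])  # → 1.0: the centre's value spreads into the ring
```

    In the hardware, every cell executes this update simultaneously; the sequential dictionary loop here stands in for that parallelism.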

  8. Vision Research for Flight Simulation. Final Report.

    ERIC Educational Resources Information Center

    Richards, Whitman, Ed.; Dismukes, Key, Ed.

    Based on a workshop on vision research issues in flight-training simulators held in June 1980, this report focuses on approaches for the conduct of research on what visual information is needed for simulation and how it can best be presented. An introduction gives an overview of the workshop and describes the contents of the report. Section 1…

  9. Intelligent robot control using an adaptive critic with a task control center and dynamic database

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.

    2006-10-01

    The purpose of this paper is to describe the design, development and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can be easily stored in the dynamic database. The multi-task controller also permits wide application. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.

  10. Quantification of the impact of hydrology on agricultural production as a result of too dry, too wet or too saline conditions

    NASA Astrophysics Data System (ADS)

    Hack-ten Broeke, Mirjam J. D.; Kroes, Joop G.; Bartholomeus, Ruud P.; van Dam, Jos C.; de Wit, Allard J. W.; Supit, Iwan; Walvoort, Dennis J. J.; van Bakel, P. Jan T.; Ruijtenberg, Rob

    2016-08-01

    For calculating the effects of hydrological measures on agricultural production in the Netherlands, a new comprehensive and climate-proof method is being developed: WaterVision Agriculture (in Dutch: Waterwijzer Landbouw). End users have asked for a method that considers current and future climate, that can quantify the differences between years, and that captures the effects of extreme weather events. Furthermore, they would like a method that considers current farm management and that can distinguish three different causes of crop yield reduction: drought, saline conditions, or too wet conditions causing oxygen shortage in the root zone. WaterVision Agriculture is based on the hydrological simulation model SWAP and the crop growth model WOFOST. SWAP simulates water transport in the unsaturated zone using meteorological data, boundary conditions (like groundwater level or drainage) and soil parameters. WOFOST simulates crop growth as a function of meteorological conditions and crop parameters. Using the combination of these process-based models we have derived a meta-model, i.e. a set of easily applicable simplified relations for assessing crop growth as a function of soil type and groundwater level. These relations are based on multiple model runs for at least 72 soil units and the possible groundwater regimes in the Netherlands. So far, we have parameterized the model for the crops silage maize and grassland. For the assessment, the soil characteristics (soil water retention and hydraulic conductivity) are very important input parameters for all soil layers of these 72 soil units, which cover all soils in the Netherlands. 
This paper describes (i) the setup and examples of application of the process-based model SWAP-WOFOST, (ii) the development of the simplified relations based on this model and (iii) how WaterVision Agriculture can be used by farmers, regional government, water boards and others to assess crop yield reduction as a function of groundwater characteristics or as a function of the salt concentration in the root zone for the various soil types.

  11. Simulating age-related changes in color vision to assess the ability of older adults to take medication.

    PubMed

    Skomrock, Lindsay K; Richardson, Virginia E

    2010-03-01

    To determine if simulated, age-related changes in color vision can adversely affect one's ability to properly take medication as simulated by bead selection. Randomized controlled study. University site. University students 18 to 26 years of age without eye disorders that would affect color vision. Yellow-lens glasses to represent age-related color vision changes. The number of correct beads selected and rating of task difficulty. The secondary outcomes were participants' responses based on which colors and color pairs were most difficult to discern and strategies they might have used to select beads. The control group had no difficulties in selecting the appropriate beads, while the experimental group had significantly more mistakes, particularly with colors in the blue-violet spectrum. Average scores for the total number correct for the control and experimental groups were 36 (100%) and 27 (74.4%), P < 0.001, respectively, out of a possible 36 correct. Declines in color vision with age can adversely affect an individual's abilities to appropriately select medications. For patients taking several medications, declines in color vision should be considered when counseling older persons on strategies for compliance. Although more studies are still needed to further generalize these findings to the geriatric population, this study has shown color vision can adversely affect medication compliance.

  12. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    NASA Astrophysics Data System (ADS)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. An approach for pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research & Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g., rain, clouds, snow, dust, smoke, chemical releases) is being calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and will describe the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT.
The 3D virtual reality software is a complete simulation system to generate realistic target - background scenes and display the results in a DirectX environment. This paper will describe our approach and show a brief demonstration of the software capabilities. The work is supported by the SBIR program under contract N61339-06-C-0113.

  13. A neurophysiologically plausible population code model for feature integration explains visual crowding.

    PubMed

    van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W

    2010-01-22

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
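The population-code account above can be illustrated with a toy sketch (not the authors' implementation): orientation is encoded by Gaussian tuning curves over preferred orientations, and when target and flanker populations are pooled over an integration region, a population-vector decoder returns an orientation between the two, i.e. "compulsory averaging". All parameters here (tuning width, decoder) are illustrative assumptions.

```python
import numpy as np

prefs = np.linspace(-90, 90, 181)          # preferred orientations (deg)

def population_response(theta, sigma=20.0):
    """Gaussian tuning curves over the bank of preferred orientations."""
    return np.exp(-0.5 * ((prefs - theta) / sigma) ** 2)

def decode(resp):
    """Population-vector decoding on the double-angle circle
    (orientation is 180-deg periodic)."""
    ang = np.deg2rad(2 * prefs)
    return np.rad2deg(np.arctan2((resp * np.sin(ang)).sum(),
                                 (resp * np.cos(ang)).sum())) / 2

target, flanker = -20.0, 20.0
# spatial integration pools both populations into one response
pooled = population_response(target) + population_response(flanker)
print(decode(pooled))                      # ≈ 0: the average of the two
```

Decoding either population alone recovers its own orientation; it is only the pooled response that collapses to the average, which is the signature crowding effect the model reproduces.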

  14. Compact, self-contained enhanced-vision system (EVS) sensor simulator

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo

    2007-04-01

    We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
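The sensor artifacts listed in the abstract (random noise, fixed-pattern noise, dead pixels) are standard degradations; a minimal sketch of applying them to a clean simulated frame might look as follows. This is an illustrative model, not the SIM-100 implementation, and all noise magnitudes are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img, read_noise=0.02, fpn_sigma=0.01, dead_frac=0.001):
    """Apply random noise, fixed-pattern noise, and dead pixels."""
    h, w = img.shape
    fpn = rng.normal(0.0, fpn_sigma, (h, w))      # fixed per-pixel offset
    noisy = img + fpn + rng.normal(0.0, read_noise, (h, w))
    dead = rng.random((h, w)) < dead_frac         # dead pixels read zero
    noisy[dead] = 0.0
    return np.clip(noisy, 0.0, 1.0)

clean = np.full((256, 256), 0.5)                  # uniform test scene
out = degrade(clean)
```

In a real simulator the fixed-pattern map would be generated once per simulated sensor and reused across frames, while the read noise is redrawn every frame.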

  15. Colour Coding of Maps for Colour Deficient Observers.

    PubMed

    Røise, Anne Kari; Kvitle, Anne Kristin; Green, Phil

    2016-01-01

    We evaluate the colour coding of a web map traffic information service using profiles simulating colour vision deficiencies. Based on these simulations and principles of universal design, we propose adjustments to the existing colours, creating more readable maps for colour vision deficient observers.

  16. Virtual wayfinding using simulated prosthetic vision in gaze-locked viewing.

    PubMed

    Wang, Lin; Yang, Liancheng; Dagnelie, Gislin

    2008-11-01

    To assess virtual maze navigation performance with simulated prosthetic vision in gaze-locked viewing, under the conditions of varying luminance contrast, background noise, and phosphene dropout. Four normally sighted subjects performed virtual maze navigation using simulated prosthetic vision in gaze-locked viewing, under five conditions of luminance contrast, background noise, and phosphene dropout. Navigation performance was measured as the time required to traverse a 10-room maze using a game controller, and the number of errors made during the trip. Navigation performance time (1) became stable after 6 to 10 trials, (2) remained similar on average at luminance contrast of 68% and 16% but had greater variation at 16%, (3) was not significantly affected by background noise, and (4) increased by 40% when 30% of phosphenes were removed. Navigation performance time and number of errors were significantly and positively correlated. Assuming that the simulated gaze-locked viewing conditions are extended to implant wearers, such prosthetic vision can be helpful for wayfinding in simple mobility tasks, though phosphene dropout may interfere with performance.
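A common way to simulate prosthetic vision of the kind described is to reduce a scene to a coarse grid of phosphene intensities and zero out a fraction of them for the dropout condition. The sketch below is an assumed rendering scheme with illustrative parameters, not the study's software.

```python
import numpy as np

rng = np.random.default_rng(1)

def phosphene_render(img, grid=(10, 10), dropout=0.3):
    """Reduce an image to a phosphene grid; drop a fraction of phosphenes."""
    h, w = img.shape
    gh, gw = grid
    # mean intensity per phosphene cell
    cells = img[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    phos = cells.mean(axis=(1, 3))
    alive = rng.random(grid) >= dropout           # simulate phosphene dropout
    return phos * alive

scene = rng.random((100, 100))
rendered = phosphene_render(scene)                # 10x10 phosphene "image"
```

With 30% dropout the navigable structure of the maze degrades noticeably, which is consistent with the 40% increase in navigation time reported above.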

  17. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision

    PubMed Central

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. Interactions between layers 4, 3B, and 2/3 in V1 and V2 carry out stereopsis and 3D boundary formation. Both binocular and monocular information combine to form 3D boundary and surface representations. Surface contour surface-to-boundary feedback from V2 thin stripes to V2 pale stripes combines computationally complementary boundary and surface formation properties, leading to a single consistent percept, while also eliminating redundant 3D boundaries, and triggering figure-ground perception. False binocular boundary matches are eliminated by Gestalt grouping properties during boundary formation. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. 
These model mechanisms have also simulated properties of 3D neon color spreading, binocular rivalry, 3D Necker cube, and many examples of 3D figure-ground separation. PMID:25309467

  19. Development of VIPER: a simulator for assessing vision performance of warfighters

    NASA Astrophysics Data System (ADS)

    Familoni, Jide; Thompson, Roger; Moyer, Steve; Mueller, Gregory; Williams, Tim; Nguyen, Hung-Quang; Espinola, Richard L.; Sia, Rose K.; Ryan, Denise S.; Rivers, Bruce A.

    2016-05-01

    Background: When evaluating vision, it is important to assess not just the ability to read letters on a vision chart, but also how well one sees in real life scenarios. As part of the Warfighter Refractive Eye Surgery Program (WRESP), visual outcomes are assessed before and after refractive surgery. A Warfighter's ability to read signs and detect and identify objects is crucial, not only when deployed in a military setting, but also in their civilian lives. Objective: VIPER, a VIsion PERformance simulator, was envisioned as actual video-based simulated driving to test warfighters' functional vision under realistic conditions. Designed to use interactive video image controlled environments at daytime, dusk, night, and with thermal imaging vision, it simulates the experience of viewing and identifying road signs and other objects while driving. We hypothesize that VIPER will facilitate efficient and quantifiable assessment of changes in vision and measurement of functional military performance. Study Design: Video images were recorded on an isolated 1.1 mile stretch of road with separate target sets of six simulated road signs and six objects of military interest. The video footage was integrated with custom-designed C++ based software that presented the simulated drive to an observer on a computer monitor at 10, 20, or 30 miles/hour. VIPER permits the observer to indicate when a target is seen and when it is identified. Distances at which the observer recognizes and identifies targets are automatically logged. Errors in recognition and identification are also recorded. This first report describes VIPER's development and a preliminary study to establish a baseline for its performance. In the study, nine soldiers viewed simulations at 10 miles/hour and 30 miles/hour, run in randomized order for each participant seated at 36 inches from the monitor. 
Relevance: Ultimately, patients are interested in how their vision will affect their ability to perform daily activities. In the military context, in addition to reading road signs, this includes vision with night sensors and identification of objects of military interest. Once completed and validated, VIPER will be used to evaluate functional performance before and after refractive surgery. Results: This initial study was to prove the principle, and its results at the time of this publication were very preliminary. Nine soldiers viewed visible-day and IR-day VIPER simulations with civilian and military targets, separately, at 10 and 30 miles/hour. Analyses were performed separately for visible and IR, and also aggregated. Only the civilian targets are discussed in this report. At 10 miles/hour, the population detected civilian road signs at an aggregated average of 90.11 +/- 64.20 m and identified them at 26.93 +/- 22.27 m. At 30 miles/hour, the corresponding distances were 103.03 +/- 58.81 m and 26.26 +/- 8.55 m, respectively. Conclusion: This preliminary report proves the principle and suggests that VIPER could be a useful clinical tool in longitudinal assessment of functional vision in warfighters.
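Logging a detection distance from a simulated drive reduces to simple kinematics: the distance remaining to the target at the moment of the button press. The sketch below is hypothetical; the function name and example numbers are illustrative and not taken from the VIPER software.

```python
MPH_TO_MPS = 0.44704  # exact miles/hour to metres/second conversion

def detection_distance_m(start_distance_m, speed_mph, t_press_s):
    """Distance remaining to the target when the observer responds."""
    return start_distance_m - speed_mph * MPH_TO_MPS * t_press_s

# e.g. a sign 200 m ahead, driving at 30 mph, response 5 s into the approach
d = detection_distance_m(200.0, 30.0, 5.0)
```

Driving faster means more ground covered before the press, which is why identification distances can stay roughly constant across speeds while detection distances shift, as in the aggregated results above.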

  20. What Aspects of Vision Facilitate Haptic Processing?

    ERIC Educational Resources Information Center

    Millar, Susanna; Al-Attar, Zainab

    2005-01-01

    We investigate how vision affects haptic performance when task-relevant visual cues are reduced or excluded. The task was to remember the spatial location of six landmarks that were explored by touch in a tactile map. Here, we use specially designed spectacles that simulate residual peripheral vision, tunnel vision, diffuse light perception, and…

  1. Modeling peripheral vision for moving target search and detection.

    PubMed

    Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre

    2012-06-01

    Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. Twenty-three subjects participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In the urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In the rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 of 50 POs in the urban scenario and 5.39 in the rural scenario. Both saccade reaction time and button reaction time can be predicted by the peripheral angle and entrance speed of POs. Fast-moving objects were detected faster than slower ones, and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.
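The per-subject regression described above can be sketched as a second-order least-squares fit of reaction time on eccentricity angle and entrance speed. The data below are synthetic and the coefficients illustrative (chosen only to match the direction of the reported effects); a full mixed-effect model would additionally pool coefficients across subjects.

```python
import numpy as np

rng = np.random.default_rng(2)
angle = rng.uniform(10, 60, 200)        # eccentricity angle (deg)
speed = rng.uniform(1, 10, 200)         # entrance speed (deg/s)
# synthetic ground truth: wider angle -> slower, faster object -> quicker
rt = 0.4 + 0.01 * angle - 0.02 * speed + rng.normal(0, 0.02, 200)

# second-order design matrix: intercept, linear, quadratic, interaction
X = np.column_stack([np.ones_like(angle), angle, speed,
                     angle**2, speed**2, angle * speed])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
pred = X @ coef                         # predicted reaction times
```

Fitting this model once per subject gives the individual prediction functions the abstract refers to; the random-effect structure would then describe how the six coefficients vary over the population.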

  2. Blunt forehead trauma and optic canal involvement: finite element analysis of anterior skull base and orbit on causes of vision impairment.

    PubMed

    Huempfner-Hierl, Heike; Bohne, Alexander; Wollny, Gert; Sterker, Ina; Hierl, Thomas

    2015-10-01

    Clinical studies report on vision impairment after blunt frontal head trauma. A possible cause is damage to the optic nerve bundle within the optic canal due to microfractures of the anterior skull base, leading to indirect traumatic optic neuropathy. A finite element study simulating impact forces of different magnitudes on the paramedian forehead was initiated. The set-up consisted of a high-resolution skull model with about 740 000 elements and a blunt impactor, and was solved in a transient, time-dependent simulation. Individual bone material parameters were calculated for each volume element to increase realism. Results showed stress propagation from the frontal impact towards the optic foramen and the chiasm even at low-force, fist-like impacts. Higher impacts produced stress patterns corresponding to typical fracture patterns of the anterior skull base, including the optic canal. The transient simulation revealed two stress peaks, indicating oscillation. It can be concluded that even comparatively low stresses and oscillation in the optic foramen may cause microdamage not discernible on CT or MRI, explaining subsequent vision loss. Higher impacts lead to typical comminuted fractures, which may affect the integrity of the optic canal. Finite element simulation can be effectively used in studying head trauma and its clinical consequences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  3. Peripheral vision cues : their effect on pilot performance during instrument landing approaches and recoveries from unusual attitudes.

    DOT National Transportation Integrated Search

    1968-05-01

    The study explores the effects of peripheral vision cues on the performance of 20 ATR pilots during simulated instrument landing approaches in a Boeing 720 jet aircraft simulator. Recoveries from unusual attitudes were also investigated. Results of ...

  4. A comparison of effects of peripheral vision cues on pilot performance during instrument flight in dissimilar aircraft simulators.

    DOT National Transportation Integrated Search

    1968-09-01

    Pilot response to peripheral vision cues relating to aircraft bank angle was studied during instrument flight in two simulators representing (1) a conventional, medium weight, piston engine airliner, and (2) a heavy, jet engine, sweptwing transport. ...

  5. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using movement invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  6. Simulation of the Fissureless Technique for Thoracoscopic Segmentectomy Using Rapid Prototyping

    PubMed Central

    Nakada, Takeo; Inagaki, Takuya

    2014-01-01

    The fissureless lobectomy or anterior fissureless technique is a novel surgical technique, which avoids dissection of the lung parenchyma over the pulmonary artery during lobectomy by open thoracotomy approach or direct vision thoracoscopic surgery. This technique is indicated for fused lobes. We present two cases where thoracoscopic pulmonary segmentectomy was performed using the fissureless technique simulated by three-dimensional (3D) pulmonary models. The 3D model and rapid prototyping provided an accurate anatomical understanding of the operative field in both cases. We believe that the construction of these models is useful for thoracoscopic and other complicated surgeries of the chest. PMID:24633132

  7. Simulation of the fissureless technique for thoracoscopic segmentectomy using rapid prototyping.

    PubMed

    Akiba, Tadashi; Nakada, Takeo; Inagaki, Takuya

    2015-01-01

    The fissureless lobectomy or anterior fissureless technique is a novel surgical technique, which avoids dissection of the lung parenchyma over the pulmonary artery during lobectomy by open thoracotomy approach or direct vision thoracoscopic surgery. This technique is indicated for fused lobes. We present two cases where thoracoscopic pulmonary segmentectomy was performed using the fissureless technique simulated by three-dimensional (3D) pulmonary models. The 3D model and rapid prototyping provided an accurate anatomical understanding of the operative field in both cases. We believe that the construction of these models is useful for thoracoscopic and other complicated surgeries of the chest.

  8. Optimum Laser Beam Characteristics for Achieving Smoother Ablations in Laser Vision Correction.

    PubMed

    Verma, Shwetabh; Hesser, Juergen; Arba-Mosquera, Samuel

    2017-04-01

    Controversial opinions exist regarding optimum laser beam characteristics for achieving smoother ablations in laser-based vision correction. The purpose of the study was to outline a rigorous simulation model for the shot-by-shot ablation process. The impact of laser beam characteristics such as super-Gaussian order, truncation radius, spot geometry, spot overlap, and lattice geometry on ablation smoothness was tested. Given the super-Gaussian order, the theoretical beam profile was determined following the Beer-Lambert law. The intensity beam profile originating from an excimer laser was measured with a beam profiler camera. For both the measured and theoretical beam profiles, two spot geometries (round and square spots) were considered, and two types of lattices (reticular and triangular) were simulated with varying spot overlaps and ablated materials (cornea or polymethylmethacrylate [PMMA]). The roughness in ablation was determined by the root mean square per square root of layer depth. Truncating the beam profile increases the roughness in ablation; Gaussian profiles theoretically result in smoother ablations; round spot geometries produce lower roughness than square ones; triangular lattices theoretically produce lower roughness than the reticular lattice; theoretically modeled beam profiles show lower roughness than the measured beam profile; and the simulated roughness on PMMA tends to be lower than on human cornea. For the given input parameters, optimum parameters minimizing the roughness were found. Theoretically, the proposed model can be used for achieving smoothness with laser systems used for ablation processes at relatively low cost. This model may improve the quality of results and could be directly applied for improving postoperative surface quality.
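The shot-by-shot approach described can be sketched with super-Gaussian spots accumulated on a reticular (square) lattice and roughness taken as the RMS of the resulting surface per square root of mean depth. All parameters (spot radius, pitch, grid extent) are illustrative assumptions, not the study's values; the comparison below only reproduces the qualitative finding that a Gaussian profile (order n=2) yields a smoother surface than a flat-top (high-order) profile at the same overlap.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(x, x)

def super_gaussian_spot(cx, cy, r0=0.5, n=2, depth=1.0):
    """Per-shot ablation depth; n=2 is Gaussian, large n -> flat-top."""
    r2 = (X - cx) ** 2 + (Y - cy) ** 2
    return depth * np.exp(-((r2 / r0**2) ** (n / 2)))

def ablate(pitch, n):
    """Accumulate shots on a square (reticular) lattice over the grid."""
    surf = np.zeros_like(X)
    for cx in np.arange(-2.0, 2.0 + pitch / 2, pitch):
        for cy in np.arange(-2.0, 2.0 + pitch / 2, pitch):
            surf += super_gaussian_spot(cx, cy, n=n)
    return surf

def roughness(surf):
    core = surf[75:126, 75:126]   # central region, away from lattice edges
    return core.std() / np.sqrt(core.mean())

smooth = roughness(ablate(0.25, n=2))   # Gaussian spots, dense overlap
rough = roughness(ablate(0.25, n=8))    # flat-top spots, same overlap
```

With identical pitch, the Gaussian spots sum to a nearly flat surface while the steep-edged flat-top spots leave a lattice-frequency ripple, illustrating why profile shape matters as much as overlap.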

  9. Heading assessment by "tunnel vision" patients and control subjects standing or walking in a virtual reality environment.

    PubMed

    Apfelbaum, Henry; Pelah, Adar; Peli, Eli

    2007-01-01

    Virtual reality locomotion simulators are a promising tool for evaluating the effectiveness of vision aids to mobility for people with low vision. This study examined two factors to gain insight into the verisimilitude requirements of the test environment: the effects of treadmill walking and the suitability of using controls as surrogate patients. Ten "tunnel vision" patients with retinitis pigmentosa (RP) were tasked with identifying which side of a clearly visible obstacle their heading through the virtual environment would lead them, and were scored both on accuracy and on their distance from the obstacle when they responded. They were tested both while walking on a treadmill and while standing, as they viewed a scene representing progress through a shopping mall. Control subjects, each wearing a head-mounted field restriction to simulate the vision of a paired patient, were also tested. At wide angles of approach, controls and patients performed with a comparably high degree of accuracy, and made their choices at comparable distances from the obstacle. At narrow angles of approach, patients' accuracy increased when walking, while controls' accuracy decreased. When walking, both patients and controls delayed their decisions until closer to the obstacle. We conclude that a head-mounted field restriction is not sufficient for simulating tunnel vision, but that the improved performance observed for walking compared to standing suggests that a walking interface (such as a treadmill) may be essential for eliciting natural perceptually-guided behavior in virtual reality locomotion simulators.

  10. Engineering workstation: Sensor modeling

    NASA Technical Reports Server (NTRS)

    Pavel, M; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  11. Recommendations for the Implementation of the LASSO Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I; Vogelmann, Andrew M; Cheng, Xiaoping

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM's extensive suite of measurements. This modeling capability, envisioned in the ARM Decadal Vision (U.S. Department of Energy 2014), has since been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project, and it has an initial focus on shallow convection at the ARM Southern Great Plains (SGP) atmospheric observatory. This report documents the recommendations resulting from the pilot project to be considered by ARM for implementation into routine operations. During the pilot phase, LASSO evolved from the initial vision outlined in the pilot project white paper (Gustafson and Vogelmann 2015) to what is recommended in this report. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso. Feedback regarding LASSO and the recommendations in this report can be directed to William Gustafson, the project principal investigator (PI), and Andrew Vogelmann, the co-principal investigator (Co-PI), via lasso@arm.gov.

  12. A Petri-net coordination model for an intelligent mobile robot

    NASA Technical Reports Server (NTRS)

    Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.

    1990-01-01

    The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate the task processing and to evaluate the efficiency of operations and the responsibility of decisions in the coordination level of the IMRS. Some simulation results on the task processing and learning are presented.
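A Petri net of the kind used for such coordination models can be sketched as places holding tokens and transitions that fire when all their input places are marked. The executor below is a generic textbook implementation; the places and transitions (task dispatch, vision, path planning) are illustrative stand-ins, not the paper's actual net.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:                      # consume one token per input
            self.marking[p] -= 1
        for p in outputs:                     # produce one token per output
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"task_pending": 1, "camera_free": 1})
net.add_transition("start_vision", ["task_pending", "camera_free"],
                   ["image_ready"])
net.add_transition("plan_path", ["image_ready"], ["motion_cmd"])
net.fire("start_vision")
net.fire("plan_path")
```

Encoding the task precedence this way makes the dependence explicit: path planning cannot fire until vision has deposited a token in `image_ready`, which is exactly the kind of coordination constraint the model is meant to analyze.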

  13. Statistically Modeling I-V Characteristics of CNT-FET with LASSO

    NASA Astrophysics Data System (ADS)

    Ma, Dongsheng; Ye, Zuochang; Wang, Yan

    2017-08-01

    With the advent of the Internet of Things (IoT), the need to study new materials and devices for various applications is increasing. Traditionally, compact models for transistors are built on the basis of physics, but physical models are expensive to develop and take a long time to adjust for non-ideal effects. When the envisioned application of a novel device is uncertain or its manufacturing process is immature, deriving generalized, accurate physical models is very strenuous, whereas statistical modeling is a potential alternative because of its data-oriented nature and fast implementation. In this paper, a classical statistical regression method, LASSO, is used to model the I-V characteristics of a CNT-FET, and a pseudo-PMOS inverter simulation based on the trained model is implemented in Cadence. The normalized relative mean square prediction error of the trained model against experimental sample data, together with the simulation results, shows that the model is acceptable for static digital circuit simulation. This modeling methodology can be extended to general devices.
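The LASSO approach to I-V modeling can be sketched as sparse regression over a polynomial feature library. The "measurements" below are synthetic (a made-up smooth I(Vgs, Vds) surface), and the coordinate-descent solver is a textbook implementation, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
vgs = rng.uniform(0, 1, 300)
vds = rng.uniform(0, 1, 300)
# synthetic I-V data: a smooth surface plus measurement noise
current = 1e-6 * vgs**2 * vds + rng.normal(0, 1e-9, 300)

# polynomial feature library up to third order
feats = [vgs, vds, vgs**2, vds**2, vgs*vds, vgs**2*vds, vgs*vds**2, vgs**3]
X = np.column_stack(feats)
X = (X - X.mean(0)) / X.std(0)               # standardize features
y = current - current.mean()

def lasso_cd(X, y, lam, iters=500):
    """Cyclic coordinate descent with soft-thresholding."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X**2).sum(0)
    for _ in range(iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]   # residual excluding feature j
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0) / col_sq[j]
    return w

w = lasso_cd(X, y, lam=1e-8 * len(y))
```

The L1 penalty drives coefficients of irrelevant library terms toward zero, so the fitted model stays compact enough to embed in a circuit simulator, which is the property the paper exploits for its inverter simulation.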

  14. Perceived image quality with simulated segmented bifocal corrections

    PubMed Central

    Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; de Gracia, Pablo; Sawides, Lucie; Marcos, Susana

    2016-01-01

    Bifocal contact or intraocular lenses use the principle of simultaneous vision to correct for presbyopia. A modified two-channel simultaneous vision simulator provided with an amplitude transmission spatial light modulator was used to optically simulate 14 segmented bifocal patterns (+3 diopters addition) with different far/near pupillary distributions of equal energy. Five subjects with paralyzed accommodation evaluated image quality and subjective preference through the segmented bifocal corrections. There were strong and systematic perceptual differences across patterns, subjects and observation distances: 48% of the conditions evaluated were significantly preferred or rejected. Optical simulations (in terms of through-focus Strehl ratio from Hartmann-Shack aberrometry) accurately predicted the pattern producing the highest perceived quality in 4 out of 5 subjects, both for far and near vision. The perceptual differences found arise primarily on optical grounds, but have an important neural component. PMID:27895981

  15. Use of statistical study methods for the analysis of the results of the imitation modeling of radiation transfer

    NASA Astrophysics Data System (ADS)

    Alekseenko, M. A.; Gendrina, I. Yu.

    2017-11-01

    Recently, owing to the abundance of observational data of various types in systems for vision through the atmosphere and the need for their processing, methods of statistical research such as correlation-regression analysis, time series analysis, and analysis of variance have become relevant to the study of such systems. We have attempted to apply elements of correlation-regression analysis to the study and subsequent prediction of the patterns of radiation transfer in these systems, as well as to the construction of radiation models of the atmosphere. In this paper, we present some results of statistical processing of numerical simulations of the characteristics of vision systems through the atmosphere, obtained with the help of a special software package.

  16. Real-time simulation of the retina allowing visualization of each processing stage

    NASA Astrophysics Data System (ADS)

    Teeters, Jeffrey L.; Werblin, Frank S.

    1991-08-01

    The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina, and using a special-purpose image processing computer (PIPE) to implement the model in real time. Processing in the model is organized into stages corresponding to computations performed by each retinal cell type. The final stage is the transient (change-detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output, which is displayed on a TV monitor. By changing the retinal cell type driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina and fed into the brain. The dynamical aspects make these patterns very different from those generated by the common DOG (Difference of Gaussians) model of the receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
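
    For reference, the DOG receptive-field model contrasted above can be sketched in one dimension as a narrow excitatory center minus a broad inhibitory surround (parameter values are illustrative):

```python
# 1-D Difference of Gaussians (DOG) receptive field: a narrow excitatory
# center Gaussian minus a broader, weighted inhibitory surround Gaussian.
import numpy as np

def dog(x, sigma_c=1.0, sigma_s=3.0, k=0.9):
    """Center-surround profile; sigma_c < sigma_s, k scales the surround."""
    center = np.exp(-x**2 / (2 * sigma_c**2)) / (sigma_c * np.sqrt(2 * np.pi))
    surround = np.exp(-x**2 / (2 * sigma_s**2)) / (sigma_s * np.sqrt(2 * np.pi))
    return center - k * surround

x = np.linspace(-10, 10, 2001)
rf = dog(x)
# Hallmarks of the DOG profile: a positive peak at the center with
# negative (inhibitory) flanks further out.
```

    Unlike the staged retina model above, this profile is static: it has no temporal dynamics at all.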

  17. Microscopic transport model animation visualisation on KML base

    NASA Astrophysics Data System (ADS)

    Yatskiv, I.; Savrasovs, M.

    2012-10-01

    Classical literature on simulation theory notes that one of the great strengths of simulation is the ability to present the processes inside a system through animation. Animation gives a simulation model additional value when presenting results to the public and to authorities who are not familiar with simulation. That is why most universal and specialised simulation tools can construct 2D and 3D representations of the model. Developing such a representation, however, can take considerable time, and much effort must go into creating an adequate 3D representation of the model. Well-known microscopic traffic flow simulation tools such as VISSIM, AIMSUN and PARAMICS have long been able to produce 2D and 3D animation, but creating a realistic 3D model of the place where traffic flows are simulated is hard and time-consuming even in these professional tools. The goal of this paper is to describe the concept of using existing on-line geographical information systems to visualise animation produced by simulation software. For demonstration purposes the following technologies and tools have been used: PTV VISION VISSIM, KML and Google Earth.

  18. An augmented-reality edge enhancement application for Google Glass.

    PubMed

    Hwang, Alex D; Peli, Eli

    2014-08-01

    Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearer. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected using image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angles differ by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancements was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when they used the edge enhancements with the diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why improvements were observed only under the diffuser-film condition simulating low vision. With the benefit of see-through augmented-reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.

  19. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

    Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory has permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  20. The Importance of Simulation Workflow and Data Management in the Accelerated Climate Modeling for Energy Project

    NASA Astrophysics Data System (ADS)

    Bader, D. C.

    2015-12-01

    The Accelerated Climate Modeling for Energy (ACME) Project is concluding its first year. Supported by the Office of Science in the U.S. Department of Energy (DOE), its vision is to be "an ongoing, state-of-the-science Earth system modeling, simulation and prediction project that optimizes the use of DOE laboratory resources to meet the science needs of the nation and the mission needs of DOE." Included in the "laboratory resources" is a large investment in computational, network and information technologies that will be used both to build better and more accurate climate models and to broadly disseminate the data they generate. Current model diagnostic analysis and data dissemination technologies will not scale to the size of the simulations and the complexity of the models envisioned by ACME and other top-tier international modeling centers. This talk describes the ACME Workflow component's plans to meet these future needs and highlights early implementation examples.

  1. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    NASA Technical Reports Server (NTRS)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered "to identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains 5 sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.

  2. A Neurophysiologically Plausible Population Code Model for Feature Integration Explains Visual Crowding

    PubMed Central

    van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.

    2010-01-01

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
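
    The "compulsory averaging" signature of this population-code account can be sketched directly: pooling Gaussian-tuned population responses to two nearby orientations yields a read-out at their mean rather than at either stimulus. Tuning width and stimulus orientations below are illustrative, not the paper's parameters.

```python
# Population-coding sketch of compulsory averaging: orientation-tuned units
# respond to both target and flanker; spatial pooling of the two population
# responses produces a single peak at the mean orientation.
import numpy as np

prefs = np.linspace(-90, 90, 181)        # preferred orientations (degrees)

def population_response(stim_deg, sigma=25.0):
    """Gaussian tuning curves of a unit population to one oriented stimulus."""
    return np.exp(-(prefs - stim_deg) ** 2 / (2 * sigma ** 2))

# Target at -20 deg, flanker at +20 deg, pooled by spatial integration.
pooled = population_response(-20) + population_response(20)
decoded = prefs[np.argmax(pooled)]       # peak read-out of the pooled code
# decoded lands at the mean of the two orientations, not at either stimulus.
```

    With narrower tuning (or wider spacing) the pooled response becomes bimodal and the two orientations remain separable, which is one way the model captures critical spacing.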

  3. Vision-Based Precision Landings of a Tailsitter UAV

    DTIC Science & Technology

    2010-04-01

    2.2: Schematic of the controller used in simulation. The block diagram shown in Figure 2.2 shows the simulation structure used to simulate the vision...the structure of the flight facility walls, any vibration applied to the structure would potentially change the pose of the cameras. Each camera’s pose...relative to the target in Chap- ter 4, a flat earth assumption was made. In several situations the approximation that the ground over which the UAV is

  4. An Operationally Based Vision Assessment Simulator for Domes

    NASA Technical Reports Server (NTRS)

    Archdeacon, John; Gaska, James; Timoner, Samson

    2012-01-01

    The Operational Based Vision Assessment (OBVA) simulator was designed and built by NASA and the United States Air Force (USAF) to provide the Air Force School of Aerospace Medicine (USAFSAM) with a scientific testing laboratory to study human vision and testing standards in an operationally relevant environment. This paper describes the general design objectives and implementation characteristics of the simulator visual system being created to meet these requirements. A key design objective for the OBVA research simulator is to develop a real-time computer image generator (IG) and display subsystem that can display and update at 120 frames per second (design target), or at a minimum, 60 frames per second, with minimal transport delay, using commercial off-the-shelf (COTS) technology. There are three key parts of the OBVA simulator described in this paper: i) the real-time computer image generator, ii) the various COTS technology used to construct the simulator, and iii) the spherical dome display and real-time distortion correction subsystem. We describe the various issues, possible COTS solutions, and remaining problem areas identified by NASA and the USAF while designing and building the simulator for future vision research. We also describe the critically important relationship of the physical display components, including distortion correction for the dome, consistent with the objective of minimizing latency in the system. The performance of the automatic calibration system used in the dome is also described, and various recommendations for possible future implementations are discussed.

  5. Three-dimensional simulation, surgical navigation and thoracoscopic lung resection

    PubMed Central

    Kanzaki, Masato; Kikkawa, Takuma; Sakamoto, Kei; Maeda, Hideyuki; Wachi, Naoko; Komine, Hiroshi; Oyama, Kunihiro; Murasugi, Masahide; Onuki, Takamasa

    2013-01-01

    This report describes a 3-dimensional (3-D) video-assisted thoracoscopic lung resection guided by a 3-D video navigation system with a patient-specific 3-D reconstructed pulmonary model obtained by preoperative simulation. A 78-year-old man was found to have a small solitary pulmonary nodule in the left upper lobe on chest computed tomography. A virtual 3-D pulmonary model showed the tumor involving two subsegments (S1+2c and S3a). Complete video-assisted thoracoscopic bi-subsegmentectomy was selected in simulation and was performed with lymph node dissection. A 3-D digital vision system was used for the 3-D thoracoscopic procedure. Wearing 3-D glasses, the surgical team observed the patient's reconstructed 3-D model on 3-D liquid-crystal displays and compared the 3-D intraoperative field with the reconstructed pulmonary model. PMID:24964426

  6. Evolving EO-1 Sensor Web Testbed Capabilities in Pursuit of GEOSS

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Ly, Vuong; Frye, Stuart; Younis, Mohamed

    2006-01-01

    A viewgraph presentation to evolve sensor web capabilities in pursuit of capabilities to support the Global Earth Observing System of Systems (GEOSS) is shown. The topics include: 1) Vision to Enable Sensor Webs with "Hot Spots"; 2) Vision Extended for Communication/Control Architecture for Missions to Mars; 3) Key Capabilities Implemented to Enable EO-1 Sensor Webs; 4) One of Three Experiments Conducted by UMBC Undergraduate Class 12-14-05 (1 - 3); 5) Closer Look at our Mini-Rovers and Simulated Mars Landscape at GSFC; 6) Beginning to Implement Experiments with Standards-Vision for Integrated Sensor Web Environment; 7) Goddard Mission Services Evolution Center (GMSEC); 8) GMSEC Component Catalog; 9) Core Flight System (CFS) and Extension for GMSEC for Flight SW; 10) Sensor Modeling Language; 11) Seamless Ground to Space Integrated Message Bus Demonstration (completed December 2005); 12) Other Experiments in Queue; 13) Acknowledgements; and 14) References.

  7. Harmony search optimization for HDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Panchal, Aditya

    In high-dose-rate (HDR) prostate brachytherapy, multiple catheters are inserted interstitially into the target volume. Treating the prostate involves determining the best dose distribution to the target and organs-at-risk by optimizing the time the radioactive source dwells at specified positions within the catheters. The goal of this work is to investigate a new optimization algorithm, Harmony Search, for optimizing dwell times in HDR prostate brachytherapy. The algorithm was tested on 9 patients and compared with the genetic algorithm. Simulations were performed to determine the optimal values of the Harmony Search parameters, and multithreading of the simulation was examined for potential benefits. First, a simulation environment was created in the Python programming language with the wxPython graphical interface toolkit, which was necessary to run repeated optimizations. DICOM RT data from Varian BrachyVision was parsed to obtain patient anatomy and HDR catheter information. Once the structures were indexed, the volume of each structure was computed and compared with the original volume calculated in BrachyVision for validation. Dose was calculated using the AAPM TG-43 point-source model of the GammaMed 192Ir HDR source and validated against Varian BrachyVision. A DVH-based objective function was created and used for the optimization simulation. Harmony Search and the genetic algorithm were implemented as optimization algorithms for the simulation and compared against each other. The optimal values of the Harmony Search parameters (Harmony Memory Size [HMS], Harmony Memory Considering Rate [HMCR], and Pitch Adjusting Rate [PAR]) were also determined. Lastly, the simulation was modified to use multiple threads of execution to achieve faster computation times.
    Experimental results show that the volume calculation implemented in this thesis was within 2% of the values computed by Varian BrachyVision for the prostate, within 3% for the rectum and bladder, and within 6% for the urethra. The calculated dose differed from BrachyVision by only 0.38%, and the generated isodose curves were similar to BrachyVision's. Over multiple data sets, Harmony Search was more than 4 times faster than the genetic algorithm. The optimal Harmony Memory Size was found to be 5 or lower, the optimal Harmony Memory Considering Rate 0.95, and the optimal Pitch Adjusting Rate 0.9. With computationally intensive steps such as optimization and dose calculation, the threads of execution scale with the number of processors, achieving a speed increase proportional to the number of processor cores. In conclusion, this work shows that Harmony Search is a viable alternative to existing algorithms for HDR prostate brachytherapy optimization. Coupled with the optimal parameters and a multithreaded simulation, it can significantly decrease the time spent in the clinic on time-intensive optimization problems such as brachytherapy, IMRT and beam-angle optimization.
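
    A minimal Harmony Search for a continuous minimization problem can be sketched as below, using the parameter values reported in the thesis (HMS = 5, HMCR = 0.95, PAR = 0.9). The objective here is the sphere function as a stand-in for the DVH-based dwell-time objective; the pitch bandwidth and iteration budget are assumptions.

```python
# Minimal Harmony Search sketch for continuous minimization. Each iteration
# improvises a new harmony: per dimension, pick from memory (prob. HMCR),
# optionally pitch-adjust it (prob. PAR), or re-randomize; keep it if it
# beats the worst harmony in memory.
import random

def harmony_search(f, dim, lo, hi, hms=5, hmcr=0.95, par=0.9,
                   bw=0.1, iters=5000, seed=1):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    memory.sort(key=f)
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                    # random re-selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        if f(new) < f(memory[-1]):                   # replace worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]

sphere = lambda v: sum(x * x for x in v)             # toy objective
best = harmony_search(sphere, dim=4, lo=-5.0, hi=5.0)
```

    In the brachytherapy setting, `v` would be the vector of dwell times and `f` the DVH-based objective instead of the sphere function.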

  8. Modeling, Simulation, and Characterization of Distributed Multi-Agent Systems

    DTIC Science & Technology

    2012-01-01

    capabilities (vision, LIDAR , differential global positioning, ultrasonic proximity sensing, etc.), the agents comprising a MAS tend to have somewhat lesser...on the simultaneous localization and mapping ( SLAM ) problem [19]. SLAM acknowledges that externally-provided localization information is not...continually-updated mapping databases, generates a comprehensive representation of the spatial and spectral environment. Many times though, inherent SLAM

  9. Simulation Platform for Vision Aided Inertial Navigation

    DTIC Science & Technology

    2014-09-18

    Brown , R. G., & Hwang , P. Y. (1992). Introduction to Random Signals and Applied Kalman Filtering (2nd ed.). New York: John Wiley & Son. Chowdhary, G...Parameters for Various Timing Standards ( Brown & Hwang , 1992...were then calculated using the true PVA information from the ASPN data. Next, a two-state clock from ( Brown & Hwang , 1992) was used to model the

  10. User Guide for VISION 3.4.7 (Verifiable Fuel Cycle Simulation) Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern

    2011-07-01

    The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters and options; the user is strongly encouraged to read this user guide before attempting to run the model. The model is an R&D work in progress, may contain errors and omissions, and is based on numerous assumptions. It is intended to assist in evaluating 'what if' scenarios and in comparing fuel, reactor, and fuel-processing alternatives at a systems level. The model is not intended as a tool for process flow and design modeling of specific facilities, nor for tracking individual units of fuel or other material through the system. Rather, it is intended to examine the interactions among the components of a fuel system as a function of time-varying system parameters; it represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear fuel cycle at the system level, not individual facilities: 'reactor types', not individual reactors, and 'separation types', not individual separation plants. Natural uranium can be enriched, producing enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separation or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. VISION comprises several Microsoft Excel input files, a Powersim Studio core, and several Microsoft Excel output files, all of which must be co-located in the same folder on a PC to function. Powersim Studio 8 or later is required; we have tested VISION with the Studio 8 Expert, Executive, and Education versions.
    The Expert and Education versions work with 3 or fewer reactor types; for more reactor types, the Executive version is currently required. The input files are in Excel 2003 format (xls); the output files are in macro-enabled Excel 2007 format (xlsm). VISION 3.4 was designed with more flexibility than previous versions, which were structured for only three reactor types: LWRs that can use only uranium oxide (UOX) fuel, LWRs that can use multiple fuel types (LWR MF), and fast reactors. One could not have, for example, two types of fast reactors concurrently. The new version allows 10 reactor types, and any user-defined uranium-plutonium fuel is allowed. (Thorium-based fuels can be input, but several features of the model would not work.) The user identifies, by year, the primary fuel to be used for each reactor type, and can identify for each primary fuel a contingent fuel to use if the primary fuel is not available; e.g., a reactor designated as using mixed oxide fuel (MOX) would have UOX as the contingent fuel. As another example, a fast reactor using recycled transuranic (TRU) material can be designated as either having or not having appropriately enriched uranium oxide as a contingent fuel. Because of the need to study evolution in recycling and separation strategies, the user can now select the recycling strategy and separation technology by year.

  11. Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)

    1993-01-01

    The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

  12. Low Earth Orbit Rendezvous Strategy for Lunar Missions

    NASA Technical Reports Server (NTRS)

    Cates, Grant R.; Cirillo, William M.; Stromgren, Chel

    2006-01-01

    On January 14, 2004 President George W. Bush announced a new Vision for Space Exploration calling for NASA to return humans to the moon. In 2005 NASA decided to use a Low Earth Orbit (LEO) rendezvous strategy for the lunar missions. A Discrete Event Simulation (DES) based model of this strategy was constructed. Results of the model were then used for subsequent analysis to explore the ramifications of the LEO rendezvous strategy.
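
    A Discrete Event Simulation of this kind can be sketched as a priority queue of timestamped events whose handlers schedule follow-on events. The event names and delays below are illustrative, not taken from the study.

```python
# Minimal Discrete Event Simulation sketch: events are (time, name) pairs in
# a heap; processing an event may schedule further events at later times.
import heapq

def simulate(initial, handlers):
    """Pop events in time order; handlers map an event to follow-on events."""
    queue = list(initial)
    heapq.heapify(queue)
    log = []
    while queue:
        t, name = heapq.heappop(queue)
        log.append((t, name))
        for delay, nxt in handlers.get(name, []):
            heapq.heappush(queue, (t + delay, nxt))
    return log

# Illustrative LEO-rendezvous event chain (times in days, invented values).
handlers = {
    "launch cargo vehicle": [(4.0, "launch crew vehicle")],
    "launch crew vehicle": [(1.0, "LEO rendezvous and dock")],
    "LEO rendezvous and dock": [(0.5, "TLI burn")],
}
timeline = simulate([(0.0, "launch cargo vehicle")], handlers)
```

    A production DES would add stochastic delays and resource contention (pads, processing facilities), which is where the strategy's ramifications show up.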

  13. A parametric duration model of the reaction times of drivers distracted by mobile phone conversations.

    PubMed

    Haque, Md Mazharul; Washington, Simon

    2014-01-01

    The use of mobile phones while driving is more prevalent among young drivers, a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q advanced driving simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21-26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Two different model specifications were also tested to account for the structured heterogeneity arising from the repeated-measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best-fitting model and identified four significant variables influencing reaction times: phone condition, driver's age, license type (provisional license holder or not), and self-reported frequency of handheld phone use while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open license holders.
A reduction in the ability to detect traffic events in the periphery whilst distracted presents a significant and measurable safety concern that will undoubtedly persist unless mitigated. Copyright © 2013 Elsevier Ltd. All rights reserved.
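
    The Weibull AFT formulation used in the study can be sketched on synthetic data: log reaction time is linear in the covariates plus a scaled Gumbel(minimum) error, and the coefficients are recovered by maximizing the likelihood. The data below are invented for illustration; the ~40% slowdown mirrors the reported effect size but is simulated, not the study's data, and this sketch omits the gamma heterogeneity term.

```python
# Weibull AFT sketch (no censoring, no frailty): log T = b0 + b1*x + sigma*eps
# with eps ~ standard Gumbel(min). exp(b1) is the time ratio for x = 1 vs 0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 400
distracted = rng.integers(0, 2, n)                 # 0 = baseline, 1 = on phone
eps = np.log(rng.weibull(1.0, n))                  # Gumbel(min) via Weibull(1)
t = np.exp(0.0 + 0.35 * distracted + 0.25 * eps)   # ~40% longer if distracted

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(t) - b0 - b1 * distracted) / sigma
    # Log-density of log T under the Gumbel(min) error model.
    return -np.sum(z - np.exp(z) - np.log(sigma))

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 4000})
b1_hat = fit.x[1]
time_ratio = np.exp(b1_hat)                        # distracted vs. baseline
```

    A repeated-measures version would add a subject-level gamma frailty to the likelihood, which is the "gamma heterogeneity" specification the study found best-fitting.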

  14. Comparison of vision through surface modulated and spatial light modulated multifocal optics.

    PubMed

    Vinas, Maria; Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; Benedi-Garcia, Clara; LaVilla, Edward Anthony; Schwiegerling, Jim; Marcos, Susana

    2017-04-01

    Spatial light modulators (SLMs) are increasingly used as active elements in adaptive optics (AO) systems to simulate optical corrections, in particular multifocal presbyopic corrections. In this study, we compared vision through lathe-manufactured multi-zone (2-4), angularly and radially segmented multifocal surfaces and through the same corrections simulated with an SLM in a custom-developed two-active-element AO visual simulator. We found that perceived visual quality measured through the real manufactured surfaces and the SLM-simulated phase maps corresponded closely. Optical simulations predicted differences in perceived visual quality across designs at far distance, but showed some discrepancies at intermediate and near.

  15. Comparison of vision through surface modulated and spatial light modulated multifocal optics

    PubMed Central

    Vinas, Maria; Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; Benedi-Garcia, Clara; LaVilla, Edward Anthony; Schwiegerling, Jim; Marcos, Susana

    2017-01-01

    Spatial light modulators (SLMs) are increasingly used as active elements in adaptive optics (AO) systems to simulate optical corrections, in particular multifocal presbyopic corrections. In this study, we compared vision through lathe-manufactured multi-zone (2-4), angularly and radially segmented multifocal surfaces and through the same corrections simulated with an SLM in a custom-developed two-active-element AO visual simulator. We found that perceived visual quality measured through the real manufactured surfaces and the SLM-simulated phase maps corresponded closely. Optical simulations predicted differences in perceived visual quality across designs at far distance, but showed some discrepancies at intermediate and near. PMID:28736655

  16. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can generate only low-resolution visual percepts, constituted of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. With this visual perception, prosthetic recipients can complete some simple visual tasks, but more complex tasks like face identification and object recognition are extremely difficult. It is therefore necessary to investigate and apply image processing strategies for optimizing the recipients' visual perception. This study focuses on recognition of the object of interest under simulated prosthetic vision. We used a saliency segmentation method, based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework, to automatically extract foreground objects. On this basis, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further applied to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired, interrelated objects in the scene. The use of the saliency segmentation method and the image processing strategies can automatically extract and enhance foreground objects and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
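
    The Direct Pixelization baseline that the two strategies are compared against can be sketched as block-averaging an image down to a coarse phosphene grid (grid size and test image are illustrative):

```python
# Direct Pixelization sketch for simulated prosthetic vision: reduce an image
# to a coarse grid of "phosphene" intensities by averaging each block.
import numpy as np

def pixelize(img, grid=(8, 8)):
    """Block-average the image down to one intensity per phosphene."""
    h, w = img.shape
    gh, gw = grid
    bh, bw = h // gh, w // gw
    return img[:gh * bh, :gw * bw].reshape(gh, bh, gw, bw).mean(axis=(1, 3))

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0          # a bright square "object of interest"
phos = pixelize(img)             # 8x8 grid of phosphene intensities
```

    The paper's strategies operate on top of this step: segmentation isolates the foreground before pixelization, so the limited phosphene budget is spent on the object rather than on background clutter.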

  17. Ureteroscopic skills with and without Roboflex Avicenna in the K-box® simulator.

    PubMed

    Proietti, Silvia; Dragos, Laurian; Emiliani, Esteban; Butticè, Salvatore; Talso, Michele; Baghdadi, Mohammed; Villa, Luca; Doizi, Steeve; Giusti, Guido; Traxer, Olivier

    2017-01-01

    The aim of this study was to evaluate the acquisition of basic ureteroscopic skills, with and without Roboflex Avicenna, by subjects with no prior surgical training. Ten medical students were divided into two groups: Group 1 was trained with Roboflex Avicenna and Group 2 with the flexible ureteroscope alone, using the K-box® simulator model. Participants were scored on whether they completed two exercises, with completion times recorded. In addition, the quality of each performance was evaluated on the following parameters: respect for the surrounding environment, flow of the operation, orientation, vision centering, and stability. The first exercise was completed by only three of five students in Group 1 and four of five in Group 2. Stability with the scope was significantly better in the first group than in the second (P = 0.02). There were no differences in timing, flow, or orientation between groups. Although not significant, a tendency toward respecting the surrounding tissue and maintaining centered vision was observed more in the first group. In the second exercise, there were no differences between groups in orientation, flow, respect for the surrounding tissue, stability, or the ability to maintain centered vision. Although not significantly, the second group tended to perform the exercise faster. According to these preliminary results, the acquisition of basic ureteroscopic skills with and without robotic fURS in the K-box® simulator, by subjects with no prior surgical training, is similar.

  18. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  19. An Augmented-Reality Edge Enhancement Application for Google Glass

    PubMed Central

    Hwang, Alex D.; Peli, Eli

    2014-01-01

    Purpose Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearers. The enhanced central vision can be naturally integrated with scanning. Methods Google Glass's camera lens distortions were corrected using image warping. Since the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angles are off by 10°, the warped camera image had to go through a series of 3D transformations to minimize parallax errors before the final projection to the Glass's see-through virtual display. All image processing was implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. Results For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance. The authors assume this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Conclusions Improvements were measured with simulated visual impairments. With the benefit of see-through augmented-reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration. PMID:24978871

  20. Towards an assistive peripheral visual prosthesis for long-term treatment of retinitis pigmentosa: evaluating mobility performance in immersive simulations

    NASA Astrophysics Data System (ADS)

    Zapf, Marc Patrick H.; Boon, Mei-Ying; Matteucci, Paul B.; Lovell, Nigel H.; Suaning, Gregg J.

    2015-06-01

    Objective. The prospective efficacy of a future peripheral retinal prosthesis complementing residual vision to raise mobility performance in non-end stage retinitis pigmentosa (RP) was evaluated using simulated prosthetic vision (SPV). Approach. Normally sighted volunteers were fitted with a wide-angle head-mounted display and carried out mobility tasks in photorealistic virtual pedestrian scenarios. Circumvention of low-lying obstacles, path following, and navigating around static and moving pedestrians were performed either with central simulated residual vision of 10° alone or enhanced by assistive SPV in the lower and lateral peripheral visual field (VF). Three layouts of assistive vision corresponding to hypothetical electrode array layouts were compared, emphasizing higher visual acuity, a wider visual angle, or eccentricity-dependent acuity across an intermediate angle. Movement speed, task time, distance walked and collisions with the environment were analysed as performance measures. Main results. Circumvention of low-lying obstacles was improved with all tested configurations of assistive SPV. Higher-acuity assistive vision allowed for greatest improvement in walking speeds—14% above that of plain residual vision, while only wide-angle and eccentricity-dependent vision significantly reduced the number of collisions—both by 21%. Navigating around pedestrians, there were significant reductions in collisions with static pedestrians by 33% and task time by 7.7% with the higher-acuity layout. Following a path, higher-acuity assistive vision increased walking speed by 9%, and decreased collisions with stationary cars by 18%. Significance. The ability of assistive peripheral prosthetic vision to improve mobility performance in persons with constricted VFs has been demonstrated. In a prospective peripheral visual prosthesis, electrode array designs need to be carefully tailored to the scope of tasks in which a device aims to assist. 
We posit that maximum benefit might come from application alongside existing visual aids, to further raise the quality of life of persons living through the prolonged early stages of RP.

  1. Simulating flow around scaled model of a hypersonic vehicle in wind tunnel

    NASA Astrophysics Data System (ADS)

    Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.

    2016-11-01

    A prospective hypersonic HEXAFLY aircraft is considered in this paper. To obtain the aerodynamic characteristics of a new design of the aircraft, experiments with a scaled model were carried out in a wind tunnel under different conditions. The runs were performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide complete information about the flowfield. Numerical simulation can complement the experimental data as well as reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used to calculate the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed by a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations are performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study confirms the capability of the FlowVision CFD software to calculate the flows discussed.

  2. Chemical Computer Man: Chemical Agent Response Simulation (CARS). Technical report, January 1983-September 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, E.G.; Mioduszewski, R.J.

    The Chemical Computer Man: Chemical Agent Response Simulation (CARS) is a computer model and simulation program for estimating the dynamic changes in human physiological dysfunction resulting from exposures to chemical-threat nerve agents. The newly developed CARS methodology simulates agent exposure effects on the following five indices of human physiological function: mental, vision, cardio-respiratory, visceral, and limbs. Mathematical models and the application of basic pharmacokinetic principles were incorporated into the simulation so that, for each chemical exposure, the relationship between exposure dosage, absorbed dosage (agent blood plasma concentration), and level of physiological response is computed as a function of time. CARS, as a simulation tool, is designed for users with little or no computer-related experience. The model combines maximum flexibility with a comprehensive, user-friendly, interactive menu-driven system. Users define an exposure problem and obtain immediate results displayed in tabular, graphical, and image formats. CARS has broad scientific and engineering applications, not only in technology for the soldier in the area of chemical defense, but also in minimizing animal testing in biomedical and toxicological research and in developing a modeling system for human exposure to hazardous-waste chemicals.

  3. Illumination-based synchronization of high-speed vision sensors.

    PubMed

    Hou, Lei; Kagami, Shingo; Hashimoto, Koichi

    2010-01-01

    To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with a jitter of 32 μs.
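    The locking behavior described above can be illustrated with a discrete-time PLL: a phase detector compares the reference and local phases, and a proportional-integral loop filter steers the local frequency until it matches the reference. This is a generic sketch, not the paper's circuit; the gains, step size, and frequencies are assumed for demonstration.

    ```python
    # Illustrative discrete-time PLL: a local clock (e.g. a frame clock)
    # is steered to lock onto an intensity-modulated reference. All
    # numeric parameters are assumptions, not values from the paper.
    import math

    def pll_lock(ref_freq, local_freq, steps=2000, dt=1e-4, kp=80.0, ki=4000.0):
        ref_phase = local_phase = 0.0
        integ = 0.0
        freq = local_freq
        for _ in range(steps):
            err = math.sin(ref_phase - local_phase)  # phase detector
            integ += err * dt
            freq = local_freq + kp * err + ki * integ  # PI loop filter
            ref_phase += 2 * math.pi * ref_freq * dt
            local_phase += 2 * math.pi * freq * dt
        return freq

    # A 980 Hz local clock pulls in to a 1000 Hz reference.
    locked = pll_lock(ref_freq=1000.0, local_freq=980.0)
    ```

    The integral term is what holds the steady-state frequency offset once the phase error has settled near zero, mirroring the "regulate the output signal to be synchronized with the reference" step in the abstract.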

  4. Image Understanding Architecture

    DTIC Science & Technology

    1991-09-01

    architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize... Image Understanding Architecture, Knowledge-Based Vision, AI Real-Time Computer Vision, Software Simulator, Parallel Processor... information. In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers

  5. Clipping polygon faces through a polyhedron of vision

    NASA Technical Reports Server (NTRS)

    Florence, Judit K. (Inventor); Rohner, Michel A. (Inventor)

    1980-01-01

    A flight simulator combines flight data and polygon face terrain data to provide a CRT display at each window of the simulated aircraft. The data base specifies the relative position of each vertex of each polygon face therein. Only those terrain faces currently appearing within the pyramid of vision defined by the pilot's eye and the edges of the pilot's window need be displayed at any given time. As the orientation of the pyramid of vision changes in response to flight data, the displayed faces are correspondingly displaced, eventually moving out of the pyramid of vision. Faces which are currently not visible (outside the pyramid of vision) are clipped from the data flow. In addition, faces which are only partially outside of the pyramid of vision are reconstructed to eliminate the outside portion. Window coordinates are generated defining the distance between each vertex and each of the boundary planes forming the pyramid of vision. The sign bit of each window coordinate indicates whether the vertex is on the pyramid of vision side of the associated boundary plane (positive), or on the other side thereof (negative). The set of sign bits accompanying each vertex constitutes the outcode of that vertex. The outcodes (O.C.) are systematically processed and examined to determine which faces are completely inside the pyramid of vision (Case A--all signs positive), which faces are completely outside (Case C--all signs negative), and which faces must be reconstructed (Case B--both positive and negative signs).
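    The outcode classification can be sketched in a simplified 2D analogue against a rectangular window (the patent works in 3D against the pyramid's boundary planes, but the sign-bit logic is the same). The window limits are illustrative assumptions; note that, as in Cohen-Sutherland clipping, the trivial-reject test used here ANDs the outcodes so a face is discarded only when all vertices lie outside the same boundary.

    ```python
    # Simplified 2D analogue of the outcode test: one sign bit per window
    # boundary per vertex; the per-face outcodes separate Case A (inside),
    # Case C (trivially outside), and Case B (needs reconstruction).
    XMIN, XMAX, YMIN, YMAX = 0.0, 1.0, 0.0, 1.0

    def outcode(x, y):
        code = 0
        if x < XMIN: code |= 1
        if x > XMAX: code |= 2
        if y < YMIN: code |= 4
        if y > YMAX: code |= 8
        return code

    def classify_face(vertices):
        codes = [outcode(x, y) for x, y in vertices]
        if all(c == 0 for c in codes):
            return "A"            # completely inside
        shared = codes[0]
        for c in codes[1:]:
            shared &= c
        if shared:
            return "C"            # all outside one common boundary: clip
        return "B"                # straddles a boundary: reconstruct

    print(classify_face([(0.2, 0.2), (0.8, 0.2), (0.5, 0.9)]))  # prints A
    ```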

  6. Terrain Portrayal for Synthetic Vision Systems Head-Down Displays Evaluation Results: Compilation of Pilot Transcripts

    NASA Technical Reports Server (NTRS)

    Hughes, Monica F.; Glaab, Louis J.

    2007-01-01

    The Terrain Portrayal for Head-Down Displays (TP-HDD) simulation experiment addressed multiple objectives involving twelve display concepts (two baseline concepts without terrain and ten synthetic vision system (SVS) variations), four evaluation maneuvers (two en route and one approach maneuver, plus a rare-event scenario), and three pilot group classifications. The TP-HDD SVS simulation was conducted in the NASA Langley Research Center's (LaRC's) General Aviation WorkStation (GAWS) facility. The results from this simulation establish the relationship between terrain portrayal fidelity and pilot situation awareness, workload, stress, and performance and are published in the NASA TP entitled Terrain Portrayal for Synthetic Vision Systems Head-Down Displays Evaluation Results. This is a collection of pilot comments during each run of the TP-HDD simulation experiment. These comments are not the full transcripts, but a condensed version where only the salient remarks that applied to the scenario, the maneuver, or the actual research itself were compiled.

  7. A method for identifying color vision deficiency malingering.

    PubMed

    Pouw, Andrew; Karanjia, Rustum; Sadun, Alfredo

    2017-03-01

    To propose a new test to identify color vision deficiency malingering. An online survey was distributed to 130 truly color vision deficient participants and 160 participants willing to simulate color vision deficiency. The survey contained three sets of six color-adjusted versions of the standard Ishihara color plates each, as well as one set of six control plates. The plates that best discriminated between the two participant groups were selected for a "balanced" test emphasizing both sensitivity and specificity. A "specific" test that prioritized high specificity was also created by selecting from these plates. Statistical measures of the test (sensitivity, specificity, and Youden index) were assessed at each possible cut-off threshold, and a receiver operating characteristic (ROC) function with its area under the curve (AUC) charted. The redshift plate set was identified as having the highest difference of means between groups (-58%, CI: -64 to -52%), as well as the widest gap between group modes. Statistical measures of the "balanced" test show an optimal cut-off of at least two incorrectly identified plates to suggest malingering (Youden index: 0.773, sensitivity: 83.3%, specificity: 94.0%, AUC of ROC 0.918). The "specific" test was able to identify color vision deficiency simulators with a specificity of 100% when using a cut-off of at least two incorrectly identified plates (Youden index 0.599, sensitivity 59.9%, specificity 100%, AUC of ROC 0.881). Our proposed test for identifying color vision deficiency malingering demonstrates a high degree of reliability with AUCs of 0.918 and 0.881 for the "balanced" and "specific" tests, respectively. A cut-off threshold of at least two missed plates on the "specific" test was able to identify color vision deficiency simulators with 100% specificity.
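    The cut-off selection step described above (Youden index J = sensitivity + specificity - 1 evaluated at each threshold) can be sketched as follows. The score lists are made-up counts of incorrectly identified plates, not the study's data.

    ```python
    # Hedged sketch of threshold selection by Youden index. "malingerers"
    # and "true_cvd" hold fabricated per-participant error counts.

    def youden_cutoffs(malingerers, true_cvd, thresholds):
        """For each threshold t, flag 'malingering' when errors >= t."""
        results = []
        for t in thresholds:
            tp = sum(1 for s in malingerers if s >= t)   # correctly flagged
            fn = len(malingerers) - tp
            tn = sum(1 for s in true_cvd if s < t)       # correctly cleared
            fp = len(true_cvd) - tn
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            results.append((t, sens, spec, sens + spec - 1))
        return results

    simulators = [2, 3, 4, 5, 2, 6, 1, 4]   # plates missed by simulators
    deficient  = [0, 1, 0, 2, 0, 1, 0, 1]   # plates missed by true CVD
    table = youden_cutoffs(simulators, deficient, thresholds=[1, 2, 3])
    best = max(table, key=lambda r: r[3])   # threshold maximizing J
    ```

    Sweeping thresholds this way also yields the (sensitivity, 1 - specificity) pairs that trace the ROC curve whose AUC the abstract reports.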

  8. Magician Simulator: A Realistic Simulator for Heterogenous Teams of Autonomous Robots. MAGIC 2010 Challenge

    DTIC Science & Technology

    2011-02-07

    Sensor UGVs (SUGV) or Disruptor UGVs, depending on their payload. The SUGVs included vision, GPS/IMU, and LIDAR systems for identifying and tracking... employed by all the MAGICian research groups. Objects of interest were tracked using standard LIDAR and computer vision template-based feature-tracking approaches. Mapping was solved through multi-agent particle-filter based Simultaneous Localization and Mapping (SLAM). Our system contains

  9. Energy modelling in sensor networks

    NASA Astrophysics Data System (ADS)

    Schmidt, D.; Krämer, M.; Kuhn, T.; Wehn, N.

    2007-06-01

    Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy-efficient communication protocols. Models of energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model-driven design process. We propose a novel methodology to create models for sensor nodes based on a few simple measurements. In a case study, the methodology was used to create models for MICAz nodes. The models were integrated into a simulation environment as well as into an SDL runtime framework of a model-driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power saving strategies.

  10. Understanding, creating, and managing complex techno-socio-economic systems: Challenges and perspectives

    NASA Astrophysics Data System (ADS)

    Helbing, D.; Balietti, S.; Bishop, S.; Lukowicz, P.

    2011-05-01

    This contribution reflects on the comments of Peter Allen [1], Bikas K. Chakrabarti [2], Péter Érdi [3], Juval Portugali [4], Sorin Solomon [5], and Stefan Thurner [6] on three White Papers (WP) of the EU Support Action Visioneer (www.visioneer.ethz.ch). These White Papers are entitled "From Social Data Mining to Forecasting Socio-Economic Crises" (WP 1) [7], "From Social Simulation to Integrative System Design" (WP 2) [8], and "How to Create an Innovation Accelerator" (WP 3) [9]. In our reflections, the need and feasibility of a "Knowledge Accelerator" is further substantiated by fundamental considerations and recent events around the globe. The Visioneer White Papers propose research to be carried out that will improve our understanding of complex techno-socio-economic systems and their interaction with the environment. Thereby, they aim to stimulate multi-disciplinary collaborations between ICT, the social sciences, and complexity science. Moreover, they suggest combining the potential of massive real-time data, theoretical models, large-scale computer simulations and participatory online platforms. By doing so, it would become possible to explore various futures and to expand the limits of human imagination when it comes to the assessment of the often counter-intuitive behavior of these complex techno-socio-economic-environmental systems. In this contribution, we also highlight the importance of a pluralistic modeling approach and, in particular, the need for a fruitful interaction between quantitative and qualitative research approaches. In an appendix we briefly summarize the concept of the FuturICT flagship project, which will build on and go beyond the proposals made by the Visioneer White Papers. EU flagships are ambitious multi-disciplinary high-risk projects with a duration of at least 10 years amounting to an envisaged overall budget of 1 billion EUR [10]. 
The goal of the FuturICT flagship initiative is to understand and manage complex, global, socially interactive systems, with a focus on sustainability and resilience.

  11. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.

    PubMed

    Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu

    2016-01-01

    Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because wearers are still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods to convey more useful visual information to them. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. GrabCut then generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways--8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy in comparison with direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under the bad segmentation condition, only BEE boosted performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. They are expected to aid the development of the image processing module for future retinal prostheses, and thus provide more benefit for the patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  12. Modeling the convergence accommodation of stereo vision for binocular endoscopy.

    PubMed

    Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin

    2018-02-01

    The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.

  13. IEEE 1982. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-01-01

    The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.

  14. Distinctive convergence in Australian floral colours seen through the eyes of Australian birds.

    PubMed

    Burd, Martin; Stayton, C Tristan; Shrestha, Mani; Dyer, Adrian G

    2014-04-22

    We used a colour-space model of avian vision to assess whether a distinctive bird pollination syndrome exists for floral colour among Australian angiosperms. We also used a novel phylogenetically based method to assess whether such a syndrome represents a significant degree of convergent evolution. About half of the 80 species in our sample that attract nectarivorous birds had floral colours in a small, isolated region of colour space characterized by an emphasis on long-wavelength reflection. The distinctiveness of this 'red arm' region was much greater when colours were modelled for violet-sensitive (VS) avian vision than for the ultraviolet-sensitive visual system. Honeyeaters (Meliphagidae) are the dominant avian nectarivores in Australia and have VS vision. Ancestral state reconstructions suggest that 31 lineages evolved into the red arm region, whereas simulations indicate that an average of five or six lineages and a maximum of 22 are likely to have entered in the absence of selection. Thus, significant evolutionary convergence on a distinctive floral colour syndrome for bird pollination has occurred in Australia, although only a subset of bird-pollinated taxa belongs to this syndrome. The visual system of honeyeaters has been the apparent driver of this convergence.

  15. A Local Vision on Soil Hydrology (John Dalton Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Roth, K.

    2012-04-01

    After briefly looking back at some research trails of the past decades, and touching on the role of soils in our environmental machinery, a vision of the future of soil hydrology is offered. It is local in the sense of being based on limited experience, as well as in the sense of focusing on local spatial scales, from 1 m to 1 km. Cornerstones of this vision are (i) the rapid development of quantitative observation technology, illustrated with the example of ground-penetrating radar (GPR), and (ii) the availability of ever more powerful computing facilities, which make it possible to simulate increasingly complicated model representations in unprecedented detail. Together, they open a powerful and flexible approach to the quantitative understanding of soil hydrology in which two lines are fitted: (i) potentially diverse measurements of the system of interest and their analysis, and (ii) a comprehensive model representation, including architecture, material properties, forcings, and potentially unknown aspects, together with the same analysis as for (i). This approach pushes traditional inversion to operate on analyses, not on the underlying state variables, and to become flexible with respect to architecture and unknown aspects. The approach is demonstrated for simple situations at test sites.

  16. Color vision deficiency compensation for Visual Processing Disorder using Hardy-Rand-Rittler test and color transformation

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Pinugu, Jasmine Nadja J.; Bautista, Joshua Ian C.; Nebres, Pauline D.; Rey Hipolito, Cipriano M.; Santella, Jose Anthony A.

    2017-06-01

    Visual processing skills are used to gather visual information from the environment; in some cases, however, Visual Processing Disorder (VPD) occurs. Visual figure-ground discrimination is a type of VPD in which color is one of the contributing factors. Color plays a vital role in everyday living, but individuals with limited and inaccurate color perception suffer from Color Vision Deficiency (CVD) and are often unaware of their condition. To address this, this study focuses on the design of KULAY, a Head-Mounted Display (HMD) device that can assess whether a user has CVD through the standard Hardy-Rand-Rittler (HRR) test. This test uses pattern recognition to evaluate the user. In addition, color vision deficiency simulation and color correction through color transformation are also concerns of this research. This enables people with normal color vision to see how the color vision deficient perceive, and vice versa. For accuracy, the results of the simulated HRR assessment were validated against an actual assessment done by a doctor. Moreover, for the preciseness of the color transformation, the Structural Similarity Index Method (SSIM) was used to compare the simulated CVD images and the color-corrected images against other reference sources. The outputs of the simulated HRR assessment and color transformation show very promising results, indicating the effectiveness and efficiency of the study. Thus, due to its form factor and portability, this device is beneficial in the fields of medicine and technology.

  17. Simulation of glioblastoma multiforme (GBM) tumor cells using ising model on the Creutz Cellular Automaton

    NASA Astrophysics Data System (ADS)

    Züleyha, Artuç; Ziya, Merdan; Selçuk, Yeşiltaş; Kemal, Öztürk M.; Mesut, Tez

    2017-11-01

    Computational models of tumors face difficulties due to the complexity of tumor biology and the capacities of computational tools; nevertheless, these models provide insight into the interactions between a tumor and its microenvironment. Moreover, computational models have the potential to inform strategies for individualized cancer treatment. To study a solid brain tumor, glioblastoma multiforme (GBM), we present a two-dimensional Ising model applied on the Creutz cellular automaton (CCA). The aim of this study is to analyze avascular spherical solid tumor growth, treating transitions between non-tumor cells and cancer cells as analogous to phase transitions in a physical system. The Ising model on the CCA algorithm provides a deterministic approach with discrete time steps and local interactions in position space to view tumor growth as a function of time. Our simulation results are given for fixed tumor radius and are compatible with theoretical and clinical data.
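    The microcanonical flavor of the Creutz cellular automaton can be sketched as follows: a spin flips only if a "demon" energy reservoir can absorb or supply the energy change, so total energy is conserved. The mapping of +1/-1 spins to tumor/normal cells, the lattice size, coupling, and demon cap are all illustrative assumptions, not the paper's parameters.

    ```python
    # Minimal microcanonical Ising sweep in the spirit of the Creutz
    # cellular automaton. All numeric parameters are assumptions.
    import random

    def creutz_sweep(spins, demon, j=1.0, demon_max=8.0):
        n = len(spins)
        for _ in range(n * n):
            x, y = random.randrange(n), random.randrange(n)
            nb = (spins[(x + 1) % n][y] + spins[(x - 1) % n][y] +
                  spins[x][(y + 1) % n] + spins[x][(y - 1) % n])
            de = 2.0 * j * spins[x][y] * nb  # energy cost of flipping
            # Flip only if the demon can pay for (or absorb) the change.
            if de <= demon and demon - de <= demon_max:
                spins[x][y] = -spins[x][y]
                demon -= de
        return demon

    random.seed(1)
    n = 16
    spins = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    demon = 4.0
    for _ in range(50):
        demon = creutz_sweep(spins, demon)
    ```

    Because the flip rule is purely local and conserves lattice-plus-demon energy, the update is deterministic given a site sequence, which is the property the abstract highlights for tracking tumor growth over discrete time steps.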

  18. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to autonomously capture a non-cooperative target. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to joint speed limits. This approach effectively eliminates the ambiguity of multiple inverse-kinematics solutions and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
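    The incremental idea (small joint-space steps toward the desired end-effector position, clamped by joint speed limits) can be illustrated on a planar two-link arm using a Jacobian-transpose update. This is a generic sketch of incremental kinematic control, not the paper's 6-DOF method; link lengths, gain, and the step limit are assumptions.

    ```python
    # Illustrative incremental IK for a planar 2-link arm: each call moves
    # the joints a clamped step toward the target. Parameters are assumed.
    import math

    L1, L2 = 1.0, 1.0
    MAX_STEP = 0.05          # joint "speed limit" per increment (rad)

    def fk(q1, q2):
        """Forward kinematics: end-effector position."""
        x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
        y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
        return x, y

    def ik_increment(q1, q2, target, gain=0.5):
        x, y = fk(q1, q2)
        ex, ey = target[0] - x, target[1] - y
        # Jacobian-transpose update, clamped to the joint speed limit
        j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
        j12 = -L2 * math.sin(q1 + q2)
        j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
        j22 = L2 * math.cos(q1 + q2)
        dq1 = gain * (j11 * ex + j21 * ey)
        dq2 = gain * (j12 * ex + j22 * ey)
        clamp = lambda d: max(-MAX_STEP, min(MAX_STEP, d))
        return q1 + clamp(dq1), q2 + clamp(dq2)

    q1, q2 = 0.3, 0.6
    target = (1.2, 0.8)       # stand-in for the predicted desired position
    for _ in range(400):      # incremental motion toward the target
        q1, q2 = ik_increment(q1, q2, target)
    err = math.hypot(target[0] - fk(q1, q2)[0], target[1] - fk(q1, q2)[1])
    ```

    Because the configuration only ever moves by small increments from its current state, there is a single continuous joint trajectory, which is how the incremental formulation avoids jumping between multiple closed-form IK solutions.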

  19. Understanding of and applications for robot vision guidance at KSC

    NASA Technical Reports Server (NTRS)

    Shawaga, Lawrence M.

    1988-01-01

    The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.

  20. Predicting Visual Disability in Glaucoma With Combinations of Vision Measures.

    PubMed

    Lin, Stephanie; Mihailovic, Aleksandra; West, Sheila K; Johnson, Chris A; Friedman, David S; Kong, Xiangrong; Ramulu, Pradeep Y

    2018-04-01

    We characterized vision in glaucoma using seven visual measures, with the goals of determining the dimensionality of vision, and how many and which visual measures best model activity limitation. We analyzed cross-sectional data from 150 older adults with glaucoma, collecting seven visual measures: integrated visual field (IVF) sensitivity, visual acuity, contrast sensitivity (CS), area under the log CS function, color vision, stereoacuity, and visual acuity with noise. Principal component analysis was used to examine the dimensionality of vision. Multivariable regression models using one, two, or three vision tests (and nonvisual predictors) were compared to determine which was best associated with Rasch-analyzed Glaucoma Quality of Life-15 (GQL-15) person measure scores. The participants had a mean age of 70.2 years and a mean IVF sensitivity of 26.6 dB, suggesting mild-to-moderate glaucoma. All seven vision measures loaded similarly onto the first principal component (eigenvectors, 0.220-0.442), which explained 56.9% of the variance in vision scores. In models for GQL scores, the maximum adjusted-R2 values obtained were 0.263, 0.296, and 0.301 when using one, two, and three vision tests in the models, respectively, though several models in each category had similar adjusted-R2 values. All three of the best-performing models contained CS. Vision in glaucoma is a multidimensional construct that can be described by several variably correlated vision measures. Measuring more than two vision tests does not substantially improve models for activity limitation. A sufficient description of disability in glaucoma can be obtained using one to two vision tests, especially VF and CS.

  1. Perceptual Performance Impact of GPU-Based WARP and Anti-Aliasing for Image Generators

    DTIC Science & Technology

    2016-06-29

    In 2012 the U.S. Air Force School of Aerospace Medicine, in partnership with the Air Force Research Laboratory (AFRL) and NASA AMES, constructed the 15-channel Operational Based Vision Assessment (OBVA) simulator.

  2. Experimental results in autonomous landing approaches by dynamic machine vision

    NASA Astrophysics Data System (ADS)

    Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.

    1994-07-01

    The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to on-board autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction-error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control outputs are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin-turboprop aircraft Do128 under perturbations from cross winds and wind gusts. The software package has been ported to `C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision with two cameras of different focal length mounted fixed relative to each other on a two-axes platform for viewing direction control.

  3. Autonomous proximity operations using machine vision for trajectory control and pose estimation

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Sternberg, Stanley R.

    1991-01-01

    A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer and using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
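    A radial-transform signature of the kind described can be sketched as follows (the angular binning, centroid choice, and downstream decision-tree matching are assumptions, not the paper's implementation):

```python
import math

def radial_signature(points, n_angles=8):
    """Radial transform of an object silhouette: for each angular bin
    around the centroid, record the farthest contour point.  Silhouettes
    of the same view produce similar signatures, which selected points of
    a decision tree can then compare against stored references."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    sig = [0.0] * n_angles
    for x, y in points:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        b = int(ang / (2 * math.pi) * n_angles) % n_angles
        sig[b] = max(sig[b], math.hypot(x - cx, y - cy))
    return sig
```

    Because distances are measured from the centroid, the signature scales linearly with object size, which is convenient for range estimation once a view is recognized.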

  4. CapDEM TD - Modeling and Simulation (Role and Tools) State of the Art Report

    DTIC Science & Technology

    2005-01-01

    Fragmentary front matter; the recoverable citation is Gartner, Inc., "Magic Quadrant for BPA," 2004 (Figure 3-5, refs. [55]-[56]).

  5. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
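    The scene-to-electrode encoding suggested by this approach can be sketched roughly as follows, assuming a 3 × 3 phosphene grid (nine electrodes) and a single lit electrode marking the detected object's location (an illustrative simplification, not the authors' simulator):

```python
def phosphene_code(obj_x, obj_y, img_w, img_h, grid=3):
    """Encode a detected object's image location as activations on a small
    phosphene grid: light only the electrode whose cell contains the
    object's centre.  High-level localization upstream means the implant
    needs to convey only this one position, not the whole scene."""
    col = min(grid - 1, int(obj_x / img_w * grid))
    row = min(grid - 1, int(obj_y / img_h * grid))
    code = [[0] * grid for _ in range(grid)]
    code[row][col] = 1
    return code
```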

  6. Central and peripheral vision loss differentially affects contextual cueing in visual search.

    PubMed

    Geringswald, Franziska; Pollmann, Stefan

    2015-09-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as a source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma. (c) 2015 APA, all rights reserved.

  7. Color Vision Changes and Effects of High Contrast Visor Use at Simulated Cabin Altitudes

    DTIC Science & Technology

    2016-06-08

    Following Institutional Review Board approval, a reduced oxygen breathing device was used to expose subjects with normal color vision to simulated cabin altitudes and assess whether high contrast visor use results in further degradation of color vision under these conditions.

  8. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  9. Modeling intrinsic electrophysiology of AII amacrine cells: preliminary results.

    PubMed

    Apollo, Nick; Grayden, David B; Burkitt, Anthony N; Meffin, Hamish; Kameneva, Tatiana

    2013-01-01

    In patients who have lost their photoreceptors due to retinal degenerative diseases, it is possible to restore rudimentary vision by electrically stimulating surviving neurons. AII amacrine cells, which reside in the inner plexiform layer, split the signal from rod bipolar cells into ON and OFF cone pathways. As a result, it is of interest to develop a computational model to aid in the understanding of how these cells respond to the electrical stimulation delivered by a prosthetic implant. The aim of this work is to develop and constrain parameters in a single-compartment model of an AII amacrine cell using data from whole-cell patch clamp recordings. This model will be used to explore responses of AII amacrine cells to electrical stimulation. Single-compartment Hodgkin-Huxley-type neural models are simulated in the NEURON environment. Simulations showed successful reproduction of the potassium current-voltage relationship and some of the spiking properties observed in vitro.
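    A single-compartment Hodgkin-Huxley-type model of the sort simulated in NEURON can be sketched in Python as follows; the rate functions and conductances below are the classic squid-axon values, not the AII amacrine parameters fitted from patch-clamp data in the paper:

```python
import math

def hh_step(v, m, h, n, i_ext, dt=0.01):
    """One forward-Euler step (dt in ms) of a single-compartment
    Hodgkin-Huxley model with standard squid-axon kinetics; v in mV,
    C_m = 1 uF/cm^2, i_ext in uA/cm^2."""
    alpha_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    beta_m  = 4.0 * math.exp(-(v + 65.0) / 18.0)
    alpha_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    beta_h  = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    alpha_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    beta_n  = 0.125 * math.exp(-(v + 65.0) / 80.0)
    i_na = 120.0 * m**3 * h * (v - 50.0)   # sodium current
    i_k  = 36.0 * n**4 * (v + 77.0)        # potassium current
    i_l  = 0.3 * (v + 54.387)              # leak current
    v += dt * (i_ext - i_na - i_k - i_l)
    m += dt * (alpha_m * (1.0 - m) - beta_m * m)
    h += dt * (alpha_h * (1.0 - h) - beta_h * h)
    n += dt * (alpha_n * (1.0 - n) - beta_n * n)
    return v, m, h, n
```

    Constraining such a model to AII amacrine data amounts to replacing these conductances and rate functions with values fitted to the whole-cell recordings.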

  10. Simulation Test of a Head-Worn Display with Ambient Vision Display for Unusual Attitude Recovery

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis (Trey) J., III; Nicholas, Stephanie N.; Shelton, Kevin J.; Ballard, Kathryn; Prinzel, Lawrence J., III; Ellis, Kyle E.; Bailey, Randall E.; Williams, Steven P.

    2017-01-01

    Head-Worn Displays (HWDs) are envisioned as a possible equivalent to a Head-Up Display (HUD) in commercial and general aviation. A simulation experiment was conducted to evaluate whether the HWD can provide an equivalent or better level of performance to a HUD in terms of unusual attitude recognition and recovery. A prototype HWD with an ambient vision capability, which was varied (on/off) as an independent variable, was tested for attitude awareness. The simulation experiment was conducted in two parts: 1) short unusual attitude recovery scenarios where the aircraft is placed in an unusual attitude and a single-pilot crew recovered the aircraft; and, 2) a two-pilot crew operating in a realistic flight environment with "off-nominal" events to induce unusual attitudes. The data showed few differences in unusual attitude recognition and recovery performance between the tested head-down, head-up, and head-worn display concepts. The effect of the presence or absence of ambient vision stimulation was inconclusive. The ergonomic influences of the head-worn display, necessary to implement the ambient vision experimentation, may have influenced the pilot ratings and acceptance of the concepts.

  11. Vision and Driving

    PubMed Central

    Owsley, Cynthia; McGwin, Gerald

    2010-01-01

    Driving is the primary means of personal travel in many countries and relies heavily on vision for its successful execution. Research over the past few decades has addressed the role of vision in driver safety (motor vehicle collision involvement) and in driver performance (both on-road and using interactive simulators in the laboratory). Here we critically review what is currently known about the role of various aspects of visual function in driving. We also discuss translational research issues on vision screening for licensure and re-licensure and rehabilitation of visually impaired persons who want to drive. PMID:20580907

  12. Temporal multiplexing with adaptive optics for simultaneous vision

    PubMed Central

    Papadatou, Eleni; Del Águila-Carrasco, Antonio J.; Marín-Franch, Iván; López-Gil, Norberto

    2016-01-01

    We present and test a methodology for generating simultaneous vision with a deformable mirror that changed shape at 50 Hz between two vergences: 0 D (far vision) and −2.5 D (near vision). Different bifocal designs, including toric and combinations of spherical aberration, were simulated and assessed objectively. We found that the typical corneal aberrations of a 60-year-old subject change the shape of the objective through-focus curves of a perfect bifocal lens. This methodology can be used to investigate subjective visual performance for different multifocal contact or intraocular lens designs. PMID:27867718

  13. Emergence of a rehabilitation medicine model for low vision service delivery, policy, and funding.

    PubMed

    Stelmack, Joan

    2005-05-01

    A rehabilitation medicine model for low vision rehabilitation is emerging. There have been many challenges to reaching consensus on the roles of each discipline (optometry, ophthalmology, occupational therapy, and vision rehabilitation professionals) in the service delivery model and finding a place in the reimbursement system for all the providers. The history of low vision, legislation associated with Centers for Medicare and Medicaid Services coverage for vision rehabilitation, and research on the effectiveness of low vision service delivery are reviewed. Vision rehabilitation is now covered by Medicare under Physical Medicine and Rehabilitation codes by some Medicare carriers, yet reimbursement is not available for low vision devices or refraction. Also, the role of vision rehabilitation professionals (rehabilitation teachers, orientation and mobility specialists, and low vision therapists) in the model needs to be determined. In a recent systematic review of the scientific literature on the effectiveness of low vision services contracted by the Agency for Health Care Quality Research, no clinical trials were found. The literature consists primarily of longitudinal case studies, which provide weak support for third-party funding for vision rehabilitative services. Providers need to reach consensus on medical necessity, treatment plans, and protocols. Research on low vision outcomes is needed to develop an evidence base to guide clinical practice, policy, and funding decisions.

  14. Nonlinearity analysis of measurement model for vision-based optical navigation system

    NASA Astrophysics Data System (ADS)

    Li, Jianguo; Cui, Hutao; Tian, Yang

    2015-02-01

    In an autonomous optical navigation system based on line-of-sight vector observation, the nonlinearity of the measurement model is highly correlated with navigation performance. By quantitatively calculating the degree of nonlinearity of the focal-plane model and the unit-vector model, this paper determines which optical measurement model performs better. First, measurement equations and measurement noise statistics of these two line-of-sight measurement models are established based on the perspective-projection co-linearity equation. Then the nonlinear effects of the measurement model on filter performance are analyzed within the framework of the extended Kalman filter, and the degrees of nonlinearity of the two measurement models are compared using the curvature measure theory from differential geometry. Finally, a simulation of star-tracker-based attitude determination is presented to confirm the superiority of the unit-vector measurement model. Simulation results show that the curvature measure of nonlinearity is consistent with the filter performance, and that the unit-vector measurement model yields higher estimation precision and faster convergence.
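    The two line-of-sight measurement models being compared can be sketched as follows (pinhole geometry assumed; the paper's noise statistics and curvature computation are omitted):

```python
import math

def focal_plane_measurement(r, f=1.0):
    """Focal-plane model: perspective projection of the line-of-sight
    vector r = (x, y, z) onto the image plane of a pinhole camera with
    focal length f.  The division by z is the source of its stronger
    nonlinearity."""
    x, y, z = r
    return (f * x / z, f * y / z)

def unit_vector_measurement(r):
    """Unit-vector model: the same line of sight expressed as a direction
    of norm 1, which varies more gently with the state and so behaves
    more linearly inside an EKF update."""
    x, y, z = r
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

    Both measurements discard range (they are invariant to scaling of r), which is exactly the information a single line-of-sight observation cannot provide.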

  15. Generation of RGB-D data for SLAM using robotic framework V-REP

    NASA Astrophysics Data System (ADS)

    Gritsenko, Pavel S.; Gritsenko, Igor S.; Seidakhmet, Askar Zh.; Abduraimov, Azizbek E.

    2017-09-01

    In this article, we present a methodology for debugging RGB-D SLAM systems and for generating test data. We created a model of a laboratory with an area of 250 m2 (25 × 10 m) containing a set of objects of different types. The V-REP Microsoft Kinect sensor simulation model was used as the basis for the robot vision system. The motion path of the sensor model has multiple loops. We wrote a program in V-REP's native language, Lua, to record a data array from the Microsoft Kinect sensor model. The array includes both RGB and depth streams at full resolution (640 × 480) for every 10 cm of the path. The simulated path has absolute accuracy, since it is a simulation, and is represented by an array of transformation matrices (4 × 4). The length of the data array is 1000 steps, or 100 m. The path simulates cases that occur frequently in SLAM, including loops. It is worth noting that the path was modeled for a mobile robot and is represented by a 2D path parallel to the floor at a height of 40 cm.
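    The 10 cm recording interval can be reproduced with logic along these lines (an illustrative Python rendering; the authors' actual recorder is a Lua script inside V-REP):

```python
import math

def sample_every(path, step=0.10):
    """Keep a pose sample each time the sensor has travelled `step`
    metres of accumulated distance along the path, mirroring the 10 cm
    recording interval used with the simulated Kinect.  `path` is a list
    of (x, y) positions; the real recorder stores full 4x4 transforms."""
    samples = [path[0]]
    acc = 0.0
    for prev, cur in zip(path, path[1:]):
        acc += math.dist(prev, cur)
        if acc >= step:
            samples.append(cur)
            acc = 0.0
    return samples
```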

  16. Technology transfer of operator-in-the-loop simulation

    NASA Technical Reports Server (NTRS)

    Yae, K. H.; Lin, H. C.; Lin, T. C.; Frisch, H. P.

    1994-01-01

    The technology developed for operator-in-the-loop simulation in space teleoperation has been applied to Caterpillar's backhoe, wheel loader, and off-highway truck. On an SGI workstation, the simulation integrates computer modeling of kinematics and dynamics, real-time computation and visualization, and an interface with the operator through the operator's console. The console is interfaced with the workstation through an IBM-PC in which the operator's commands are digitized and sent through an RS-232 serial port. The simulation gave visual feedback adequate for the operator in the loop, with the camera's field of vision projected on a large screen in multiple view windows. The view control can emulate either stationary or moving cameras. This simulator created an innovative engineering design environment by integrating computer software and hardware with the human operator's interactions. The backhoe simulation has been adopted by Caterpillar in building a virtual reality tool for backhoe design.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Jason K.; Jacobson, Jacob J.; Cafferty, Kara G.

    In order to increase the sustainability and security of the nation’s energy supply, the U.S. Department of Energy through its Bioenergy Technology Office has set a vision for one billion tons of biomass to be processed for renewable energy and bioproducts annually by the year 2030. The Renewable Fuels Standard limits the amount of corn grain that can be used in ethanol conversion sold in the U.S., which is already at its maximum. Therefore, making the DOE’s vision a reality requires significant growth in the advanced biofuels industry, where currently three cellulosic biorefineries convert cellulosic biomass to ethanol. Risk mitigation is central to growing the industry beyond its infancy to a level necessary to achieve the DOE vision. This paper focuses on reducing the supply risk that faces a firm that owns a cellulosic biorefinery. It uses risk theory and simulation modeling to build a risk assessment model based on causal relationships of underlying, uncertain, supply-driving variables. Using the model, the paper quantifies the supply risk reduction achieved by converting the supply chain from a conventional supply system (bales and trucks) to an advanced supply system (depots, pellets, and trains). Results imply that the advanced supply system reduces supply system risk, defined as the probability of a unit cost overrun, from 83% in the conventional system to 4% in the advanced system. Reducing cost risk in this nascent industry improves the odds of realizing desired growth.
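    The risk metric used here, the probability of a unit cost overrun, can be estimated by Monte Carlo sampling along these lines; the normal cost model and all numbers below are illustrative assumptions, not the study's fitted supply-chain distributions:

```python
import random

def overrun_probability(mean, sd, budget, n=100_000, seed=42):
    """Monte Carlo estimate of supply-system risk in the paper's sense:
    the probability that realized unit cost exceeds the budgeted unit
    cost, here under an assumed normal cost distribution."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.gauss(mean, sd) > budget)
    return hits / n

# Illustrative comparison: a wide cost spread (conventional bales-and-trucks
# system) versus a tighter, lower-mean spread (advanced depot/pellet system)
# against the same budgeted unit cost.  Numbers are invented for the sketch.
conventional = overrun_probability(mean=88.0, sd=12.0, budget=80.0)
advanced     = overrun_probability(mean=78.0, sd=4.0,  budget=80.0)
```

    Tightening the cost distribution (and shifting its mean below budget) is what drives the overrun probability down, which is the mechanism behind the 83% to 4% reduction reported in the abstract.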

  18. Simulating the Mind

    NASA Astrophysics Data System (ADS)

    Dietrich, Dietmar; Fodor, Georg; Zucker, Gerhard; Bruckner, Dietmar

    The approach to developing models described in the following chapters breaks with some of the approaches previously used in Artificial Intelligence. This is the first attempt to use methods from psychoanalysis, organized in a strictly top-down design method, to take an important step towards the creation of intelligent systems. Hence, the vision and the research hypothesis are described at the outset and will hopefully prove to provide sufficient grounds for this approach.

  19. Manufacturing at the Nanoscale. Report of the National Nanotechnology Initiative Workshops, 2002-2004

    DTIC Science & Technology

    2007-01-01

    The report's vision is to employ the novel properties and processes associated with the nanoscale in manufacturing. Open questions include whether nanoscale properties remain once nanostructures are positioned, assembled, and integrated up to the microscale, and how such properties can be measured. Theory, modeling, and simulation software are being developed to investigate nanoscale material properties and the synthesis of macromolecular systems.

  20. Vision based control of unmanned aerial vehicles with applications to an autonomous four-rotor helicopter, quadrotor

    NASA Astrophysics Data System (ADS)

    Altug, Erdinc

    Our work proposes a vision-based stabilization and output tracking control method for a model helicopter. This is part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Because of the desired maneuvering ability, a four-rotor helicopter has been chosen as the testbed. In previous research on flying vehicles, vision is usually used as a secondary sensor. Unlike previous research, our goal is to use visual feedback as the main sensor, responsible not only for detecting where the ground objects are but also for helicopter localization. A novel two-camera method has been introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter. This two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback-linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or the presence of large errors on the image plane. Finally, we present initial flight experiments in which the proposed pose estimation algorithm and nonlinear control techniques were implemented on a remote-controlled helicopter. The helicopter was restricted by a tether to vertical and yaw motions and limited x and y translations.

  1. Aural-Nondetectability Model Predictions for Night-Vision Goggles across Ambient Lighting Conditions

    DTIC Science & Technology

    2015-12-01

    ARL-TR-7564, December 2015, US Army Research Laboratory: Aural-Nondetectability Model Predictions for Night-Vision Goggles across Ambient Lighting Conditions (reporting period May 2015-30 Sep 2015).

  2. A Standalone Vision Impairments Simulator for Java Swing Applications

    NASA Astrophysics Data System (ADS)

    Oikonomou, Theofanis; Votis, Konstantinos; Korn, Peter; Tzovaras, Dimitrios; Likothanasis, Spriridon

    A lot of work has been done lately in an attempt to assess accessibility. For web rich-client applications, several tools exist that simulate how a vision-impaired or colour-blind person would perceive the content. In this work we propose a simulation tool for non-web Java™ Swing applications. Developers and designers face a real challenge when creating software that has to cope with many interaction situations, as well as specific directives for ensuring accessible interaction. The proposed standalone tool will assist them in exploring user-centered design and important accessibility issues for their Java™ Swing implementations.

  3. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and the visual information integrality of a curve are significant factors for drivers' perception-response time.
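    A multinomial log-linear (softmax) prediction of the kind described can be sketched as follows; the category names, feature names, and weights are invented for illustration and are not the study's fitted model:

```python
import math

def predict_category(features, weights):
    """Multinomial log-linear prediction: each perception-response-time
    category gets a linear score over the features, and a softmax turns
    the scores into class probabilities."""
    scores = {}
    total = 0.0
    for cat, w in weights.items():
        s = math.exp(sum(w.get(k, 0.0) * v for k, v in features.items()))
        scores[cat] = s
        total += s
    return {cat: s / total for cat, s in scores.items()}
```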

  4. Stereoscopy for visual simulation of materials of complex appearance

    NASA Astrophysics Data System (ADS)

    da Graça, Fernando; Paljic, Alexis; Lafon-Pham, Dominique; Callet, Patrick

    2014-03-01

    The present work studies the role of stereoscopy in the perceived surface aspect of computer-generated complex materials. The objective is to investigate if, and how, the additional information conveyed by binocular vision affects the observer's judgment in the evaluation of flake density in an effect-paint simulation. We set up a heuristic flake model with a Voronoi modelization of flakes. The model was implemented in our rendering engine using global illumination and ray tracing, with an off-axis frustum method for the calculation of stereo images. We conducted a user study based on a flake density discrimination task to determine perception thresholds (JNDs). Results show that stereoscopy slightly improves density perception. We propose an analysis methodology based on granulometry, which allows for a discussion of the results on the basis of scales of observation.

  5. A position and attitude vision measurement system for wind tunnel slender model

    NASA Astrophysics Data System (ADS)

    Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi

    2014-11-01

    A position and attitude vision measurement system for a drop-test slender model in a wind tunnel is designed and developed. The system uses two high-speed cameras: one placed to the side of the model and the other positioned to look up at the model. Simple symbols are set on the model. The main idea of the system is image matching between projection images of the 3D digital model and the images captured by the cameras. First, we evaluate the pitch angles, the roll angles, and the position of the centroid of the model by recognizing symbols in the images captured by the side camera. Then, based on the evaluated attitude information and a series of candidate yaw angles, a series of projection images of the 3D digital model is obtained. Finally, these projection images are matched with the image captured by the looking-up camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments are conducted and the results show that the maximal error of attitude measurement is less than 0.05°, which can meet the demands of wind tunnel testing.
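    The yaw-matching step can be sketched as a template search over candidate projections; toy binary images and a sum-of-absolute-differences score stand in here for the paper's rendering and matching metric:

```python
def best_yaw(captured, projections):
    """Attitude-by-matching: given the look-up camera image and a set of
    projection images of the 3D digital model rendered at candidate yaw
    angles, return the yaw whose projection best matches the capture
    (minimum sum of absolute pixel differences on these toy images)."""
    def score(img_a, img_b):
        return sum(abs(a - b)
                   for row_a, row_b in zip(img_a, img_b)
                   for a, b in zip(row_a, row_b))
    return min(projections, key=lambda yaw: score(captured, projections[yaw]))
```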

  6. A prospective comparison of phakic collamer lenses and wavefront-optimized laser-assisted in situ keratomileusis for correction of myopia

    PubMed Central

    Parkhurst, Gregory D

    2016-01-01

    Purpose The aim of this study was to evaluate and compare night vision and low-luminance contrast sensitivity (CS) in patients undergoing implantation of phakic collamer lenses or wavefront-optimized laser-assisted in situ keratomileusis (LASIK). Patients and methods This is a nonrandomized, prospective study, in which 48 military personnel were recruited. Rabin Super Vision Test was used to compare the visual acuity and CS of Visian implantable collamer lens (ICL) and LASIK groups under normal and low light conditions, using a filter for simulated vision through night vision goggles. Results Preoperative mean spherical equivalent was −6.10 D in the ICL group and −6.04 D in the LASIK group (P=0.863). Three months postoperatively, super vision acuity (SVa), super vision acuity with (low-luminance) goggles (SVaG), super vision contrast (SVc), and super vision contrast with (low luminance) goggles (SVcG) significantly improved in the ICL and LASIK groups (P<0.001). Mean improvement in SVaG at 3 months postoperatively was statistically significantly greater in the ICL group than in the LASIK group (mean change [logarithm of the minimum angle of resolution, LogMAR]: ICL =−0.134, LASIK =−0.085; P=0.032). Mean improvements in SVc and SVcG were also statistically significantly greater in the ICL group than in the LASIK group (SVc mean change [logarithm of the CS, LogCS]: ICL =0.356, LASIK =0.209; P=0.018 and SVcG mean change [LogCS]: ICL =0.390, LASIK =0.259; P=0.024). Mean improvement in SVa at 3 months was comparable in both groups (P=0.154). Conclusion Simulated night vision improved with both ICL implantation and wavefront-optimized LASIK, but improvements were significantly greater with ICLs. These differences may be important in a military setting and may also affect satisfaction with civilian vision correction. PMID:27418804

  7. A prospective comparison of phakic collamer lenses and wavefront-optimized laser-assisted in situ keratomileusis for correction of myopia.

    PubMed

    Parkhurst, Gregory D

    2016-01-01

    The aim of this study was to evaluate and compare night vision and low-luminance contrast sensitivity (CS) in patients undergoing implantation of phakic collamer lenses or wavefront-optimized laser-assisted in situ keratomileusis (LASIK). This is a nonrandomized, prospective study, in which 48 military personnel were recruited. Rabin Super Vision Test was used to compare the visual acuity and CS of Visian implantable collamer lens (ICL) and LASIK groups under normal and low light conditions, using a filter for simulated vision through night vision goggles. Preoperative mean spherical equivalent was -6.10 D in the ICL group and -6.04 D in the LASIK group (P=0.863). Three months postoperatively, super vision acuity (SVa), super vision acuity with (low-luminance) goggles (SVaG), super vision contrast (SVc), and super vision contrast with (low luminance) goggles (SVcG) significantly improved in the ICL and LASIK groups (P<0.001). Mean improvement in SVaG at 3 months postoperatively was statistically significantly greater in the ICL group than in the LASIK group (mean change [logarithm of the minimum angle of resolution, LogMAR]: ICL =-0.134, LASIK =-0.085; P=0.032). Mean improvements in SVc and SVcG were also statistically significantly greater in the ICL group than in the LASIK group (SVc mean change [logarithm of the CS, LogCS]: ICL =0.356, LASIK =0.209; P=0.018 and SVcG mean change [LogCS]: ICL =0.390, LASIK =0.259; P=0.024). Mean improvement in SVa at 3 months was comparable in both groups (P=0.154). Simulated night vision improved with both ICL implantation and wavefront-optimized LASIK, but improvements were significantly greater with ICLs. These differences may be important in a military setting and may also affect satisfaction with civilian vision correction.

  8. [Comparison of the Pressure on the Larynx and Tongue Using McGRATH® MAC Video Laryngoscope--Direct Vision versus Indirect Vision].

    PubMed

    Tanaka, Yasutomo; Miyazaki, Yukiko; Kitakata, Hidenori; Shibuya, Hiromi; Okada, Toshiki

    2015-12-01

Studies show that McGRATH® MAC (McG) is useful during direct laryngoscopy. However, no study has examined whether McG reduces pressure on the upper airway tract. We compared direct vision with indirect vision with respect to pressure on the larynx and tongue. Twenty-two anesthesiologists and 16 junior residents attempted direct laryngoscopy on an airway management simulator using McG with direct vision and indirect vision. Pressure was measured using pressure measurement film. In the anesthesiologists group, pressure on the larynx was 14.8 ± 2.7 kgf · cm(-2) with direct vision and 12.7 ± 2.7 kgf · cm(-2) with indirect vision (P < 0.05). Pressure on the tongue was 8.8 ± 3.2 kgf · cm(-2) with direct vision and 7.6 ± 2.8 kgf · cm(-2) with indirect vision (P = 0.18). In the junior residents group, pressure on the larynx was 19.0 ± 1.3 kgf · cm(-2) with direct vision and 14.1 ± 3.1 kgf · cm(-2) with indirect vision (P < 0.05). Pressure on the tongue was 15.4 ± 3.6 kgf · cm(-2) with direct vision and 11.2 ± 4.7 kgf · cm(-2) with indirect vision (P < 0.05). McG with indirect vision can reduce pressure on the upper airway tract.

  9. What aspects of vision facilitate haptic processing?

    PubMed

    Millar, Susanna; Al-Attar, Zainab

    2005-12-01

    We investigate how vision affects haptic performance when task-relevant visual cues are reduced or excluded. The task was to remember the spatial location of six landmarks that were explored by touch in a tactile map. Here, we use specially designed spectacles that simulate residual peripheral vision, tunnel vision, diffuse light perception, and total blindness. Results for target locations differed, suggesting additional effects from adjacent touch cues. These are discussed. Touch with full vision was most accurate, as expected. Peripheral and tunnel vision, which reduce visuo-spatial cues, differed in error pattern. Both were less accurate than full vision, and significantly more accurate than touch with diffuse light perception, and touch alone. The important finding was that touch with diffuse light perception, which excludes spatial cues, did not differ from touch without vision in performance accuracy, nor in location error pattern. The contrast between spatially relevant versus spatially irrelevant vision provides new, rather decisive, evidence against the hypothesis that vision affects haptic processing even if it does not add task-relevant information. The results support optimal integration theories, and suggest that spatial and non-spatial aspects of vision need explicit distinction in bimodal studies and theories of spatial integration.

  10. Heading assessment by “tunnel vision” patients and control subjects standing or walking in a virtual reality environment

    PubMed Central

    APFELBAUM, HENRY; PELAH, ADAR; PELI, ELI

    2007-01-01

    Virtual reality locomotion simulators are a promising tool for evaluating the effectiveness of vision aids to mobility for people with low vision. This study examined two factors to gain insight into the verisimilitude requirements of the test environment: the effects of treadmill walking and the suitability of using controls as surrogate patients. Ten “tunnel vision” patients with retinitis pigmentosa (RP) were tasked with identifying which side of a clearly visible obstacle their heading through the virtual environment would lead them, and were scored both on accuracy and on their distance from the obstacle when they responded. They were tested both while walking on a treadmill and while standing, as they viewed a scene representing progress through a shopping mall. Control subjects, each wearing a head-mounted field restriction to simulate the vision of a paired patient, were also tested. At wide angles of approach, controls and patients performed with a comparably high degree of accuracy, and made their choices at comparable distances from the obstacle. At narrow angles of approach, patients’ accuracy increased when walking, while controls’ accuracy decreased. When walking, both patients and controls delayed their decisions until closer to the obstacle. We conclude that a head-mounted field restriction is not sufficient for simulating tunnel vision, but that the improved performance observed for walking compared to standing suggests that a walking interface (such as a treadmill) may be essential for eliciting natural perceptually-guided behavior in virtual reality locomotion simulators. PMID:18167511

  11. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  12. A feedback model of visual attention.

    PubMed

    Spratling, M W; Johnson, M H

    2004-03-01

    Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.

  13. A Synthetic Vision Preliminary Integrated Safety Analysis

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Houser, Scott

    2001-01-01

    This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.

  14. Robust Kalman filtering cooperated Elman neural network learning for vision-sensing-based robotic manipulation with global stability.

    PubMed

    Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu

    2013-10-08

In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF) in conjunction with Elman neural network (ENN) learning techniques. The global mapping between the vision space and the robotic workspace is learned using an ENN, and this learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained by using a robust KF to improve the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair (obtained from the KF cycle) to ensure globally stable manipulation. Thus, our method, which requires neither camera nor model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with an eye-in-hand configuration.
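The servoing loop described above can be sketched in miniature. The fragment below is a hypothetical toy, not the paper's implementation: it replaces the ENN and robust KF with a Broyden-style rank-1 correction that refines a crude initial Jacobian estimate from observed motion pairs, capturing the same idea of converging on a desired pose without camera or model parameters. All names, the 2-DOF setting, and the constants are invented for illustration.

```python
# Uncalibrated visual servoing with an online-refined image Jacobian.
# The true camera/robot mapping TRUE_J is hidden from the controller.

def mat_vec(J, v):
    return [J[0][0]*v[0] + J[0][1]*v[1], J[1][0]*v[0] + J[1][1]*v[1]]

def solve2(J, b):
    # Solve the 2x2 system J x = b via Cramer's rule.
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    return [(b[0]*J[1][1] - b[1]*J[0][1]) / det,
            (J[0][0]*b[1] - J[1][0]*b[0]) / det]

TRUE_J = [[1.5, 0.2], [-0.3, 1.1]]   # unknown vision/workspace mapping

def observe(q):
    return mat_vec(TRUE_J, q)        # image features for joint vector q

def servo(target, steps=50, gain=0.5):
    q = [0.0, 0.0]
    J = [[1.0, 0.0], [0.0, 1.0]]     # crude initial Jacobian guess
    s = observe(q)
    for _ in range(steps):
        err = [s[0] - target[0], s[1] - target[1]]
        dq = solve2(J, [-gain*err[0], -gain*err[1]])
        q = [q[0] + dq[0], q[1] + dq[1]]
        s_new = observe(q)
        ds = [s_new[0] - s[0], s_new[1] - s[1]]
        # Rank-1 correction: make J consistent with the newest
        # (dq, ds) pair, mimicking a recursive filter update.
        denom = dq[0]**2 + dq[1]**2
        if denom > 1e-12:
            pred = mat_vec(J, dq)
            for i in range(2):
                r = ds[i] - pred[i]
                J[i][0] += r * dq[0] / denom
                J[i][1] += r * dq[1] / denom
        s = s_new
    return s

final = servo([2.0, -1.0])   # features converge to the target
```

Because the hidden mapping is linear here, the rank-1 updates quickly align the estimated Jacobian with the true one along visited directions, and the Newton-like steps then contract the feature error geometrically.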

  15. A vision-based system for measuring the displacements of large structures: Simultaneous adaptive calibration and full motion estimation

    NASA Astrophysics Data System (ADS)

    Santos, C. Almeida; Costa, C. Oliveira; Batista, J.

    2016-05-01

The paper describes a kinematic model-based solution for simultaneously estimating the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely long suspension bridge decks, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, supporting structural health monitoring requirements. Results of the performance evaluation, obtained by numerical simulation and real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimal setup, comprising only two cameras and four non-coplanar tracking points, showed highly accurate on-line camera calibration and structure full-motion estimation.

  16. Individual Colorimetric Observer Model

    PubMed Central

    Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent

    2016-01-01

    This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905
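The Monte Carlo idea above can be illustrated with a deliberately toy model. The sketch below samples eight perturbations per observer and applies them to invented Gaussian-shaped cone curves; the curve shapes, the pre-receptoral filtering rule, and all standard deviations are placeholders, not the CIE 2006 functions or the paper's fitted variabilities.

```python
import math, random

PEAKS = {"L": 570.0, "M": 545.0, "S": 445.0}   # toy peak wavelengths (nm)

def cone_sensitivity(wl, peak, width=50.0, density_scale=1.0):
    # Toy photopigment absorption curve with adjustable optical density.
    absorb = math.exp(-((wl - peak) ** 2) / (2 * width ** 2))
    return 1.0 - (1.0 - absorb) ** density_scale

def sample_observer(rng):
    # Eight perturbations: lens density, macular density, three cone
    # optical densities, three lambda-max shifts (all made-up sigmas).
    return {
        "lens":    rng.gauss(1.0, 0.15),
        "macular": rng.gauss(1.0, 0.20),
        "dens":  {c: rng.gauss(1.0, 0.10) for c in PEAKS},
        "shift": {c: rng.gauss(0.0, 1.5) for c in PEAKS},
    }

def observer_cmf(obs, wavelengths):
    # Build individual L/M/S sensitivity curves for one sampled observer.
    cmf = {}
    for cone, peak in PEAKS.items():
        curve = []
        for wl in wavelengths:
            s = cone_sensitivity(wl, peak + obs["shift"][cone],
                                 density_scale=obs["dens"][cone])
            # Crude pre-receptoral filtering, stronger at short wavelengths.
            filt = math.exp(-(obs["lens"] + obs["macular"])
                            * max(0.0, 500 - wl) / 500.0)
            curve.append(s * filt)
        cmf[cone] = curve
    return cmf

rng = random.Random(0)
wavelengths = range(400, 701, 10)
population = [observer_cmf(sample_observer(rng), wavelengths)
              for _ in range(100)]   # simulated color-normal population
```

Sampling many observers this way yields a spread of sensitivity curves, from which ranges of color matches (rather than a single average match) can be read off.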

  17. Effects of color vision deficiency on detection of color-highlighted targets in a simulated air traffic control display.

    DOT National Transportation Integrated Search

    1992-01-01

    The present study sought to evaluate the effects of color vision deficiency on the gain in conspicuity that is realized when color-highlighting is added as a redundant cue to indicate the presence of unexpected, nontracked aircraft intruding in contr...

  18. The Malcolm horizon: History and future

    NASA Technical Reports Server (NTRS)

    Malcolm, R.

    1984-01-01

The development of the Malcolm Horizon, a peripheral vision horizon display used in flight simulation, is discussed. A history of the horizon display is presented, along with a brief overview of vision physiology and the role balance plays in spatial orientation. Avenues of continued research in subconscious cockpit instrumentation are examined.

  19. 2013-2363

    NASA Image and Video Library

    2013-05-15

(left to right) NASA Langley aerospace engineer Bruce Jackson briefs astronauts Rex Walheim and Gregory Johnson about the Synthetic Vision (SV) and Enhanced Vision (EV) systems in a flight simulator at the center's Cockpit Motion Facility. The astronauts were training to land the Dream Chaser spacecraft on May 15, 2013. Credit: NASA/David C. Bowman

  20. Low-Latency Embedded Vision Processor (LLEVS)

    DTIC Science & Technology

    2016-03-01

Excerpt (table of contents): 3.2.3 Task 3, Projected Performance Analysis of FPGA-based Vision Processor; 3.2.3.1 Algorithms Latency Analysis; Field Programmable Gate Array Custom Hardware for Real-Time Multiresolution Analysis. The data acquired through measurement, simulation, and estimation provide the requisite platform for performance projections.

  1. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

  2. Thalamocortical dynamics of the McCollough effect: boundary-surface alignment through perceptual learning.

    PubMed

    Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio

    2002-05-01

    This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.

  3. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  4. Multiple Optical Filter Design Simulation Results

    NASA Astrophysics Data System (ADS)

    Mendelsohn, J.; Englund, D. C.

    1986-10-01

In this paper we continue our investigation of the application of matched filters to robotic vision problems; specifically, we are concerned with the tray-picking problem. Our principal interest is the examination of summation effects that arise when the matched-filter memory size is reduced by averaging matched filters. While matched filtering for pattern recognition or machine vision is ideally implemented using optics and optical correlators, the results in this paper were obtained through a digital simulation of the optical process.

  5. The effects of simulated vision impairments on the cone of gaze.

    PubMed

    Hecht, Heiko; Hörichs, Jenny; Sheldon, Sarah; Quint, Jessilin; Bowers, Alex

    2015-10-01

    Detecting the gaze direction of others is critical for many social interactions. We explored factors that may make the perception of mutual gaze more difficult, including the degradation of the stimulus and simulated vision impairment. To what extent do these factors affect the complex assessment of mutual gaze? Using an interactive virtual head whose eye direction could be manipulated by the subject, we conducted two experiments to assess the effects of simulated vision impairments on mutual gaze. Healthy subjects had to demarcate the center and the edges of the cone of gaze-that is, the range of gaze directions that are accepted for mutual gaze. When vision was impaired by adding a semitransparent white contrast reduction mask to the display (Exp. 1), judgments became more variable and more influenced by the head direction (indicative of a compensation strategy). When refractive blur was added (Exp. 1), the gaze cone shrank from 12.9° (no blur) to 11.3° (3-diopter lens), which cannot be explained by a low-level process but might reflect a tightening of the criterion for mutual gaze as a response to the increased uncertainty. However, the overall effects of the impairments were relatively modest. Elderly subjects (Exp. 2) produced more variability but did not differ qualitatively from the younger subjects. In the face of artificial vision impairments, compensation mechanisms and criterion changes allow us to perform better in mutual gaze perception than would be predicted by a simple extrapolation from the losses in basic visual acuity and contrast sensitivity.

  6. A conceptual model for vision rehabilitation

    PubMed Central

    Roberts, Pamela S.; Rizzo, John-Ross; Hreha, Kimberly; Wertheimer, Jeffrey; Kaldenberg, Jennifer; Hironaka, Dawn; Riggs, Richard; Colenbrander, August

    2017-01-01

    Vision impairments are highly prevalent after acquired brain injury (ABI). Conceptual models that focus on constructing intellectual frameworks greatly facilitate comprehension and implementation of practice guidelines in an interprofessional setting. The purpose of this article is to provide a review of the vision literature in ABI, describe a conceptual model for vision rehabilitation, explain its potential clinical inferences, and discuss its translation into rehabilitation across multiple practice settings and disciplines. PMID:27997671

  7. A conceptual model for vision rehabilitation.

    PubMed

    Roberts, Pamela S; Rizzo, John-Ross; Hreha, Kimberly; Wertheimer, Jeffrey; Kaldenberg, Jennifer; Hironaka, Dawn; Riggs, Richard; Colenbrander, August

    2016-01-01

    Vision impairments are highly prevalent after acquired brain injury (ABI). Conceptual models that focus on constructing intellectual frameworks greatly facilitate comprehension and implementation of practice guidelines in an interprofessional setting. The purpose of this article is to provide a review of the vision literature in ABI, describe a conceptual model for vision rehabilitation, explain its potential clinical inferences, and discuss its translation into rehabilitation across multiple practice settings and disciplines.

  8. Spatial multibody modeling and vehicle dynamics analysis of advanced vehicle technologies

    NASA Astrophysics Data System (ADS)

    Letherwood, Michael D.; Gunter, David D.; Gorsich, David J.; Udvare, Thomas B.

    2004-08-01

The US Army vision, announced in October of 1999, encompasses people, readiness, and transformation. The goal of the Army vision is to transition the entire Army into a force that is strategically responsive and dominant at every point of the spectrum of operations. The transformation component will be accomplished in three ways: the Objective Force, the Legacy (current) Force, and the Interim Force. The Objective Force is not platform driven; rather, the focus is on achieving capabilities that will operate as a "system of systems." As part of the Objective Force, the US Army plans to begin production of the Future Combat System (FCS) in FY08 and field the first unit by FY10, as currently defined in the FCS solicitation(1). As part of the FCS program, the Future Tactical Truck System (FTTS) encompasses all US Army tactical wheeled vehicles, and its initial efforts will focus only on the heavy class. The National Automotive Center (NAC) is using modeling and simulation to demonstrate the feasibility and operational potential of advanced commercial and military technologies with application to new and existing tactical vehicles, and to describe potential future vehicle capabilities. This document presents the results of computer-based vehicle dynamics performance assessments of FTTS concepts with such features as hybrid power sources, active suspensions, skid steering, and in-hub electric drive motors. Fully three-dimensional FTTS models are being created using commercially available modeling and simulation methodologies such as ADAMS and DADS, and limited vehicle dynamics validation studies will be performed.

  9. Can surgical simulation be used to train detection and classification of neural networks?

    PubMed

    Zisimopoulos, Odysseas; Flouty, Evangello; Stacey, Mark; Muscroft, Sam; Giataganas, Petros; Nehme, Jean; Chow, Andre; Stoyanov, Danail

    2017-10-01

Computer-assisted interventions (CAI) aim to increase the effectiveness, precision and repeatability of procedures to improve surgical outcomes. The presence and motion of surgical tools is a key information input for CAI surgical phase recognition algorithms. Vision-based tool detection and recognition approaches are an attractive solution and can be designed to take advantage of the powerful deep learning paradigm that is rapidly advancing image recognition and classification. The challenge for such algorithms is the availability and quality of labelled data used for training. In this Letter, surgical simulation is used to train tool detection and segmentation based on deep convolutional neural networks and generative adversarial networks. The authors experiment with two network architectures for image segmentation in tool classes commonly encountered during cataract surgery. A commercially available simulator is used to create a simulated cataract dataset for training models prior to performing transfer learning on real surgical data. To the best of the authors' knowledge, this is the first attempt to train deep learning models for surgical instrument detection on simulated data, with promising results in generalising to real data. Results indicate that simulated data does have some potential for training advanced classification methods for CAI systems.

  10. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.

  11. DYNAMICO, an atmospheric dynamical core for high-performance climate modeling

    NASA Astrophysics Data System (ADS)

    Dubos, Thomas; Meurdesoif, Yann; Spiga, Aymeric; Millour, Ehouarn; Fita, Lluis; Hourdin, Frédéric; Kageyama, Masa; Traore, Abdoul-Khadre; Guerlet, Sandrine; Polcher, Jan

    2017-04-01

Institut Pierre Simon Laplace has developed a very scalable atmospheric dynamical core, DYNAMICO, based on energy-conserving finite-difference/finite-volume numerics on a quasi-uniform icosahedral-hexagonal mesh. Scalability is achieved by combining hybrid MPI/OpenMP parallelism with asynchronous I/O. This dynamical core has been coupled to radiative transfer physics tailored to the atmosphere of Saturn, allowing unprecedented simulations of the climate of this giant planet. For terrestrial climate studies, DYNAMICO is being integrated into the IPSL Earth System Model IPSL-CM. Preliminary aquaplanet and AMIP-style simulations yield reasonable results when compared to outputs from IPSL-CM5. The observed performance suggests that an order of magnitude may be gained with respect to IPSL-CM CMIP5 simulations, in either the duration of simulations or their resolution. Longer simulations would be of interest for the study of paleoclimate, while higher resolution could improve certain aspects of the modeled climate such as extreme events, as will be explored in the HighResMIP project. Following IPSL's strategic vision of building a unified global-regional modelling system, a fully-compressible, non-hydrostatic prototype of DYNAMICO has been developed, enabling future convection-resolving simulations. Work supported by ANR project "HEAT", grant number CE23_2014_HEAT. Dubos, T., Dubey, S., Tort, M., Mittal, R., Meurdesoif, Y., and Hourdin, F.: DYNAMICO-1.0, an icosahedral hydrostatic dynamical core designed for consistency and versatility, Geosci. Model Dev., 8, 3131-3150, doi:10.5194/gmd-8-3131-2015, 2015.

  12. Software Simulates Sight: Flat Panel Mura Detection

    NASA Technical Reports Server (NTRS)

    2008-01-01

In the increasingly sophisticated world of high-definition flat screen monitors and television screens, image clarity and the elimination of distortion are paramount concerns. As the devices that reproduce images become more and more sophisticated, so do the technologies that verify their accuracy. By simulating the manner in which a human eye perceives and interprets a visual stimulus, NASA scientists have found ways to automatically and accurately test new monitors and displays. The Spatial Standard Observer (SSO) software metric, developed by Dr. Andrew B. Watson at Ames Research Center, measures visibility and defects in screens, displays, and interfaces. In the design of such a software tool, a central challenge is determining which aspects of visual function to include: while accuracy and generality are important, relative simplicity of the software module is also a key virtue. Based on data collected in ModelFest, a large cooperative multi-lab project hosted by the Optical Society of America, the SSO simulates a simplified model of human spatial vision, operating on a pair of images that are viewed at a specific viewing distance with pixels having a known relation to luminance. The SSO measures the visibility of foveal spatial patterns, or the discriminability of two patterns, by incorporating only a few essential components of vision. These components include local contrast transformation, a contrast sensitivity function, local masking, and local pooling. By this construction, the SSO provides output in units of "just noticeable differences" (JND), a unit of measure based on the assumed smallest difference of sensory input detectable by a human being. Herein lies the truly remarkable ability of the SSO: while conventional methods merely manipulate images, the SSO models human perception. This set of equations defines a mathematical way of working with an image that accurately reflects the way in which the human eye and mind behold a stimulus.
The SSO is intended for a wide variety of applications, such as evaluating vision from unmanned aerial vehicles, measuring visibility of damage to aircraft and to the space shuttles, predicting outcomes of corrective laser eye surgery, inspecting displays during the manufacturing process, estimating the quality of compressed digital video, evaluating legibility of text, and predicting discriminability of icons or symbols in a graphical user interface.
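
The processing stages named above (local contrast transformation, contrast sensitivity weighting, and local pooling) can be illustrated with a toy sketch. This is not the actual SSO: the real metric operates on calibrated 2-D luminance images with parameters fitted to the ModelFest data, while the `csf_weight` curve and the Minkowski pooling exponent below are illustrative assumptions.

```python
import math

def local_contrast(img, mean_lum):
    # Weber-style local contrast: deviation from the mean luminance
    return [(p - mean_lum) / mean_lum for p in img]

def csf_weight(freq_cpd):
    # Toy band-pass contrast sensitivity curve (assumed shape)
    return freq_cpd * math.exp(-freq_cpd / 4.0)

def jnd_score(img_a, img_b, freq_cpd=4.0, beta=2.0):
    # Weight contrast differences by sensitivity, then Minkowski-pool
    mean_lum = sum(img_a) / len(img_a)
    ca = local_contrast(img_a, mean_lum)
    cb = local_contrast(img_b, mean_lum)
    w = csf_weight(freq_cpd)
    return sum(abs(w * (a - b)) ** beta for a, b in zip(ca, cb)) ** (1.0 / beta)

base = [100, 102, 98, 100]
assert jnd_score(base, base) == 0.0            # identical images: 0 JND
assert jnd_score(base, [100, 110, 98, 100]) > 0  # any difference scores positive
```

Identical inputs score zero JND; the score grows with the sensitivity-weighted contrast difference between the two images.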

  13. Self calibration of the stereo vision system of the Chang'e-3 lunar rover based on the bundle block adjustment

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan

    2017-06-01

    The Chang'e-3 was the first lunar soft-landing probe of China. It was composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover completed movement, imaging and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism and the inertial measurement unit (IMU). The Navcam system was composed of two fixed-focal-length cameras. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field could be built to calibrate the stereo vision system in a laboratory on the earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking and the landing. Therefore, the stereo vision system should be self calibrated on the moon. An integrated self calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. The stereo vision system can be self calibrated with the proposed method under the unknown lunar environment and all parameters can be estimated simultaneously. The experiment was conducted in the ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The analyzed results proved that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was put to practical use to self calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.
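
The bundle block adjustment described above minimizes reprojection error over every bundle of rays simultaneously. As a minimal sketch of the underlying least-squares idea, the toy below estimates a single intrinsic parameter (the focal length of a 1-D pinhole camera) from known points; it is a one-parameter stand-in, not the paper's joint estimation of all intrinsic, extrinsic and kinematic parameters.

```python
def project(f, c, X, Z):
    # Pinhole projection of a 3D point onto a 1-D image coordinate
    return f * (X / Z) + c

def estimate_focal(points, observations, c):
    # Least-squares estimate of f from reprojection residuals.
    # The model is linear in f, so the normal equation is closed-form:
    # f = sum((u - c) * x) / sum(x^2), with x = X/Z
    num = sum((u - c) * (X / Z) for (X, Z), u in zip(points, observations))
    den = sum((X / Z) ** 2 for (X, Z) in points)
    return num / den

# Synthetic check: recover the focal length that generated the observations
true_f, c = 800.0, 320.0
pts = [(0.1, 1.0), (-0.2, 2.0), (0.3, 1.5)]
obs = [project(true_f, c, X, Z) for X, Z in pts]
assert abs(estimate_focal(pts, obs, c) - true_f) < 1e-9
```

Real bundle adjustment replaces this closed form with sparse nonlinear least squares over all cameras, joints and points at once.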

  14. Quantifying Supply Risk at a Cellulosic Biorefinery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Jason K; Jacobson, Jacob Jordan; Cafferty, Kara Grace

    In order to increase the sustainability and security of the nation’s energy supply, the U.S. Department of Energy through its Bioenergy Technology Office has set a vision for one billion tons of biomass to be processed for renewable energy and bioproducts annually by the year 2030. The Renewable Fuels Standard limits the amount of corn grain that can be used in ethanol conversion sold in the U.S., a limit that has already been reached. Therefore, making the DOE’s vision a reality requires significant growth in the advanced biofuels industry, where currently three cellulosic biorefineries convert cellulosic biomass to ethanol. Risk mitigation is central to growing the industry beyond its infancy to a level necessary to achieve the DOE vision. This paper focuses on reducing the supply risk that faces a firm that owns a cellulosic biorefinery. It uses risk theory and simulation modeling to build a risk assessment model based on causal relationships of underlying, uncertain, supply-driving variables. Using the model, the paper quantifies the supply risk reduction achieved by converting the supply chain from a conventional supply system (bales and trucks) to an advanced supply system (depots, pellets, and trains). Results imply that the advanced supply system reduces supply system risk, defined as the probability of a unit cost overrun, from 83% in the conventional system to 4% in the advanced system. Reducing cost risk in this nascent industry improves the odds of realizing desired growth.
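
A supply risk model of this kind can be sketched as a Monte Carlo simulation: sample the uncertain supply-cost drivers, propagate them through a cost model, and report the fraction of draws exceeding the target unit cost. The distributions below are hypothetical stand-ins, not the paper's calibrated inputs; the point is only the mechanics of estimating an overrun probability.

```python
import random

def cost_overrun_probability(cost_model, threshold, n=20000, seed=42):
    # Monte Carlo estimate of P(unit cost > threshold)
    rng = random.Random(seed)
    overruns = sum(1 for _ in range(n) if cost_model(rng) > threshold)
    return overruns / n

# Hypothetical unit cost = harvest + transport, both uncertain ($/ton)
def conventional(rng):  # bales and trucks: wide cost distributions
    return rng.gauss(55.0, 12.0) + rng.gauss(25.0, 8.0)

def advanced(rng):      # depots, pellets, trains: tighter distributions
    return rng.gauss(56.0, 4.0) + rng.gauss(22.0, 2.0)

p_conv = cost_overrun_probability(conventional, 84.0)
p_adv = cost_overrun_probability(advanced, 84.0)
assert 0.0 < p_adv < p_conv < 1.0  # tighter distributions lower overrun risk
```

Narrowing the driver distributions (the effect of the advanced system) shrinks the tail above the cost threshold, which is exactly the risk reduction the paper quantifies.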

  15. A trans-phase granular continuum relation and its use in simulation

    NASA Astrophysics Data System (ADS)

    Kamrin, Ken; Dunatunga, Sachith; Askari, Hesam

    The ability to model a large granular system as a continuum would offer tremendous benefits in computation time compared to discrete particle methods. However, two infamous problems arise in the pursuit of this vision: (i) the constitutive relation for granular materials is still unclear and hotly debated, and (ii) a model and corresponding numerical method must wear "many hats" as, in general circumstances, it must be able to capture and accurately represent the material as it crosses through its collisional, dense-flowing, and solid-like states. Here we present a minimal trans-phase model, merging an elastic response beneath a frictional yield criterion, a mu(I) rheology for liquid-like flow above the static yield criterion, and a disconnection rule to model separation of the grains into a low-temperature gas. We simulate our model with a meshless method (in high strain/mixing cases) and the finite-element method. It is able to match experimental data in many geometries, including collapsing columns, impact on granular beds, draining silos, and granular drag problems.
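
The liquid-like branch of such a model is the mu(I) rheology, in which the effective friction coefficient depends on the inertial number I. A minimal sketch, using commonly quoted glass-bead constants as assumed values:

```python
def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.28):
    # mu(I) rheology: the friction coefficient interpolates between a
    # static value mu_s (as I -> 0) and a dynamic limit mu_2 (I -> inf)
    return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

assert abs(mu_of_I(1e-9) - 0.38) < 1e-6   # quasistatic limit -> mu_s
assert mu_of_I(10.0) < 0.64               # approaches but never exceeds mu_2
assert mu_of_I(0.1) > mu_of_I(0.01)       # friction grows with flow rate
```

In the full model this law only applies above the static yield criterion; below it the response is elastic, and disconnected material is treated as a dilute gas.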

  16. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.

  17. Grid Integration Webinars | Energy Systems Integration Facility | NREL

    Science.gov Websites

    The study used detailed nodal simulations of the Western Interconnection system with greater than 35% wind energy, based on scenarios from the DOE Wind Vision study, to assess the operability of the system. Renewable Energy Integration in California (April 14, 2016): Greg Brinkman discussed the Low Carbon Grid Study.

  18. Rehabilitation of Visual and Perceptual Dysfunction After Severe Traumatic Brain Injury

    DTIC Science & Technology

    2012-03-26

    ...about this amount. C. Collision judgments in virtual mall walking simulator: the virtual mall is a virtual reality model of a real shopping...expanded vision from the prisms (Figure 5b). Figure 4. Illustration of the virtual reality mall set-up and collision judgment task. Participants...Award Number: W81XWH-11-2-0082. TITLE: Rehabilitation of Visual and Perceptual Dysfunction after Severe Traumatic Brain Injury

  19. Vision, Leadership, and Change: The Case of Ramah Summer Camps

    ERIC Educational Resources Information Center

    Reimer, Joseph

    2010-01-01

    In his retrospective essay, Seymour Fox (1997) identified "vision" as the essential element that shaped the Ramah camp system. I will take a critical look at Fox's main claims: (1) A particular model of vision was essential to the development of Camp Ramah; and (2) That model of vision should guide contemporary Jewish educators in creating Jewish…

  20. Two-dimensional quasistatic stationary short range surface plasmons in flat nanoprisms.

    PubMed

    Nelayah, J; Kociak, M; Stéphan, O; Geuquet, N; Henrard, L; García de Abajo, F J; Pastoriza-Santos, I; Liz-Marzán, L M; Colliex, C

    2010-03-10

    We report on the nanometer-scale spectral imaging of surface plasmons within individual silver triangular nanoprisms by electron energy loss spectroscopy and on related discrete dipole approximation simulations. A dependence of the energy and intensity of the three detected modes as a function of the edge length is clearly identified both experimentally and with simulations. We show that for experimentally available prisms (edge lengths ca. 70 to 300 nm) the energies and intensities of the different modes show a monotonic dependence as a function of the aspect ratio of the prisms. For shorter or longer prisms, deviations from this behavior are identified through simulations. These modes have symmetric charge distributions and result from the strong coupling of the upper and lower triangular surfaces. They also form a standing wave in the in-plane direction and are identified as quasistatic short range surface plasmons of different orders, as emphasized within a continuum dielectric model. This model explains in simple terms the measured and simulated energy and intensity changes as a function of geometric parameters. By providing a unified vision of surface plasmons in platelets, such a model should be useful for engineering the optical properties of metallic nanoplatelets.

  1. Simulating visibility under reduced acuity and contrast sensitivity.

    PubMed

    Thompson, William B; Legge, Gordon E; Kersten, Daniel J; Shakespeare, Robert A; Lei, Quan

    2017-04-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting-design communities. We validate the simulation using a letter-recognition task.
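
A common simplification for this kind of simulation, sketched here under assumed parameter mappings (the paper's calibration from clinical measures is more careful), is to low-pass filter the image with a kernel whose width scales with the minimum angle of resolution implied by a logMAR acuity score:

```python
import math

def blur_kernel(sigma, radius):
    # Discrete 1-D Gaussian kernel, normalized to sum to 1
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def simulate_acuity(signal, logmar, px_per_degree=60.0):
    # Assumed mapping: blur sigma grows with the minimum angle of
    # resolution, MAR = 10**logMAR arcmin (a simplification, not the
    # paper's calibrated model)
    mar_deg = (10 ** logmar) / 60.0
    sigma = max(mar_deg * px_per_degree / 2.0, 1e-6)
    radius = max(1, int(3 * sigma))
    k = blur_kernel(sigma, radius)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp edges
            acc += w * signal[idx]
        out.append(acc)
    return out

edge = [0.0] * 10 + [1.0] * 10          # a luminance edge (e.g. a step hazard)
mild = simulate_acuity(edge, 0.0)       # normal acuity (20/20)
severe = simulate_acuity(edge, 1.0)     # low vision (20/200)
assert severe[9] > mild[9]              # stronger blur flattens the edge
```

The flattened edge is exactly the kind of reduced-contrast hazard boundary the tool is meant to expose to architects and lighting designers.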

  2. Simulating Visibility Under Reduced Acuity and Contrast Sensitivity

    PubMed Central

    Thompson, William B.; Legge, Gordon E.; Kersten, Daniel J.; Shakespeare, Robert A.; Lei, Quan

    2017-01-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting design communities. We validate the simulation using a letter recognition task. PMID:28375328

  3. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required by image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, the shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed. A polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two models, i.e., an initial vision model and a residual compensation model. We derive the initial vision model by analyzing the direct mapping relationship between object and image points. The residual compensation model is derived based on the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates. However, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
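
The polynomial fitting used for distortion correction can be illustrated with a toy: fit a polynomial through calibration points relating measured (distorted) positions to true positions, then apply it as the correction map. The quadratic distortion below is a hypothetical example, and the exact three-point fit stands in for the paper's least-squares fits over many grid points.

```python
def fit_quadratic(xs, ys):
    # Exact quadratic through three calibration points (Lagrange form),
    # standing in for a least-squares polynomial fit
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def poly(x):
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return poly

# Hypothetical distortion: measured = true + 0.001 * true**2
true_pos = [0.0, 50.0, 100.0]
measured = [t + 0.001 * t * t for t in true_pos]
correct = fit_quadratic(measured, true_pos)  # maps measured -> true
# A point not used in calibration: true 70.0 appears as measured 74.9
assert abs(correct(74.9) - 70.0) < 0.2
```

In the paper this idea is applied separately to line deformations in the images and to the disparity field, with the residual compensation model absorbing what the polynomials miss.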

  4. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    PubMed

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirement on visual neuroprosthetic characteristics to restore various functions such as reading, objects and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, vision distance was limited to 3, 6, or 9 m, respectively. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environments were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. 
These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
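
The distance-based rendering can be sketched as a mapping from a per-electrode depth value to phosphene brightness, with nearer surfaces rendered brighter and everything beyond the viewing-distance limit dark. The tiny grid, depth limit and quantization below are illustrative assumptions (the study simulated a 15 × 18 array with limits of 3 to 9 m):

```python
def distance_rendering(depth_map, max_depth=9.0, levels=8):
    # Map per-electrode depth (meters) to phosphene brightness: near
    # surfaces bright, surfaces beyond max_depth dark (quantized levels)
    out = []
    for row in depth_map:
        out_row = []
        for d in row:
            closeness = max(0.0, 1.0 - min(d, max_depth) / max_depth)
            out_row.append(round(closeness * (levels - 1)))
        out.append(out_row)
    return out

# A wall 2 m ahead on the left, open space (beyond 9 m) on the right
bright = distance_rendering([[2.0, 2.0, 9.0, 12.0]])
assert bright[0][0] > bright[0][2]   # the nearer wall renders brighter
assert bright[0][3] == 0             # beyond the depth limit -> dark
```

Unlike the brightness-based control rendering, this encoding makes free space and obstacles directly legible at low resolution, which is consistent with the reported navigation gains.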

  5. The research of edge extraction and target recognition based on inherent feature of objects

    NASA Astrophysics Data System (ADS)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions, and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on inherent features of objects, in which a cuboid was taken as the model. On the basis of an analysis of the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique was utilized to recognize and segment the target. Then the Hough transform was used to extract and match the model's main edges, and finally the target edges were reconstructed by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field were summed up. With these, the aimless computations and searches in Hough transform processing can be reduced greatly and the efficiency is improved. Secondly, as the a priori knowledge about the cuboid contour's geometry is known already, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed based on the intersections, rather than on the extracted edges alone. Therefore the outlines are enhanced and the noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. 
The method presented here can be used widely in vision-guidance techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, and robotics. The results of simulation experiments and theoretical analysis demonstrate that the proposed method can suppress noise effectively, extract target edges robustly, and meet real-time requirements. Theoretical analysis and experiments show the method is reasonable and efficient.
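
The Hough transform step mentioned above works by letting every edge point vote for all lines passing through it, so that collinear points concentrate their votes in a single accumulator cell. A minimal pure-Python sketch:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, rho_step=1.0):
    # Vote in (rho, theta) space: each point votes for every line through
    # it; collinear points pile their votes onto one accumulator cell
    votes = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(round(rho / rho_step), t)] += 1
    return votes

# Points on the horizontal line y = 5, plus one outlier
pts = [(x, 5) for x in range(10)] + [(3, 9)]
votes = hough_lines(pts)
(rho_bin, t), count = votes.most_common(1)[0]
assert count == 10                 # all ten collinear points agree
assert rho_bin == 5                # at distance rho = 5 from the origin
assert votes[(5, 90)] == 10        # theta = 90 deg: the horizontal line
```

Restricting the searched (rho, theta) region using known model geometry, as the paper does for the cuboid's edge lines, prunes most of the inner loop.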

  6. An undergraduate laboratory activity on molecular dynamics simulations.

    PubMed

    Spitznagel, Benjamin; Pritchett, Paige R; Messina, Troy C; Goadrich, Mark; Rodriguez, Juan

    2016-01-01

    Vision and Change [AAAS, 2011] outlines a blueprint for modernizing biology education by addressing conceptual understanding of key concepts, such as the relationship between structure and function. The document also highlights skills necessary for student success in 21st century Biology, such as the use of modeling and simulation. Here we describe a laboratory activity that allows students to investigate the dynamic nature of protein structure and function through the use of a modeling technique known as molecular dynamics (MD). The activity takes place over two lab periods that are 3 hr each. The first lab period unpacks the basic approach behind MD simulations, beginning with the kinematic equations that all bioscience students learn in an introductory physics course. During this period students are taught rudimentary programming skills in Python while guided through simple modeling exercises that lead up to the simulation of the motion of a single atom. In the second lab period students extend concepts learned in the first period to develop skills in the use of expert MD software. Here students simulate and analyze changes in protein conformation resulting from temperature change, solvation, and phosphorylation. The article will describe how these activities can be carried out using free software packages, including Abalone and VMD/NAMD. © 2016 The International Union of Biochemistry and Molecular Biology.
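
The first lab period's progression from kinematic equations to the simulated motion of a single atom can be sketched in Python. The harmonic "bond" force and constants are illustrative assumptions; velocity Verlet is the standard MD time-stepping scheme such exercises build toward:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    # Velocity Verlet: the standard MD integrator, built from the same
    # kinematic update equations students meet in introductory physics
    a = force(x) / mass
    traj = [x]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update (averaged)
        a = a_new
        traj.append(x)
    return traj, v

# Single atom on a harmonic "bond": F = -k x (assumed toy potential)
k, m = 4.0, 1.0                      # omega = sqrt(k/m) = 2, period = pi
traj, _ = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x,
                          mass=m, dt=0.001, steps=3142)  # ~one period
assert abs(traj[-1] - 1.0) < 1e-2    # returns near the starting point
assert min(traj) < -0.99             # oscillates through the full amplitude
```

Expert packages such as NAMD use the same update scheme, just with many atoms and empirical force fields instead of one toy spring.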

  7. A review of cutting mechanics and modeling techniques for biological materials.

    PubMed

    Takabi, Behrouz; Tai, Bruce L

    2017-07-01

    This paper presents a comprehensive survey on the modeling of tissue cutting, including both soft tissue and bone cutting processes. In order to achieve higher accuracy in tissue cutting, as a critical process in surgical operations, the meticulous modeling of such processes is important, in particular for surgical tool development and analysis. This review paper is focused on the mechanical concepts and modeling techniques utilized to simulate tissue cutting, such as cutting forces and chip morphology. These models are presented in two major categories, namely soft tissue cutting and bone cutting. Fracture toughness is commonly used to describe tissue cutting, while the Johnson-Cook material model is often adopted for bone cutting in conjunction with finite element analysis (FEA). In each section, the most recent mathematical and computational models are summarized. The differences and similarities among these models, challenges, novel techniques, and recommendations for future work are discussed along with each section. This review is aimed at providing a broad and in-depth vision of the methods suitable for tissue and bone cutting simulations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  8. Improving Cognitive Skills of the Industrial Robot

    NASA Astrophysics Data System (ADS)

    Bezák, Pavol

    2015-08-01

    At present, there are plenty of industrial robots that are programmed to do the same repetitive task all the time. Industrial robots doing such kinds of jobs are not able to understand whether the action is correct, effective or good. Object detection, manipulation and grasping are challenging due to hand and object modeling uncertainties, unknown contact types and object stiffness properties. In this paper, a model for intelligent humanoid-hand object detection and grasping is proposed, assuming that the object properties are known. The control is simulated in MATLAB Simulink/SimMechanics, the Neural Network Toolbox and the Computer Vision System Toolbox.

  9. Fusion Simulation Project Workshop Report

    NASA Astrophysics Data System (ADS)

    Kritz, Arnold; Keyes, David

    2009-03-01

    The mission of the Fusion Simulation Project is to develop a predictive capability for the integrated modeling of magnetically confined plasmas. This FSP report adds to the previous activities that defined an approach to integrated modeling in magnetic fusion. These previous activities included a Fusion Energy Sciences Advisory Committee panel that was charged to study integrated simulation in 2002. The report of that panel [Journal of Fusion Energy 20, 135 (2001)] recommended the prompt initiation of a Fusion Simulation Project. In 2003, the Office of Fusion Energy Sciences formed a steering committee that developed a project vision, roadmap, and governance concepts [Journal of Fusion Energy 23, 1 (2004)]. The current FSP planning effort involved 46 physicists, applied mathematicians and computer scientists, from 21 institutions, formed into four panels and a coordinating committee. These panels were constituted to consider: Status of Physics Components, Required Computational and Applied Mathematics Tools, Integration and Management of Code Components, and Project Structure and Management. The ideas, reported here, are the products of these panels, working together over several months and culminating in a 3-day workshop in May 2007.

  10. Modeling of a microchannel plate working in pulsed mode

    NASA Astrophysics Data System (ADS)

    Secroun, Aurelia; Mens, Alain; Segre, Jacques; Assous, Franck; Piault, Emmanuel; Rebuffie, Jean-Claude

    1997-05-01

    MicroChannel Plates (MCPs) are used in high speed cinematography systems such as MCP framing cameras and streak camera readouts. In order to know the dynamic range or the signal to noise ratio that are available in these devices, a good knowledge of the performance of the MCP is essential. The point of interest of our simulation is the working mode of the microchannel plate, that is, light-pulsed mode, in which the signal level is relatively high and its duration can be shorter than the time needed to replenish the wall of the channel; other papers have mainly studied night vision applications with weak, continuous and nearly single-electron input signals. Our method also allows the simulation of saturation phenomena due to the large number of electrons involved, whereas the discrete models previously used for simulating pulsed mode might not be properly adapted. Here we present the choices made in modeling the microchannel, specifically regarding the physical laws, the secondary emission parameters and the 3D geometry. In the last part, first results are shown.

  11. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

    In this paper, we propose a simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures the range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least square optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. From the simulation and experimental results, it is shown that the parameter identification problem considered was characterized by a highly multimodal landscape; thus, the global optimization technique such as a particle swarm optimization can be a promising tool to identify the model parameters for a HMLVS, while the nonlinear least square optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge at a very stable solution and it could be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
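
The particle swarm optimizer can be sketched in a few lines: each particle is pulled toward its own best-seen position and toward the swarm's global best, which helps the population escape local minima of a multimodal landscape like the one the authors report. The inertia and acceleration constants below are common textbook defaults, and the Rastrigin surface is a stand-in for the HMLVS calibration cost function:

```python
import math
import random

def pso(objective, bounds, n_particles=30, iters=200, seed=1):
    # Minimal particle swarm optimizer: particles track personal bests
    # and are attracted to them and to the swarm's global best
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration (textbook defaults)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(xs[i]), f
    return gbest, gbest_f

def rastrigin(x):
    # Highly multimodal test surface; global minimum f(0, 0) = 0
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

_, f_sphere = pso(lambda x: sum(xi * xi for xi in x), [(-5.0, 5.0)] * 2)
_, f_rast = pso(rastrigin, [(-5.12, 5.12)] * 2)
assert f_sphere < 1e-2   # unimodal case: tight convergence
assert f_rast < 5.0      # multimodal case: a deep minimum is found
```

A local gradient-style solver started in the wrong basin stalls on such surfaces, which mirrors the paper's finding that nonlinear least squares often failed where PSO succeeded.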

  12. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  13. Medical students' professional identity development from being actors in an objective structured teaching exercise.

    PubMed

    De Grasset, Jehanne; Audetat, Marie-Claude; Bajwa, Nadia; Jastrow, Nicole; Richard-Lepouriel, Hélène; Nendaz, Mathieu; Junod Perron, Noelle

    2018-04-22

    Medical students develop professional identity through structured activities and impromptu interactions in various settings. We explored whether contributing to an Objective Structured Teaching Exercise (OSTE) influenced students' professional identity development. University clinical faculty members participated in a faculty development program on clinical supervision. Medical students who participated in OSTEs as simulated residents were interviewed in focus groups about what they learnt from the experience and how the experience influenced their vision of learning and teaching. Transcripts were analyzed using Goldie's personality and social structure perspective model. Twenty-five of the 32 medical students involved in OSTEs participated. On an institutional level, students developed a feeling of belonging to the institution. At an interactional level, students realized they could influence the teaching interaction by actively seeking or giving feedback. On the personal level, students realized that errors could become sources of learning and felt better prepared to receive faculty feedback. Taking part in OSTEs as a simulated resident has a positive impact on students' vision of the institution as a learning environment and of their own role in actively seeking or giving feedback. OSTEs support their professional identity development regarding learning and teaching while sustaining faculty development.

  14. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.

  15. Computational Nanoelectronics and Nanotechnology at NASA ARC

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Kutler, Paul (Technical Monitor)

    1998-01-01

    Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate a billion times present speeds with the expenditure of only one watt of electrical power. NASA has long-term needs for ultra-small semiconductor devices in critical applications: high-performance, low-power, compact computers for intelligent autonomous vehicles and petaflop computing technology are some key examples. To advance the design, development, and production of future generations of micro- and nano-devices, the IT Modeling and Simulation Group has been started at NASA Ames with the goal of developing an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. An overview of the nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and research objectives of the IT Modeling and Simulation Group, including applications of nanoelectronic devices relevant to NASA missions.

  17. Dynamic and predictive links between touch and vision.

    PubMed

    Gray, Rob; Tan, Hong Z

    2002-07-01

    We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.

  18. Binocular adaptive optics visual simulator.

    PubMed

    Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo

    2009-09-01

    A binocular adaptive optics visual simulator is presented. The instrument allows for measuring and manipulating the ocular aberrations of the two eyes simultaneously, while the subject performs visual testing under binocular vision. An important feature of the apparatus is its use of a single correcting device and a single wavefront sensor. Aberrations are controlled by means of a liquid-crystal-on-silicon spatial light modulator, onto which the two pupils of the subject are projected. Aberrations from the two eyes are measured with a single Hartmann-Shack sensor. As an example of the potential of the apparatus for studying the impact of the eye's aberrations on binocular vision, results of contrast sensitivity after addition of spherical aberration are presented for one subject. Different binocular combinations of spherical aberration were explored. Results suggest complex binocular interactions in the presence of monochromatic aberrations. The technique and the instrument might contribute to a better understanding of binocular vision and to the search for optimized ophthalmic corrections.

  19. Simulating Colour Vision Deficiency from a Spectral Image.

    PubMed

    Shrestha, Raju

    2016-01-01

    People with colour vision deficiency (CVD) have difficulty seeing full colour contrast and can miss some of the features in a scene. As part of universal design, researchers have been working on how to modify and enhance the colours of images so that people with CVD can see the scene with good contrast. For this, it is important to know how the original colour image is seen by different individuals with CVD. This paper proposes a methodology for simulating accurate colour-deficient images from a spectral image using the cone sensitivities corresponding to different cases of deficiency. Because the method enables generation of accurate colour-deficient images, it should help in better understanding the limitations imposed by colour vision deficiency, which in turn supports the design and development of more effective imaging technologies for better and wider accessibility in the context of universal design.
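
    The projection from a spectral image to cone responses described above is, per pixel, just an inner product with the cone sensitivity functions. A minimal numeric sketch follows; the four-band sensitivities and the protan substitution rule are invented stand-ins (real work would use measured cone fundamentals and confusion-line projection, which this does not implement):

```python
import numpy as np

# Toy cone sensitivities sampled at 4 wavelengths (hypothetical values).
wavelengths = np.array([450., 520., 570., 620.])
L = np.array([0.05, 0.60, 1.00, 0.70])
M = np.array([0.10, 1.00, 0.80, 0.30])
S = np.array([1.00, 0.15, 0.02, 0.00])
cones = np.stack([L, M, S])                  # 3 x n_bands

def lms_from_spectral(pixel_spectrum, cone_set=cones):
    """Cone excitations as the inner product of spectrum and sensitivities."""
    return cone_set @ pixel_spectrum

def simulate_protanopia(lms):
    """Crude dichromat simulation: replace the missing L response with a
    linear mix of the remaining cones (a placeholder, not the real method)."""
    out = lms.copy()
    out[0] = 0.8 * lms[1] + 0.2 * lms[2]
    return out

spectrum = np.array([0.2, 0.5, 0.9, 0.4])    # one pixel's spectral radiance
lms = lms_from_spectral(spectrum)
print(np.round(lms, 3), np.round(simulate_protanopia(lms), 3))
```

    Applied to every pixel of a spectral image, the same two steps yield the simulated colour-deficient image.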

  20. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2006-01-01

    High-End Computing (HEC) has always played a major role in meeting the modeling and simulation needs of various NASA missions. With NASA's newest 62-teraflops Columbia supercomputer, HEC is having an even greater impact within the Agency and beyond. Significant cutting-edge science and engineering simulations in the areas of space exploration, Shuttle operations, Earth sciences, and aeronautics research are already occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. The talk will describe how the integrated supercomputing production environment is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions.

  1. Interface for Physics Simulation Engines

    NASA Technical Reports Server (NTRS)

    Damer, Bruce

    2007-01-01

    DSS-Prototyper is open-source, real-time 3D virtual environment software that supports design simulation for the new Vision for Space Exploration (VSE). It simulates NASA's proposed Robotic Lunar Exploration Program, second mission (RLEP2), including the Lunar Surface Access Module (LSAM), which is designed to carry up to four astronauts to the lunar surface for durations of a week or longer. The simulation shows the virtual vehicle making approaches and landings on a variety of lunar terrains. The physics of the descent engine thrust vector, the production of dust, and the dynamics of the suspension are all modeled. The RLEP2 simulations include virtual rovers, drivable by keyboard or joystick, with controls for speed and motor torque; the rovers can be articulated into higher or lower centers of gravity (depending on driving hazards) to enable drill placement. Gravity can also be set to lunar, terrestrial, or zero-g. This software has been used to support NASA's Marshall Space Flight Center in simulations of proposed vehicles for robotically exploring the lunar surface for water ice, and could be used to model all other aspects of the VSE, from the Ares launch vehicles and Crew Exploration Vehicle (CEV) to the International Space Station (ISS). The simulator may be installed and operated on any Windows PC with a 3D graphics card.

  2. Cost-effectiveness of the screening and treatment of diabetic retinopathy. What are the costs of underutilization?

    PubMed

    Fendrick, A M; Javitt, J C; Chiang, Y P

    1992-01-01

    Diabetic retinal disease remains a leading cause of visual disability among those of working age. Controlled trials have demonstrated that timely diagnosis and photocoagulation treatment can reduce significantly the likelihood of visual impairment in affected diabetic patients. Using a prospective simulation model, we show that an annual screening and treatment program saves thousands of years of vision and reduces medical expenditures over the lifetime of a cohort of Swedish Type I diabetic patients.
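
    The kind of cohort arithmetic behind such a prospective simulation model can be sketched in a few lines. Every number below is invented for illustration (the paper's actual transition rates, horizon, and cost model are not reproduced here); the sketch only shows how screening that lowers an annual progression rate translates into person-years of vision saved:

```python
# Toy two-arm cohort simulation: each year some fraction of still-sighted
# patients progresses to vision loss; screening plus photocoagulation is
# modeled as a lower annual progression probability. Rates are invented.
def years_of_vision(cohort, p_loss, years=30):
    sighted, total = float(cohort), 0.0
    for _ in range(years):
        total += sighted            # each still-sighted person-year counts
        sighted *= (1.0 - p_loss)   # a fraction loses vision this year
    return total

no_screen = years_of_vision(10_000, 0.04)   # hypothetical untreated rate
screened  = years_of_vision(10_000, 0.01)   # hypothetical treated rate
print(round(screened - no_screen))  # person-years of vision saved (toy rates)
```

    Even with these made-up rates, the gap amounts to tens of thousands of person-years for a 10,000-patient cohort, which is the shape of the "thousands of years of vision saved" result.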

  3. Fifth Conference on Artificial Intelligence for Space Applications

    NASA Technical Reports Server (NTRS)

    Odell, Steve L. (Compiler)

    1990-01-01

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration.

  4. Aircraft cockpit vision: Math model

    NASA Technical Reports Server (NTRS)

    Bashir, J.; Singh, R. P.

    1975-01-01

    A mathematical model was developed to describe the field of vision of a pilot seated in an aircraft. Given the position and orientation of the aircraft, along with the geometrical configuration of its windows, and the location of an object, the model determines whether the object would be within the pilot's external vision envelope provided by the aircraft's windows. The computer program using this model was implemented and is described.
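
    The geometric test described above reduces to intersecting the eye-to-object sight line with the window surface. A minimal sketch for a single flat rectangular window, with the pilot's eye at the origin (the actual model handles arbitrary window geometry and aircraft attitude; all names and numbers here are illustrative):

```python
def visible_through_window(obj, win_dist, y_lim, z_lim):
    """Is the object inside the vision envelope of one rectangular window?

    Eye at the origin, window lying in the plane x = win_dist. The object
    at (X, Y, Z) is visible if the line of sight crosses the window plane
    inside the rectangle's y and z bounds.
    """
    X, Y, Z = obj
    if X <= 0:
        return False                  # object is behind the eye
    s = win_dist / X                  # scale factor to the window plane
    y, z = Y * s, Z * s               # line-of-sight hit point on the plane
    return (y_lim[0] <= y <= y_lim[1]) and (z_lim[0] <= z <= z_lim[1])

# Object well ahead, slightly right and below: hit point (0.05, -0.025).
print(visible_through_window((100., 10., -5.), 0.5, (-0.3, 0.3), (-0.2, 0.1)))
# -> True
print(visible_through_window((100., 80., 0.), 0.5, (-0.3, 0.3), (-0.2, 0.1)))
# -> False (hit point y = 0.4 misses the window)
```

    The full model repeats this test for every window polygon after rotating the object into the aircraft body frame.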

  5. A sensor simulation framework for the testing and evaluation of external hazard monitors and integrated alerting and notification functions

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Venable, Kyle; Bezawada, Rajesh; Adami, Tony; Vadlamani, Ananth K.

    2009-05-01

    This paper discusses a sensor simulator/synthesizer framework that can be used to test and evaluate various sensor integration strategies for the implementation of an External Hazard Monitor (EHM) and Integrated Alerting and Notification (IAN) function as part of NASA's Integrated Intelligent Flight Deck (IIFD) project. The IIFD project, under NASA's Aviation Safety program, "pursues technologies related to the flight deck that ensure crew workload and situational awareness are both safely optimized and adapted to the future operational environment as envisioned by NextGen." Within the simulation framework, various inputs to the IIFD and its subsystems, the EHM and IAN, are simulated, synthesized from actual collected data, or played back from actual flight test sensor data. Sensors and avionics included in this framework are TCAS, ADS-B, forward-looking infrared, vision cameras, GPS, inertial navigators, EGPWS, laser detection and ranging sensors, altimeters, communication links with ATC, and weather radar. The framework is implemented in Simulink, a modeling language developed by The MathWorks. This modeling language allows for test and evaluation of various sensor and communication link configurations as well as the inclusion of feedback from the pilot on the performance of the aircraft. Specifically, this paper addresses the architecture of the simulator, the sensor model interfaces, the timing and database (environment) aspects of the sensor models, the user interface of the modeling environment, and the various avionics implementations.

  6. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in the image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
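
    The property being exploited above is that the measurements are linear in the unknown positions, so a gradient-type estimator driven by the prediction error converges exponentially under persistent excitation. A toy sketch with a normalized-gradient update (a generic stand-in, not the paper's Slotine-Li-style law; the two unknown parameters and random regressors are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([2.0, -1.0])            # unknown parameters (e.g., a position)
theta_hat = np.zeros(2)                  # estimator state
gamma = 0.5                              # adaptation gain

for _ in range(2000):
    phi = rng.normal(size=2)             # regressor; random, hence exciting
    y = phi @ theta                      # measurement, linear in the unknowns
    err = y - phi @ theta_hat            # prediction error
    # normalized gradient update: stable for any bounded regressor
    theta_hat += gamma * phi * err / (1.0 + phi @ phi)

print(np.round(theta_hat, 6))
```

    With a persistently exciting regressor the estimation error shrinks geometrically, which is the discrete-time analogue of the global exponential convergence claimed in the paper.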

  7. Inexpensive anatomical trainer for bronchoscopy.

    PubMed

    Di Domenico, Stefano; Simonassi, Claudio; Chessa, Leonardo

    2007-08-01

    Flexible fiberoptic bronchoscopy is an indispensable tool for optimal management of intensive care unit patients. However, acquiring sufficient training in bronchoscopy during residency is not straightforward, because of technical and ethical problems; moreover, the use of commercial simulators is limited by their high cost. To overcome these limitations, we built a low-cost anatomical simulator for acquiring and maintaining the basic skills needed to perform bronchoscopy in ventilated patients. We used 1.5 mm diameter iron wire to construct the bronchial tree scaffold, and glazier's putty was applied to create the anatomical model. The model was covered with several layers of newspaper strips previously immersed in water and vinylic glue. Once the model had completely dried, it was detached from the scaffold by cutting it into six pieces, then reassembled, painted, and fitted with an endotracheal tube. We used very cheap materials and the final cost was €16. The trainer is real-scale and anatomically accurate, with good correspondence in endoscopic view between the model and patients. All bronchial segments can be explored and easily identified by endoscopic and external vision. This cheap simulator is a valuable tool for practicing, particularly in hospitals with limited resources for medical training.

  8. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  9. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  10. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  11. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  12. Night vision goggle stimulation using LCoS and DLP projection technology, which is better?

    NASA Astrophysics Data System (ADS)

    Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  13. Use of a vision model to quantify the significance of factors affecting target conspicuity

    NASA Astrophysics Data System (ADS)

    Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.

    2006-05-01

    When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.
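
    The per-channel comparison such a model performs can be illustrated with a one-dimensional stand-in: split each image into octave-spaced spatial-frequency bands and compare the band energies. This is a sketch only (the actual model uses oriented 2-D channels and local, not global, contrast; the stimuli and band layout are invented):

```python
import numpy as np

def band_energies(signal, n_bands=4):
    """Energy of a 1-D 'image' in octave-spaced spatial-frequency bands."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    edges = np.geomspace(1, len(spec) - 1, n_bands + 1).astype(int)
    return np.array([spec[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])

x = np.linspace(0, 1, 256, endpoint=False)
target     = np.sin(2 * np.pi * 4 * x)     # coarse pattern (e.g., target edge)
background = np.sin(2 * np.pi * 40 * x)    # fine texture (e.g., foliage)

diff = np.abs(band_energies(target) - band_energies(background))
print(np.where(diff > 1.0)[0])             # channels where the stimuli differ
```

    A target whose energy lands in bands where the background has little (and vice versa) produces large per-channel differences, which is what drives predicted conspicuity.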

  14. Maintaining a Cognitive Map in Darkness: The Need to Fuse Boundary Knowledge with Path Integration

    PubMed Central

    Cheung, Allen; Ball, David; Milford, Michael; Wyeth, Gordon; Wiles, Janet

    2012-01-01

    Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's “cognitive map”, or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour. Here we demonstrate that key neural correlates of spatial navigation in darkness cannot be explained if the path integration system acted independently of boundary (landmark) information. In vivo recordings demonstrate that the rodent head direction (HD) system becomes unstable within three minutes without vision. In contrast, rodents maintain stable place fields and grid fields for over half an hour without vision. Using a simple HD error model, we show analytically that idiothetic path integration (iPI) alone cannot be used to maintain any stable place representation beyond two to three minutes. We then use a measure of place stability based on information-theoretic principles to prove that featureless boundaries alone cannot be used to improve localization above chance level. Having shown that neither iPI nor boundaries alone are sufficient, we then address the question of whether their combination is sufficient and, we conjecture, necessary to maintain place stability for prolonged periods without vision. We addressed this question in simulations and robot experiments using a navigation model comprising a particle filter and boundary map. The model replicates published experimental results on place field and grid field stability without vision, and makes testable predictions, including place field splitting and grid field rescaling if the true arena geometry differs from the acquired boundary map. We discuss our findings in light of current theories of animal navigation and neuronal computation, and elaborate on their implications and significance for the design, analysis and interpretation of experiments. PMID:22916006
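
    The fusion being argued for — noisy self-motion updates corrected by boundary measurements — can be sketched as a one-dimensional particle filter. This is a toy with a single wall (the paper's model is far richer; all noise levels and geometry here are arbitrary):

```python
import math
import random

random.seed(1)
N, wall = 500, 10.0                          # particle count; wall at x = 10
true_x = 2.0
particles = [random.uniform(0.0, wall) for _ in range(N)]

for _ in range(30):
    v = 0.2                                  # commanded step (path integration)
    true_x += v
    # predict: move every particle with a noisy copy of the self-motion
    particles = [p + v + random.gauss(0, 0.05) for p in particles]
    # weight: compare each particle's predicted wall distance with a noisy
    # boundary measurement (the "boundary map" correction)
    z = (wall - true_x) + random.gauss(0, 0.1)
    w = [math.exp(-(((wall - p) - z) ** 2) / (2 * 0.1 ** 2)) for p in particles]
    # resample particles in proportion to their weights
    particles = random.choices(particles, weights=w, k=N)

estimate = sum(particles) / N
print(round(estimate, 2), round(true_x, 2))
```

    Without the weighting/resampling steps the particles would disperse under motion noise, which is the 1-D analogue of iPI alone failing after a few minutes.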

  15. CAD-model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda G.

    1988-01-01

    A pose acquisition system operating in space must perform well in a variety of applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint from the vision models, construct view classes representing views of the objects, and use the view-class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.

  16. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    NASA Astrophysics Data System (ADS)

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-03-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak fluorescent cell labeling, using cell-simulating fluorescent microspheres of varying brightness, and (2) high background tissue autofluorescence, achieved by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.

  17. Lessons Learned from the Creation of a Center of Excellence in Low Vision and Vision Rehabilitation in Wenzhou, China

    ERIC Educational Resources Information Center

    Marinoff, Rebecca; Heilberger, Michael H.

    2017-01-01

    A model Center of Excellence in Low Vision and Vision Rehabilitation was created in a health care setting in China utilizing an inter-institutional relationship with a United States optometric institution. Accomplishments of, limitations to, and stimuli to the provision of low vision and vision rehabilitation services are shared.

  18. Small or far away? Size and distance perception in the praying mantis

    PubMed Central

    Bissianna, Geoffrey

    2016-01-01

    Stereo or ‘3D’ vision is an important but costly process seen in several evolutionarily distinct lineages including primates, birds and insects. Many selective advantages could have led to the evolution of stereo vision, including range finding, camouflage breaking and estimation of object size. In this paper, we investigate the possibility that stereo vision enables praying mantises to estimate the size of prey by using a combination of disparity cues and angular size cues. We used a recently developed insect 3D cinema paradigm to present mantises with virtual prey having differing disparity and angular size cues. We predicted that if they were able to use these cues to gauge the absolute size of objects, we should see evidence for size constancy where they would strike preferentially at prey of a particular physical size, across a range of simulated distances. We found that mantises struck most often when disparity cues implied a prey distance of 2.5 cm; increasing the implied distance caused a significant reduction in the number of strikes. We, however, found no evidence for size constancy. There was a significant interaction effect of the simulated distance and angular size on the number of strikes made by the mantis but this was not in the direction predicted by size constancy. This indicates that mantises do not use their stereo vision to estimate object size. We conclude that other selective advantages, not size constancy, have driven the evolution of stereo vision in the praying mantis. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269605
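
    The size-constancy logic being tested above is simple trigonometry: the same physical prey subtends very different angles at different distances, so recovering true size requires combining angular size with a distance estimate (here, from disparity). A sketch of the geometry (the 1 cm target and the two distances are illustrative):

```python
import math

def physical_size(angular_deg, distance_cm):
    """Physical extent subtending a given visual angle at a given distance."""
    return 2 * distance_cm * math.tan(math.radians(angular_deg) / 2)

# A 1 cm target at 2.5 cm vs 10 cm: its angular size shrinks roughly fourfold,
# but combining the angle with the (disparity-derived) distance recovers the
# same physical size — which is the size-constancy computation the mantises
# did not appear to perform.
for d_cm in (2.5, 10.0):
    theta = 2 * math.degrees(math.atan(0.5 / d_cm))   # angle of a 1 cm target
    print(round(theta, 1), round(physical_size(theta, d_cm), 2))
```

    An animal striking by angular size alone would treat the distant target as a quarter the size; one with size constancy would treat both as 1 cm.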

  19. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  20. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  1. DARPA super resolution vision system (SRVS) robust turbulence data collection and analysis

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Leonard, Kevin R.; Thompson, Roger; Tofsted, David; D'Arcy, Sean

    2014-05-01

    Atmospheric turbulence degrades the range performance of military imaging systems, specifically those intended for long range, ground-to-ground target identification. The recent Defense Advanced Research Projects Agency (DARPA) Super Resolution Vision System (SRVS) program developed novel post-processing system components to mitigate turbulence effects on visible and infrared sensor systems. As part of the program, the US Army RDECOM CERDEC NVESD and the US Army Research Laboratory Computational & Information Sciences Directorate (CISD) collaborated on a field collection and atmospheric characterization of a two-handed weapon identification dataset through a diurnal cycle for a variety of ranges and sensor systems. The robust dataset is useful in developing new models and simulations of turbulence, as well as for providing a standard baseline for comparison of sensor systems in the presence of turbulence degradation and mitigation. In this paper, we describe the field collection and atmospheric characterization and present the robust dataset to the defense, sensing, and security community. In addition, we present an expanded model validation of turbulence degradation using the field collected video sequences.

  2. Towards photorealistic and immersive virtual-reality environments for simulated prosthetic vision: integrating recent breakthroughs in consumer hardware and software.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J

    2014-01-01

    Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity with established input methods.

  3. [Establishment of background color to discriminate among tablets: sharper and more feasible with color-weak simulation as access to safe medication].

    PubMed

    Ishizaki, Makiko; Maeda, Hatsuo; Okamoto, Ikuko

    2014-01-01

    Color-weak persons, who in Japan represent approximately 5% of the male and 0.2% of the female population, may not be able to discriminate among the colors of tablets. Thus, using color-weak simulation with Variantor™, we evaluated the effects of background colors (light, medium, and dark gray, purple, blue, and blue green) on discrimination among yellow, yellow red, red, and mixed groups of tablets by our established method. In addition, the influence of white 10-mm ruled squares on the background sheets was examined, and the change in color of the tablets and background sheets through the simulation was measured. Variance analysis of the data obtained from 42 volunteers demonstrated that with color-weak vision, the best discrimination among yellow, yellow red, or mixed group tablets was achieved on a dark gray background sheet, and a blue background sheet was useful for discriminating among the tablet groups in all colors, including red. These results were compared with those previously obtained with healthy and cataractous vision, suggesting that gaps in color hue and chroma, as well as value, between background sheets and tablets affect discrimination with color-weak vision. The positive effects of white ruled squares, in contrast to those observed with healthy and cataractous vision, demonstrate that a background sheet arranged in two colors allows color-weak persons to discriminate among all sets of tablets in a sharp and feasible manner.

  4. The Effects of Fatal Vision Goggles on Drinking and Driving Intentions in College Students

    ERIC Educational Resources Information Center

    Hennessy, Dwight A.; Lanni-Manley, Elizabeth; Maiorana, Nicole

    2006-01-01

    The present study was designed to examine the effectiveness of Fatal Vision Goggles in reducing intentions to drink and drive. Participants performed a field sobriety task and drove in a traffic simulator while wearing the goggles. A regression analysis was performed in order to predict changes in intentions to drink and drive, using typical…

  5. Pathway concepts experiment for head-down synthetic vision displays

    NASA Astrophysics Data System (ADS)

    Prinzel, Lawrence J., III; Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2004-08-01

    Eight 757 commercial airline captains flew 22 approaches using the Reno Sparks 16R Visual Arrival under simulated Category I conditions. Approaches were flown using a head-down synthetic vision display to evaluate four tunnel ("minimal", "box", "dynamic pathway", "dynamic crow's feet") and three guidance ("ball", "tadpole", "follow-me aircraft") concepts and compare their efficacy to a baseline condition (i.e., no tunnel, ball guidance). The results showed that the tunnel concepts significantly improved pilot performance and situation awareness and lowered workload compared to the baseline condition. The dynamic crow's feet tunnel and follow-me aircraft guidance concepts were found to be the best candidates for future synthetic vision head-down displays. These results are discussed with implications for synthetic vision display design and future research.

  6. Visualization of simulated urban spaces: inferring parameterized generation of streets, parcels, and aerial imagery.

    PubMed

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul

    2009-01-01

    Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km² area surrounding Seattle.

  7. Comparison of Orion Vision Navigation Sensor Performance from STS-134 and the Space Operations Simulation Center

    NASA Technical Reports Server (NTRS)

    Christian, John A.; Patangan, Mogi; Hinkel, Heather; Chevray, Keiko; Brazzel, Jack

    2012-01-01

    The Orion Multi-Purpose Crew Vehicle is a new spacecraft being designed by NASA and Lockheed Martin for future crewed exploration missions. The Vision Navigation Sensor is a Flash LIDAR that will be the primary relative navigation sensor for this vehicle. To obtain a better understanding of this sensor's performance, the Orion relative navigation team has performed both flight tests and ground tests. This paper summarizes and compares the performance results from the STS-134 flight test, called the Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective, and the ground tests at the Space Operations Simulation Center.

  8. Tests for malingering in ophthalmology

    PubMed Central

    Incesu, Ali Ihsan

    2013-01-01

    Simulation, also called malingering or sometimes functional visual loss (FVL), manifests either as simulating an ophthalmic disease (positive simulation) or as denial of an ophthalmic disease (negative simulation). Conscious behavior and compensation or indemnity claims are prominent features of simulation. Since some authors suggest that it is a manifestation of underlying psychopathology, even conversion is included in this context. In today's world, every ophthalmologist can be faced with simulation of an ophthalmic disease or disorder. When simulation is suspected, the physician's responsibility is to consider the disease/disorder first and establish simulation only by exclusion. In simulation examinations, the physician should be firm and astute in selecting appropriate test(s) to convince not only the subject, but also the judge in case of indemnity or compensation trials. Almost all ophthalmic sensory and motor functions, including visual acuity, visual field, color vision and night vision, can be the subject of simulation, and the examiner must be skillful in selecting the most appropriate test. Beyond those in the literature, we cover all kinds of simulation in ophthalmology, as well as examination techniques such as the use of optical coherence tomography, frequency doubling perimetry (FDP), and modified polarization tests. In this review, we made a thorough literature search and added our own experience to give readers up-to-date information on malingering, or simulation, in ophthalmology. PMID:24195054

  9. Simulation Based Acquisition for NASA's Office of Exploration Systems

    NASA Technical Reports Server (NTRS)

    Hale, Joe

    2004-01-01

    In January 2004, President George W. Bush unveiled his vision for NASA to advance U.S. scientific, security, and economic interests through a robust space exploration program. This vision includes the goal to extend human presence across the solar system, starting with a human return to the Moon no later than 2020, in preparation for human exploration of Mars and other destinations. In response to this vision, NASA has created the Office of Exploration Systems (OExS) to develop the innovative technologies, knowledge, and infrastructures to explore and support decisions about human exploration destinations, including the development of a new Crew Exploration Vehicle (CEV). Within the OExS organization, NASA is implementing Simulation Based Acquisition (SBA), a robust Modeling & Simulation (M&S) environment integrated across all acquisition phases and programs/teams, to make the realization of the President's vision more certain. Executed properly, SBA will foster better informed, timelier, and more defensible decisions throughout the acquisition life cycle. By doing so, SBA will improve the quality of NASA systems and speed their development, at less cost and risk than would otherwise be the case. SBA is a comprehensive, Enterprise-wide endeavor that necessitates an evolved culture, a revised spiral acquisition process, and an infrastructure of advanced Information Technology (IT) capabilities. SBA encompasses all project phases (from requirements analysis and concept formulation through design, manufacture, training, and operations), professional disciplines, and activities that can benefit from employing SBA capabilities.
SBA capabilities include: developing and assessing system concepts and designs; planning manufacturing, assembly, transport, and launch; training crews, maintainers, launch personnel, and controllers; planning and monitoring missions; responding to emergencies by evaluating effects and exploring solutions; and communicating across the OExS enterprise, within the Government, and with the general public. The SBA process features empowered collaborative teams (including industry partners) to integrate requirements, acquisition, training, operations, and sustainment. The SBA process also utilizes an increased reliance on and investment in M&S to reduce design risk. SBA originated as a joint Industry and Department of Defense (DoD) initiative to define and integrate an acquisition process that employs robust, collaborative use of M&S technology across acquisition phases and programs. The SBA process was successfully implemented in the Air Force's Joint Strike Fighter (JSF) Program.

  10. Vision Integrating Strategies in Ophthalmology and Neurochemistry (VISION)

    DTIC Science & Technology

    2014-02-01

    ganglion cells from pressure-induced damage in a rat model of glaucoma. Brn3b also induced optic nerve regeneration in this model (Stankowska et al. 2013...of glaucoma o Gene therapy with Neuritin1 structurally and functionally protected the retina in ONC model o CHOP knockout mice were structurally and...retinocollicular pathway of mice in a novel model of glaucoma. 2013 Annual Meeting of Association for Research in Vision and Ophthalmology, Abstract 421. Liu

  11. A stakeholder visioning exercise to enhance chronic care and the integration of community pharmacy services.

    PubMed

    Franco-Trigo, L; Tudball, J; Fam, D; Benrimoj, S I; Sabater-Hernández, D

    2018-02-21

    Collaboration between relevant stakeholders in health service planning enables service contextualization and facilitates its success and integration into practice. Although community pharmacy services (CPSs) aim to improve patients' health and quality of life, their integration in primary care is far from ideal. Key stakeholders for the development of a CPS intended at preventing cardiovascular disease were identified in a previous stakeholder analysis. Engaging these stakeholders to create a shared vision is the subsequent step to focus planning directions and lay sound foundations for future work. This study aims to develop a stakeholder-shared vision of a cardiovascular care model which integrates community pharmacists and to identify initiatives to achieve this vision. A participatory visioning exercise involving 13 stakeholders across the healthcare system was performed. A facilitated workshop, structured in three parts (i.e., introduction; developing the vision; defining the initiatives towards the vision), was designed. The Chronic Care Model inspired the questions that guided the development of the vision. Workshop transcripts, researchers' notes and materials produced by participants were analyzed using qualitative content analysis. Stakeholders broadened the objective of the vision to focus on the management of chronic diseases. Their vision yielded 7 principles for advanced chronic care: patient-centered care; multidisciplinary team approach; shared goals; long-term care relationships; evidence-based practice; ease of access to healthcare settings and services by patients; and good communication and coordination. Stakeholders also delineated six environmental factors that can influence their implementation. Twenty-four initiatives to achieve the developed vision were defined. The principles and factors identified as part of the stakeholder shared-vision were combined in a preliminary model for chronic care. 
This model and initiatives can guide policy makers as well as healthcare planners and researchers to develop and integrate chronic disease services, namely CPSs, in real-world settings.

  12. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    NASA Astrophysics Data System (ADS)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, the two functions in general overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
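
    As a toy illustration of the consistency problem the paper addresses, the following sketch (our own minimal example, not the paper's variational algorithm) uses two level set functions on a 1D grid to label three phases, and shows one naive way to reassign cells where both functions are positive so that no point belongs to two materials at once:

```python
import numpy as np

# Two level set functions on a 1D grid; a material occupies the region
# where its function is positive. Gas is where both are non-positive.
x = np.linspace(0.0, 1.0, 101)
phi1 = 0.3 - np.abs(x - 0.3)    # liquid: roughly x in (0.0, 0.6)
phi2 = 0.25 - np.abs(x - 0.75)  # solid: roughly x in (0.5, 1.0)

liquid = (phi1 > 0) & (phi2 <= 0)
solid = (phi2 > 0) & (phi1 <= 0)
gas = (phi1 <= 0) & (phi2 <= 0)
overlap = (phi1 > 0) & (phi2 > 0)  # the two functions disagree here

# Naive repair: give each overlapping point to the function with the
# larger (more interior) value, so the three phases stay disjoint.
liquid_fixed = liquid | (overlap & (phi1 >= phi2))
solid_fixed = solid | (overlap & (phi2 > phi1))
```

    In the paper the repair is instead driven by an active-contour-style variational formulation, which also handles vacuum regions where both functions turn negative due to numerical diffusion.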

  13. The Level of Vision Necessary for Competitive Performance in Rifle Shooting: Setting the Standards for Paralympic Shooting with Vision Impairment.

    PubMed

    Allen, Peter M; Latham, Keziah; Mann, David L; Ravensbergen, Rianne H J C; Myint, Joy

    2016-01-01

    The aim of this study was to investigate the level of vision impairment (VI) that would reduce performance in shooting, in order to guide development of entry criteria for visually impaired (VI) shooting. Nineteen international-level shooters without VI took part in the study. Participants shot an air rifle, while standing, toward a regulation target placed at the end of a 10 m shooting range. Cambridge simulation glasses were used to simulate six different levels of VI. Visual acuity (VA) and contrast sensitivity (CS) were assessed along with shooting performance in each of seven conditions of simulated impairment and compared to that with habitual vision. Shooting performance was evaluated by calculating each individual's average score in every level of simulated VI and normalizing this score by expressing it as a percentage of the baseline performance achieved with habitual vision. Receiver Operating Characteristic curves were constructed to evaluate the ability of different VA and CS cut-off criteria to appropriately classify these athletes as achieving 'expected' or 'below expected' shooting results based on their performance with different levels of VA and CS. Shooting performance remained relatively unaffected by mild decreases in VA and CS, but quickly deteriorated with more moderate losses. The ability of visual function measurements to classify shooting performance was good, with 78% of performances appropriately classified using a cut-off of 0.53 logMAR and 74% appropriately classified using a cut-off of 0.83 logCS. The current inclusion criterion for VI shooting (1.0 logMAR) is conservative, maximizing the chance of including only those with an impairment that does impact performance, but potentially excluding some who do have a genuine impairment in the sport. A lower level of impairment would include more athletes who do have a genuine impairment but would potentially include those who do not actually have an impairment that impacts performance in the sport.
An impairment to CS could impact performance in the sport and might be considered in determining eligibility to take part in VI competition.
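
    The cut-off analysis can be pictured with a small sketch (hypothetical numbers, not the study's data): predict "impaired" when acuity is worse than the candidate cut-off, and score the prediction against whether shooting performance actually fell below expectation.

```python
# Hypothetical (logMAR acuity, shooting score as % of habitual baseline)
# pairs; scores below 80% of baseline count as "below expected".
data = [(0.10, 98.0), (0.30, 95.0), (0.50, 88.0),
        (0.60, 72.0), (0.80, 60.0), (1.00, 41.0)]

CUTOFF_LOGMAR = 0.53       # candidate criterion from the abstract
BELOW_EXPECTED = 80.0      # assumed performance threshold

def correctly_classified(logmar, score_pct):
    """True when the cut-off's prediction matches observed performance."""
    predicted_impaired = logmar > CUTOFF_LOGMAR  # higher logMAR = worse acuity
    actually_below = score_pct < BELOW_EXPECTED
    return predicted_impaired == actually_below

accuracy = sum(correctly_classified(a, s) for a, s in data) / len(data)
```

    Repeating this over many candidate cut-offs and trading sensitivity against specificity is what the Receiver Operating Characteristic analysis in the study formalizes.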

  15. Modeling of pilot's visual behavior for low-level flight

    NASA Astrophysics Data System (ADS)

    Schulte, Axel; Onken, Reiner

    1995-06-01

    Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper approaches the problem by modeling the pilot's visual behavior, founded upon the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment. The pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, grounded in the preflight planning process, as opposed to skill-based processes. The paper presents a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system built on fuzzy evaluation of terrain features to determine the landmarks used by pilots. A computer implementation of the model is shown to select the same features that trained pilots preferred.
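
    A fuzzy evaluation of terrain features might look like the following sketch (feature names, attributes, and weights are all invented for illustration; the paper's knowledge base is far richer):

```python
# Each candidate landmark gets fuzzy membership values in [0, 1] for a few
# attributes; overall suitability is a simple weighted combination.
features = {
    "church tower": {"visibility": 0.9, "uniqueness": 0.8, "proximity": 0.6},
    "forest edge":  {"visibility": 0.7, "uniqueness": 0.3, "proximity": 0.9},
    "small pond":   {"visibility": 0.4, "uniqueness": 0.6, "proximity": 0.5},
}
WEIGHTS = {"visibility": 0.5, "uniqueness": 0.3, "proximity": 0.2}

def suitability(attrs):
    """Weighted fuzzy score of one terrain feature."""
    return sum(WEIGHTS[k] * attrs[k] for k in WEIGHTS)

# Pick the landmark a (hypothetical) pilot model would fixate on.
best = max(features, key=lambda name: suitability(features[name]))
```

    Validating such a model then amounts to checking that the highest-scoring features match the ones trained pilots actually fixate.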

  16. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active-vision-based reverse engineering approach to extract three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems, improving the accuracy of 3D teeth models and, at the same time, the quality of the construction units, to the benefit of patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented for different digitized teeth models.
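
    Reconstruction from sliced silhouettes belongs to the shape-from-silhouette family; a toy 2D analogue (our own illustration under assumed geometry, not the paper's pipeline) carves a block of cells down to those consistent with every view's silhouette:

```python
import numpy as np

# Start from a solid block of cells and keep only those whose projections
# fall inside the object's silhouette in each of two orthogonal views.
grid = np.ones((8, 8), dtype=bool)

sil_rows = np.zeros(8, dtype=bool); sil_rows[2:6] = True  # side-view silhouette
sil_cols = np.zeros(8, dtype=bool); sil_cols[3:7] = True  # top-view silhouette

carved = grid & sil_rows[:, None] & sil_cols[None, :]
```

    The real system works with many sliced silhouettes of a physical tooth model, but the carving principle is the same: each additional silhouette can only remove material, never add it.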

  17. The effect of font size and type on reading performance with Arabic words in normally sighted and simulated cataract subjects.

    PubMed

    Alotaibi, Abdullah Z

    2007-05-01

    Previous investigations have shown that reading is the most common functional problem reported by patients at a low vision practice. While there have been studies investigating the effect of fonts for normal and low vision patients in English, no such study has been carried out in Arabic. Additionally, there has been no investigation into the optimum print sizes or fonts that should be used in Arabic books and leaflets for low vision patients. Arabic sentences were read by 100 normally sighted volunteers with and without simulated cataract. Subjects read two font types (Times New Roman and Courier) in three different sizes (N8, N10 and N12), and were asked to read the sentences aloud. Reading speed was calculated as the number of words read divided by the time taken, while reading rate was calculated as the number of words read correctly divided by the time taken. Reading performance of both normally sighted and simulated visually impaired subjects improved as print size increased. There was no significant difference in reading performance between the two fonts at small print sizes; however, the reading rate improved more with Times New Roman as print size increased. The results suggest that the use of N12 print in Times New Roman enhances reading performance in normally sighted and simulated cataract subjects.
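
    The two measures defined in the abstract reduce to simple formulas; a minimal sketch with made-up numbers:

```python
def reading_speed(words_read, seconds):
    """Words per minute: all words attempted divided by the time taken."""
    return words_read / seconds * 60.0

def reading_rate(words_correct, seconds):
    """Words per minute counting only correctly read words."""
    return words_correct / seconds * 60.0

# e.g. 60 words attempted, 57 read correctly, in 30 seconds:
speed = reading_speed(60, 30)  # 120.0 wpm
rate = reading_rate(57, 30)    # 114.0 wpm
```

    Reading rate is never higher than reading speed, and the gap between them grows with the error rate.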

  18. Preliminary Results Obtained in Integrated Safety Analysis of NASA Aviation Safety Program Technologies

    NASA Technical Reports Server (NTRS)

    Reveley, Mary S.

    2003-01-01

    The goal of the NASA Aviation Safety Program (AvSP) is to develop and demonstrate technologies that contribute to a reduction in the aviation fatal accident rate by a factor of 5 by the year 2007 and by a factor of 10 by the year 2022. Integrated safety analysis of day-to-day operations and risks within those operations will provide an understanding of the Aviation Safety Program portfolio. Safety benefits analyses are currently being conducted. Preliminary results for the Synthetic Vision Systems (SVS) and Weather Accident Prevention (WxAP) projects of the AvSP have been completed by the Logistics Management Institute under a contract with the NASA Glenn Research Center. These analyses include both a reliability analysis and a computer simulation model. The integrated safety analysis method comprises two principal components: a reliability model and a simulation model. In the reliability model, the results indicate how different technologies and systems will perform in normal, degraded, and failed modes of operation. In the simulation, an operational scenario is modeled. The primary purpose of the SVS project is to improve safety by providing visual-flightlike situation awareness during instrument conditions. The current analyses are an estimate of the benefits of SVS in avoiding controlled flight into terrain. The scenario modeled has an aircraft flying directly toward a terrain feature. When the flight crew determines that the aircraft is headed toward an obstruction, the aircraft executes a level turn at speed. The simulation is ended when the aircraft completes the turn.

  19. Network and Atomistic Simulations Unveil the Structural Determinants of Mutations Linked to Retinal Diseases

    PubMed Central

    Mariani, Simona; Dell'Orco, Daniele; Felline, Angelo; Raimondi, Francesco; Fanelli, Francesca

    2013-01-01

    A number of incurable retinal diseases causing vision impairments derive from alterations in visual phototransduction. Unraveling the structural determinants of even monogenic retinal diseases would require network-centered approaches combined with atomistic simulations. The transducin G38D mutant associated with the Nougaret Congenital Night Blindness (NCNB) was thoroughly investigated by both mathematical modeling of visual phototransduction and atomistic simulations on the major targets of the mutational effect. Mathematical modeling, in line with electrophysiological recordings, indicates reduction of phosphodiesterase 6 (PDE) recognition and activation as the main determinants of the pathological phenotype. Sub-microsecond molecular dynamics (MD) simulations coupled with Functional Mode Analysis improve the resolution of information, showing that such impairment is likely due to disruption of the PDEγ binding cavity in transducin. Protein Structure Network analyses additionally suggest that the observed slight reduction of the RGS9-catalyzed GTPase activity of transducin depends on perturbed communication between RGS9 and GTP binding site. These findings provide insights into the structural fundamentals of abnormal functioning of visual phototransduction caused by a missense mutation in one component of the signaling network. This combination of network-centered modeling with atomistic simulations represents a paradigm for future studies aimed at thoroughly deciphering the structural determinants of genetic retinal diseases. Analogous approaches are suitable to unveil the mechanism of information transfer in any signaling network either in physiological or pathological conditions. PMID:24009494

  20. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through the fusion of inertial and vision sensors. Due to the low sampling rates of web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the algorithms developed for calibrating the two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
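
    The predict-correct structure described above can be illustrated with a minimal one-dimensional linear Kalman filter: high-rate inertial samples drive the prediction, and low-rate vision position fixes drive the correction. This is a sketch of the idea, not the authors' extended Kalman filter; the noise levels q and r_vis are illustrative.

```python
import numpy as np

def fuse_inertial_vision(accels, dt, vision_meas, q=1e-3, r_vis=1e-2):
    """1-D Kalman filter fusing high-rate accelerations with low-rate
    vision position fixes.

    accels      : one acceleration sample per time step
    vision_meas : dict {step_index: measured position}
    Returns the position estimate at every step.
    """
    x = np.zeros(2)                  # state: [position, velocity]
    P = np.eye(2)                    # state covariance
    F = np.array([[1, dt], [0, 1]])  # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])  # acceleration input
    H = np.array([[1.0, 0.0]])       # vision observes position only
    Q = q * np.eye(2)
    traj = []
    for k, a in enumerate(accels):
        x = F @ x + B * a            # inertial prediction (every step)
        P = F @ P @ F.T + Q
        if k in vision_meas:         # vision correction (sparse steps)
            y = vision_meas[k] - H @ x          # innovation
            S = H @ P @ H.T + r_vis             # innovation covariance
            K = (P @ H.T) / S                   # Kalman gain
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        traj.append(x[0])
    return traj
```

    Without the vision corrections, the double-integrated inertial estimate drifts without bound; each fix resets the accumulated error, which is the essence of the complementary fusion the paper describes.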

  1. Investigation of two methods to quantify noise in digital images based on the perception of the human eye

    NASA Astrophysics Data System (ADS)

    Kleinmann, Johanna; Wueller, Dietmar

    2007-01-01

    Since the signal-to-noise measuring method standardized in the normative part of ISO 15739:2002(E) [1] does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated that may quantify noise perception in a physiological manner: (1) the visual noise measurement model proposed by Hung et al. [2] (described in the informative annex of ISO 15739:2002 [1]), which simulates the process of human vision by using the opponent space and contrast sensitivity functions, and determines a so-called visual noise value in the CIE L*u*v* 1976 colour space; and (2) the S-CIELab model with the CIEDE2000 colour difference proposed by Fairchild et al. [3], which simulates human vision in approximately the same way as Hung et al. [2] but afterwards performs an image comparison based on CIEDE2000. With a psychophysical experiment based on the just noticeable difference (JND), threshold images were defined with which the two approaches were tested. The assumption is that if a method is valid, the different threshold images should receive the same 'noise value'. The visual noise measurement model yields similar visual noise values for all the threshold images; the method is therefore reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can also be used on images with spatial content. The S-CIELab model likewise yields similar colour-difference values for the set of threshold images, but with one limitation: for images that contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
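
    The core of a visual-noise-style metric is a weighted spread of pixel values about the patch mean in a perceptual color space. A toy sketch under stated assumptions: the CSF filtering and opponent-space steps of the full ISO 15739 model are omitted, and the channel weights are illustrative, not the standard's coefficients.

```python
import numpy as np

def patch_noise_value(lab_patch, weights=(1.0, 0.5, 0.5)):
    """Simplified scalar noise value for a nominally uniform patch given
    in a perceptual space such as CIELAB or CIE L*u*v*: the weighted sum
    of per-channel standard deviations about the patch mean.
    """
    lab = np.asarray(lab_patch, dtype=float)  # shape (H, W, 3)
    sd = lab.reshape(-1, 3).std(axis=0)       # per-channel std deviation
    return float(np.dot(weights, sd))
```

    Two threshold images that are equally noticeable to observers should, if the metric is valid, map to approximately the same scalar value, which is exactly the test the study performs with its JND image set.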

  2. Image formation simulation for computer-aided inspection planning of machine vision systems

    NASA Astrophysics Data System (ADS)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented along with a versatile two-robot setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real-time graphics and high-quality off-line rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real-time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is the dependence of simulation quality on how the optical surface properties of the object to be inspected are measured, modeled, and parameterized. The applicability to real-world problems is demonstrated using the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results of synthetic and real images are presented.

  3. Infrared imagery acquisition process supporting simulation and real image training

    NASA Astrophysics Data System (ADS)

    O'Connor, John

    2012-05-01

    The increasing use of infrared sensors requires development of advanced infrared training and simulation tools to meet current Warfighter needs. In order to prepare the force, a challenge exists for training and simulation images to be both realistic and consistent with each other to be effective and avoid negative training. The US Army Night Vision and Electronic Sensors Directorate has corrected this deficiency by developing and implementing infrared image collection methods that meet the needs of both real image trainers and real-time simulations. The author presents innovative methods for collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for US Army, and USMC Recognition of Combat Vehicles (ROC-V) real image combat ID training and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.

  4. Performance, Cost, and Financial Parameters of Geothermal District Heating Systems for Market Penetration Modeling under Various Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckers, Koenraad J; Young, Katherine R

    Geothermal district heating (GDH) systems have limited penetration in the U.S., with an estimated installed capacity of only 100 MWth for a total of 21 sites. We see higher deployment in other regions, for example, in Europe with an installed capacity of more than 4,700 MWth for 257 GDH sites. The U.S. Department of Energy Geothermal Vision (GeoVision) Study is currently looking at the potential to increase the deployment in the U.S. and to understand the impact of this increased deployment. This paper reviews 31 performance, cost, and financial parameters as input for numerical simulations describing GDH system deployment in support of the GeoVision effort. The focus is on GDH systems using hydrothermal and Enhanced Geothermal System resources in the U.S.; ground-source heat pumps and heat-to-electricity conversion technology were excluded. Parameters investigated include 1) capital and operation and maintenance costs for both subsurface and surface equipment; 2) performance factors such as resource recovery factors, well flow rates, and system efficiencies; and 3) financial parameters such as inflation, interest, and tax rates. Current values as well as potential future improved values under various scenarios are presented. Sources of data considered include academic and popular literature, software tools such as GETEM and GEOPHIRES, industry interviews, and analysis conducted by other task forces for the GeoVision Study, e.g., on the drilling costs and reservoir performance.

  5. Beyond the cockpit: The visual world as a flight instrument

    NASA Technical Reports Server (NTRS)

    Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.

    1992-01-01

    The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).

  6. The Iowa Model for Pediatric Low Vision Services.

    ERIC Educational Resources Information Center

    Wilkinson, Mark E.; Stewart, Ian; Trantham, Carole S.

    2000-01-01

    This article describes the evolution of Iowa's model of low vision care for students with visual impairments. It reviews the benefits of a transdisciplinary team approach to providing low vision services for children with visual impairments, including a decrease in the number of students requiring large-print materials and related costs. (Contains…

  7. Robust Spatial Autoregressive Modeling for Hardwood Log Inspection

    Treesearch

    Dongping Zhu; A.A. Beex

    1994-01-01

    We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...

  8. Pathway Concepts Experiment for Head-Down Synthetic Vision Displays

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2004-01-01

    Eight 757 commercial airline captains flew 22 approaches using the Reno Sparks 16R Visual Arrival under simulated Category I conditions. Approaches were flown using a head-down synthetic vision display to evaluate four tunnel ("minimal", "box", "dynamic pathway", "dynamic crow's feet") and three guidance ("ball", "tadpole", "follow-me aircraft") concepts and compare their efficacy to a baseline condition (i.e., no tunnel, ball guidance). The results showed that the tunnel concepts significantly improved pilot performance and situation awareness and lowered workload compared to the baseline condition. The dynamic crow's feet tunnel and follow-me aircraft guidance concepts were found to be the best candidates for future synthetic vision head-down displays. These results are discussed with implications for synthetic vision display design and future research.

  9. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.

  10. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.

    PubMed

    Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish

    2015-01-01

    Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
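
    The conversion above can be caricatured in a few lines: a sequence of frames (for instance a static image shifted to mimic saccades) is differenced in the log-intensity domain and thresholded into DVS-style events. A real Neuromorphic Vision sensor responds asynchronously per pixel; this dense per-frame approximation is purely illustrative and is not the paper's pan-tilt recording method.

```python
import numpy as np

def frames_to_events(frames, threshold=0.1):
    """Toy frame-sequence to event-stream conversion.

    Returns a list of (t, x, y, polarity) tuples, one event per pixel
    whose log intensity changed by at least `threshold` between frames.
    """
    events = []
    prev = np.log1p(frames[0].astype(float))
    for t, frame in enumerate(frames[1:], start=1):
        cur = np.log1p(frame.astype(float))
        diff = cur - prev
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
        prev = cur
    return events
```

    Moving the sensor over a static scene, as the authors do, guarantees that every event is caused by genuine relative motion rather than by monitor refresh artifacts.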

  11. Building a stakeholder's vision of an offshore wind-farm project: A group modeling approach.

    PubMed

    Château, Pierre-Alexandre; Chang, Yang-Chi; Chen, Hsin; Ko, Tsung-Ting

    2012-03-15

    This paper describes a Group Model Building (GMB) initiative that was designed to discuss the various potential effects that an offshore wind-farm may have on its local ecology and socioeconomic development. The representatives of various organizations in the study area, Lu-Kang, Taiwan, have held several meetings, and structured debates have been organized to promote the emergence of a consensual view on the main issues and their implications. A System Dynamics (SD) model has been built and corrected iteratively with the participants through the GMB process. The diverse interests within the group led the process toward the design of multifunctional wind-farms with different modalities. The scenario analyses, using the SD model under various policies, including no wind-farm policy, objectively articulates the vision of the local stakeholders. The results of the SD simulations show that the multifunctional wind-farms may have superior economic effects and the larger wind-farms with bird corridors could reduce ecological impact. However, the participants of the modeling process did not appreciate any type of offshore wind-farm development when considering all of the identified key factors of social acceptance. The insight gained from the study can provide valuable information to actualize feasible strategies for the green energy technique to meet local expectations. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. The impact of loupes and microscopes on vision in endodontics.

    PubMed

    Perrin, P; Neuhaus, K W; Lussi, A

    2014-05-01

    To report on an intraradicular visual test in a simulated clinical setting under different optical conditions. Miniaturized visual tests with E-optotypes (bar distance from 0.01 to 0.05 mm) were fixed inside the root canal system of an extracted maxillary molar at different locations: at the orifice, a depth of 5 mm and the apex. The tooth was mounted in a phantom head for a simulated clinical setting. Unaided vision was compared with Galilean loupes (2.5× magnification) with integrated light source and an operating microscope (6× magnification). The influence of the dentists' age within two groups was evaluated: <40 years (n = 9) and ≥40 years (n = 15). Some younger dentists were able to identify the E-optotypes at the orifice, but otherwise, natural vision did not reveal any measurable result. With Galilean loupes, the younger dentists <40 years could see a 0.05 mm structure at the root canal orifice, in contrast to the older group ≥40 years. Only the microscope allowed the observation of structures inside the root canal, independent of age. Unaided vision and Galilean loupes with an integrated light source could not provide any measurable vision inside the root canal, but younger dentists <40 years could detect with Galilean loupes a canal orifice corresponding to the tip of the smallest endodontic instruments. Dentists over 40 years of age were dependent on the microscope to inspect the root canal system. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  13. Practical color vision tests for air traffic control applicants: en route center and terminal facilities.

    PubMed

    Mertens, H W; Milburn, N J; Collins, W E

    2000-12-01

    Two practical color vision tests were developed and validated for use in screening Air Traffic Control Specialist (ATCS) applicants for work at en route center or terminal facilities. The development of the tests involved careful reproduction/simulation of color-coded materials from the most demanding, safety-critical color task performed in each type of facility. The tests were evaluated using 106 subjects with normal color vision and 85 with color vision deficiency. The en route center test, named the Flight Progress Strips Test (FPST), required the identification of critical red/black coding in computer printing and handwriting on flight progress strips. The terminal option test, named the Aviation Lights Test (ALT), simulated red/green/white aircraft lights that must be identified in night ATC tower operations. Color-coding is a non-redundant source of safety-critical information in both tasks. The FPST was validated by direct comparison of responses to strip reproductions with responses to the original flight progress strips and a set of strips selected independently. Validity was high; Kappa = 0.91 with original strips as the validation criterion and 0.86 with different strips. The light point stimuli of the ALT were validated physically with a spectroradiometer. The reliabilities of the FPST and ALT were estimated with Cronbach's alpha as 0.93 and 0.98, respectively. The high job-relevance, validity, and reliability of these tests increases the effectiveness and fairness of ATCS color vision testing.
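
    The reliability figures cited above come from Cronbach's alpha, which compares the summed variance of individual test items to the variance of total scores. A minimal sketch of the computation; the score matrix here is illustrative, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    """
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]                          # number of items
    item_var = X.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)   # variance of examinee totals
    return k / (k - 1) * (1 - item_var / total_var)
```

    Values near 1.0, like the 0.93 and 0.98 reported for the FPST and ALT, indicate that the individual items rank examinees almost identically, i.e. high internal consistency.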

  14. Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.

    PubMed

    van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline

    2010-11-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions on the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions on future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an apt response to a recent call for a less speculative ethics of technology, as it encourages discussion of the quality of positive and negative visions of the future of robotics.

  15. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.
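
    The scale cue mentioned above (objects grow in the image as they are approached) reduces, in the simplest case, to the pinhole-camera relation Z = f·W/w. This is the textbook relation behind tracking-scale range estimation, not the paper's specific algorithm; all numbers below are illustrative.

```python
def distance_from_scale(known_width_m, focal_px, pixel_width):
    """Pinhole range estimate: an object of known physical width W (m)
    imaged w pixels wide by a camera of focal length f (px) lies at
    range Z = f * W / w."""
    return focal_px * known_width_m / pixel_width

# As a landmark is approached its image scale grows, and the estimated
# range shrinks in inverse proportion:
z_far = distance_from_scale(known_width_m=1.5, focal_px=800.0, pixel_width=60.0)
z_near = distance_from_scale(known_width_m=1.5, focal_px=800.0, pixel_width=120.0)
```

    A single calibrated camera therefore suffices for coarse range-to-landmark estimates, although on rough terrain the unpredictable orientation changes noted in the abstract make the apparent width a noisy measurement.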

  16. Component-based target recognition inspired by human vision

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2009-05-01

    In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars (no further information) in a picture, you may build a virtual image in mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates the human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the testing image are extracted using a difference of Gaussian (DOG) model that simulates the spatiotemporal response in the visual process. A phase correlation matching algorithm is then applied to match the templates with the testing edge image. If all key component templates are matched with the examined object, then this object is recognized as the target. Besides the recognition accuracy, we also investigate whether this method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
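
    The two processing stages named above, DOG edge extraction and phase correlation matching, can be sketched with FFTs. This is a generic sketch, not the authors' implementation; the Gaussian widths and image sizes are illustrative.

```python
import numpy as np

def dog_edges(img, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians band-pass map via frequency-domain
    Gaussian blurs (spatial sigma maps to exp(-2 pi^2 sigma^2 f^2))."""
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r2 = fy**2 + fx**2
    def blur(sigma):
        return np.real(np.fft.ifft2(f * np.exp(-2 * (np.pi * sigma)**2 * r2)))
    return blur(sigma1) - blur(sigma2)

def phase_correlation_shift(a, b):
    """Integer translation mapping a onto b, from the peak of the
    inverse FFT of the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12      # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    return np.unravel_index(np.argmax(corr), corr.shape)  # (dy, dx) mod size
```

    In a component-based matcher, a sharp, dominant correlation peak between a component template and the edge image signals a match; a flat correlation surface signals its absence.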

  17. Property-driven functional verification technique for high-speed vision system-on-chip processor

    NASA Astrophysics Data System (ADS)

    Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2017-04-01

    The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. This vision chip verification complexity is also related to the fact that in most vision chip design cycles, extensive efforts are focused on how to optimize chip metrics such as performance, power, and area. Design functional verification is not explicitly considered at an earlier stage, at which the most sound decisions are made. In this paper, we propose a semi-automatic property-driven verification technique. The implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can improve verification efficiency by up to 20% for a complex vision chip design while reducing the simulation and debugging overheads.

  18. Vision-based navigation in a dynamic environment for virtual human

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu

    2004-06-01

    Intelligent virtual human is widely required in computer games, ergonomics software, virtual environment and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. This behavior model can be divided into three modules: vision, global planning and local planning. Vision is the only channel for smart virtual actor to get information from the outside world. Then, the global and local planning module use A* and D* algorithm to find a way for virtual human in a dynamic environment. Finally, the experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
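
    The global-planning module described above rests on classic grid search. A minimal A* sketch on a 4-connected occupancy grid with a Manhattan heuristic; this is the generic algorithm, not the authors' exact A*/D* implementation, and the grid below is illustrative.

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 4-connected grid (1 = blocked, 0 = free).
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if (nr, nc) not in seen:
                    heapq.heappush(
                        open_set,
                        (g + 1 + h((nr, nc)), g + 1, (nr, nc), [*path, (nr, nc)]),
                    )
    return None
```

    D*, used by the local planner, extends this idea by repairing the previous solution incrementally when the vision module reports that an obstacle has appeared or moved, rather than replanning from scratch.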

  19. Enhanced/synthetic vision and head-worn display technologies for terminal maneuvering area NextGen operations

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike

    2011-06-01

    NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as an adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughputs equivalent to those normally achieved during Visual Flight Rules (VFR) operations, with equivalent or better safety in all weather and visibility conditions including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to assess the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision systems concepts. The first experiment evaluated the use of a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. A second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD, under low-visibility conditions, equivalent to the out-the-window condition, under unlimited visibility, in terms of situational awareness (SA) and mental workload compared to a head-down enhanced vision system. There were no differences between the 3 display concepts in terms of traffic spacing and distance and the pilot decision-making to land or go-around. For the KORD experiment, the visibility condition was not a factor in pilots' ratings of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.

  20. Enhanced/Synthetic Vision and Head-Worn Display Technologies for Terminal Maneuvering Area NextGen Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzell, Lawrence J.; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike

    2011-01-01

    NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as an adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughputs equivalent to those normally achieved during Visual Flight Rules (VFR) operations, with equivalent or better safety in all weather and visibility conditions including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to assess the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision systems concepts. The first experiment evaluated the use of a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. A second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD, under low-visibility conditions, equivalent to the out-the-window condition, under unlimited visibility, in terms of situational awareness (SA) and mental workload compared to a head-down enhanced vision system. There were no differences between the 3 display concepts in terms of traffic spacing and distance and the pilot decision-making to land or go-around. For the KORD experiment, the visibility condition was not a factor in pilots' ratings of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.

  1. The 1988 Goddard Conference on Space Applications of Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Rash, James (Editor); Hughes, Peter (Editor)

    1988-01-01

    This publication comprises the papers presented at the 1988 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland on May 24, 1988. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: mission operations support, planning and scheduling; fault isolation/diagnosis; image processing and machine vision; data management; modeling and simulation; and development tools/methodologies.

  2. A sensory origin for color-word stroop effects in aging: simulating age-related changes in color-vision mimics age-related changes in Stroop.

    PubMed

    Ben-David, Boaz M; Schneider, Bruce A

    2010-11-01

    An increase in Stroop effects with age can be interpreted as reflecting age-related reductions in selective attention, cognitive slowing, or color vision. In the present study, 88 younger adults performed a Stroop test with two color-sets, saturated and desaturated, to simulate an age-related decrease in color perception. This color manipulation with younger adults was sufficient to produce an increase in Stroop effects that mimics age effects. We conclude that age-related changes in color perception can contribute to the differences in Stroop effects observed in aging. Finally, we suggest that clinical applications of Stroop take this factor into account.

  3. Two-Year Community: Implementing Vision and Change in a Community College Classroom

    ERIC Educational Resources Information Center

    Lysne, Steven; Miller, Brant

    2015-01-01

    The purpose of this article is to describe a model for teaching introductory biology coursework within the Vision and Change framework (American Association for the Advancement of Science, 2011). The intent of the new model is to transform instruction by adopting an active, student-centered, and inquiry-based pedagogy consistent with Vision and…

  4. Synthetic and Enhanced Vision Systems for NextGen (SEVS) Simulation and Flight Test Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Shelton, Kevin J.; Kramer, Lynda J.; Ellis, Kyle K.; Rehfeld, Sherri A.

    2012-01-01

    The Synthetic and Enhanced Vision Systems for NextGen (SEVS) simulation and flight tests are jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA). The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SEVS operational and system-level performance capabilities. Nine test flights (38 flight hours) were conducted over the summer and fall of 2011. The evaluations were flown in Gulfstream's G450 flight test aircraft outfitted with the SEVS technology under very low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 ft to 2400 ft visibility) into various airports from Louisiana to Maine. In-situ flight performance and subjective workload and acceptability data were collected in collaboration with ground simulation studies at LaRC's Research Flight Deck simulator.

  5. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  6. Vision correction via multi-layer pattern corneal surgery

    NASA Astrophysics Data System (ADS)

    Sun, Han-Yin; Wang, Hsiang-Chen; Yang, Shun-Fa

    2013-07-01

    With the rapid development of vision correction techniques, increasing numbers of people have undergone laser vision corrective surgery in recent years. The use of a laser scalpel instead of a traditional surgical knife reduces the size of the wound and quickens recovery after surgery. The primary objective of this article is to examine a corneal surgery technique for vision correction that ablates multi-layer swim-ring-shaped wave circles, studied through optical simulations using the Monte Carlo ray-tracing method. Presbyopia stems from the loss of flexibility of the crystalline lens due to aging of the eyeball. Diopter adjustment of a normal crystalline lens can reach 5 D; in the case of presbyopia, the adjustment was approximately 1 D, which made patients unable to see objects clearly at near distance. Corneal laser surgery with multi-layer swim-ring-shaped wave circles was performed, ablating multiple circles on the cornea to improve flexibility of the crystalline lens. Simulation results showed that the adjustment ability of the crystalline lens increased tremendously from 1 D to 4 D. The method was also used to compare the images displayed on the retina before and after the treatment. The results clearly indicated a significant improvement in presbyopia symptoms with the use of this technique.
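The diopter figures above map directly to near-point distances; a small sketch of that relation (assuming an emmetropic eye, so near point in meters is roughly the reciprocal of accommodative amplitude in diopters):

```python
# Sketch: near-point distance implied by accommodative amplitude,
# assuming an emmetropic eye (near point in meters = 1 / amplitude in diopters).
def near_point_m(accommodation_diopters: float) -> float:
    if accommodation_diopters <= 0:
        raise ValueError("accommodation must be positive")
    return 1.0 / accommodation_diopters

# 1 D (presbyopic) vs. 4 D (after the simulated treatment)
print(near_point_m(1.0))  # 1.0 m: near objects out of focus
print(near_point_m(4.0))  # 0.25 m: a comfortable reading distance
```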

  7. The use of films to simulate age-related declines in yellow vision.

    PubMed

    Yoshida, C A; Sakuraba, S

    1996-06-01

    One characteristic of normal age-related vision loss depends on the yellow intensity in the lens of the eye. (1) We investigated discrimination among seven intensities of yellow in 303 elderly people aged from their late 60s to early 90s. The results demonstrated that the failures of vision increase with age, and the losses depend on yellow intensity. (2) We obtained a yellow index (YI) from the different Y-intensity color charts used in (1) above, covering 12 kinds of marketable yellow films, and selected two kinds of films whose YI matches the original color charts, corresponding to 53% or 89% of Y intensity. (3) We then judged that all of these colors' xy-chromaticities, with or without the two films, were exactly on the unique-yellow line in the diagram, which means a pure yellow, not mixed. (4) These two films could therefore simulate the mid-level or high-level Y intensity, respectively, of age-related vision. (5) We analyzed changes of all kinds of colors (220) in xy-chromaticity diagrams and obtained mean changing distances of each chromaticity from its original. These data would be useful for architects or designers who design cities or buildings for use by the elderly.
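The "changing distances" in xy-chromaticity diagrams described in step (5) amount to Euclidean distances between chromaticity coordinates; a minimal sketch (the example coordinate values are hypothetical, not from the study):

```python
import math

# Euclidean shift between two CIE xy chromaticity coordinates -- the kind of
# distance measure described in step (5). Example coordinates are hypothetical.
def chromaticity_shift(xy_a, xy_b):
    return math.hypot(xy_a[0] - xy_b[0], xy_a[1] - xy_b[1])

# e.g. a surface color seen without vs. with a yellow simulation film
shift = chromaticity_shift((0.31, 0.33), (0.40, 0.41))
```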

  8. Impact of Oncoming Headlight Glare With Cataracts: A Pilot Study

    PubMed Central

    Hwang, Alex D.; Tuccar-Burak, Merve; Goldstein, Robert; Peli, Eli

    2018-01-01

    Purpose: Oncoming headlight glare (HLG) reduces the visibility of objects on the road and may affect the safety of nighttime driving. With cataracts, the impact of oncoming HLG is expected to be more severe. We used our custom HLG simulator in a driving simulator to measure the impact of HLG on pedestrian detection by normal vision subjects with simulated mild cataracts and by patients with real cataracts. Methods: Five normal vision subjects drove nighttime scenarios under two HLG conditions (with and without HLG: HLGY and HLGN, respectively), and three vision conditions (with plano lens, simulated mild cataract, and optically blurred clip-on). Mild cataract was simulated by applying a 0.8 Bangerter diffusion foil to clip-on plano lenses. The visual acuity with the optically blurred lenses was individually chosen to match the visual acuity with the simulated cataract clip-ons under HLGN. Each nighttime driving scenario contains 24 pedestrian encounters, encompassing four pedestrian types: walking along the left side of the road, walking along the right side of the road, crossing the road from left to right, and crossing the road from right to left. Pedestrian detection performance of five patients with mild real cataracts was measured using the same setup. The cataract patients were tested only in the HLGY and HLGN conditions. Participants' visual acuity and contrast sensitivity were also measured in the simulator with and without stationary HLG. Results: For normal vision subjects, both the presence of oncoming HLG and wearing the simulated cataract clip-on reduced pedestrian detection performance. The subjects performed worst in events where the pedestrian crossed from the left, followed by events where the pedestrian crossed from the right. 
Significant interactions between HLG condition and other factors were also found: (1) the impact of oncoming HLG with the simulated cataract clip-on was larger than with the plano lens clip-on, (2) the impact of oncoming HLG was larger with the optically blurred clip-on than with the plano lens clip-on, but smaller than with the simulated cataract clip-on, and (3) the impact was larger for the pedestrians that crossed from the left than those that crossed from the right, and for the pedestrians walking along the left side of the road than walking along the right side of the road, suggesting that the pedestrian proximity to the glare source contributed to the performance reduction. Under HLGN, almost no pedestrians were missed with the plano lens or the simulated cataract clip-on (0 and 0.5%, respectively), but under HLGY, the rate of pedestrian misses increased to 0.5 and 6%, respectively. With the optically blurred clip-on, the percentage of missed pedestrians under HLGN and HLGY did not change much (5% and 6%, respectively). Untimely response rates increased under HLGY with the plano lens and simulated cataract clip-ons, but the increase with the simulated cataract clip-on was significantly larger than with the plano lens clip-on. The contrast sensitivity with the simulated cataract clip-on was significantly degraded under HLGY. The visual acuity with the plano lens clip-on was significantly improved under HLGY, possibly due to pupil miosis. The impact of HLG measured for real cataract patients was similar to the impact on performance of normal vision subjects with simulated cataract clip-ons. Conclusion: Even with mild (simulated or real) cataracts, a substantial negative effect of oncoming HLG was measurable in the detection of crossing and walking-along pedestrians. The lowered pedestrian detection rates and longer response times with HLGY demonstrate a possible risk that oncoming HLG poses to patients driving with cataracts. PMID:29559933

  9. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
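Platforms like Cell Collective simulate logical (Boolean) network models, in which each component is updated from a rule over the states of its regulators. A minimal sketch of a synchronous update step (the four-node signal/feedback motif below is purely illustrative, not a model from the platform):

```python
# Illustrative synchronous Boolean network of the kind logical-modeling
# platforms such as Cell Collective simulate. The four-node motif
# (signal -> receptor -> response, with inhibitor feedback) is hypothetical.
def step(state, rules):
    # synchronous update: every node reads the *previous* state
    return {node: rule(state) for node, rule in rules.items()}

rules = {
    "signal":    lambda s: s["signal"],                          # external input, held fixed
    "receptor":  lambda s: s["signal"],                          # activated by the signal
    "response":  lambda s: s["receptor"] and not s["inhibitor"],
    "inhibitor": lambda s: s["response"],                        # negative feedback
}

state = {"signal": True, "receptor": False, "response": False, "inhibitor": False}
for _ in range(4):
    state = step(state, rules)   # the negative feedback makes "response" oscillate
```

"Breaking" the model, in the spirit described above, is a one-line change: fixing `"inhibitor"` to `True` silences the response permanently.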

  10. Vision Effects: A Critical Gap in Educational Leadership Research

    ERIC Educational Resources Information Center

    Kantabutra, Sooksan

    2010-01-01

    Purpose: Although leaders are widely believed to employ visions, little is known about what constitutes an "effective" vision, particularly in the higher education sector. This paper seeks to propose a research model for examining relationships between vision components and performance of higher education institutions, as measured by financial…

  11. A Model for Integrating Low Vision Services into Educational Programs.

    ERIC Educational Resources Information Center

    Jose, Randall T.; And Others

    1988-01-01

    A project integrating low-vision services into children's educational programs comprised four components: teacher training, functional vision evaluations for each child, a clinical examination by an optometrist, and follow-up visits with the optometrist to evaluate the prescribed low-vision aids. Educational implications of the project and project…

  12. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  13. Fusion of Synthetic and Enhanced Vision for All-Weather Commercial Aviation Operations

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence, III

    2007-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was not adversely impacted by the display concepts, although the addition of Enhanced Vision did not, by itself, improve runway incursion detection.

  14. Quantum vision in three dimensions

    NASA Astrophysics Data System (ADS)

    Roth, Yehuda

    We present four models for describing 3-D vision. Similar to the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of the selected states, as a tool for interpreting reality in accordance with the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. Technology: the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. Artificial intelligence: in the desire to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.

  15. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
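The convolution that gives these feedforward networks their name can be sketched in a few lines (single channel, 'valid' padding, stride 1, ReLU nonlinearity; a toy illustration, not a model of the visual hierarchy):

```python
# Toy single-channel 2-D convolution with ReLU: the core operation of the
# convolutional feedforward networks discussed above ('valid' padding, stride 1).
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(max(0.0, acc))   # ReLU nonlinearity
        out.append(row)
    return out

# a 1x2 edge-detecting kernel responds where intensity steps up,
# loosely analogous to an oriented edge detector in early vision
feature_map = conv2d([[0, 0, 1],
                      [0, 0, 1]], [[-1, 1]])
```

Stacking many such layers, with learned kernels, is what produces the hierarchical representations compared to primate visual areas.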

  16. Leveraging Simulation Against the F-16 Flying Training Gap

    DTIC Science & Technology

    2005-11-01

    must leverage emerging simulation technology into combined flight training to counter mission employment complexity created by technology itself...two or more of these stand-alone simulators creates a mission training center (MTC), which when further networked create distributed mission...operations (DMO). Ultimately, the grand operational vision of DMO is to interconnect non-collocated users creating a “virtual” joint training environment

  17. A Summary of Proceedings for the Advanced Deployable Day/Night Simulation Symposium

    DTIC Science & Technology

    2009-07-01

    initiated to design, develop, and deliver transportable visual simulations that jointly provide night-vision and high-resolution daylight capability. The...Deployable Day/Night Simulation (ADDNS) Technology Demonstration Project was initiated to design, develop, and deliver transportable visual...was Dr. Richard Wildes (York University); Mr. Vitaly Zholudev (Department of Computer Science, York University), Mr. X. Zhu (Neptec Design Group), and

  18. Force Protection via UGV-UAV Collaboration: Development of Control Law for Vision Based Target Tracking on SUAV

    DTIC Science & Technology

    2007-12-01

    Hardware-In-Loop, Piccolo, UAV, Unmanned Aerial Vehicle 16. PRICE CODE 17. SECURITY CLASSIFICATION OF REPORT...Maneuvering Target.......................... 35 C. HARDWARE-IN-LOOP SIMULATION............................................... 37 1. Hardware-In-Loop Setup...law as proposed in equation (23) is capable of tracking a maneuvering target. C. HARDWARE-IN-LOOP SIMULATION The intention of HIL simulation

  19. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    PubMed Central

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-01-01

    Abstract. Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres with varying brightness, and (2) high background tissue autofluorescence, by varying autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even in conditions with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models. PMID:25822954

  20. Interdisciplinary multisensory fusion: design lessons from professional architects

    NASA Astrophysics Data System (ADS)

    Geiger, Ray W.; Snell, J. T.

    1992-11-01

    Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. Human efficacy for innovation in architectural design has always reflected the projected perceptual vision of the designer vis-à-vis the hierarchical spirit of the design process. In pursuing better ways to build and/or design things, we have found surprising success in exploring certain more esoteric applications. One of those applications is the vision of an artistic approach in and around creative problem solving. Our research into vision and visual systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and interdisciplinary design practice. Discussion will cover areas of cognitive ergonomics, natural modeling sources, and an open architectural process of means and goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process. One hypothesis is that the kinematic simulation of perceived connections between hard and soft sciences, centering on the life sciences and life in general, has become a very effective foundation for design theory and application.

  1. Visual and haptic integration in the estimation of softness of deformable objects

    PubMed Central

    Cellini, Cristiano; Kaim, Lukas; Drewing, Knut

    2013-01-01

    Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
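The "optimal integration" benchmark the study tests against has a standard closed form: each sense is weighted by its reliability (inverse variance). A minimal sketch (the variance values below are hypothetical illustrations):

```python
# Reliability-weighted (statistically optimal) cue integration, the model the
# softness judgments were compared against. Reliability r = 1/variance.
def optimal_weights(var_vision: float, var_haptic: float):
    r_v, r_h = 1.0 / var_vision, 1.0 / var_haptic
    w_v = r_v / (r_v + r_h)          # visual weight grows as vision gets more reliable
    return w_v, 1.0 - w_v

def combined_variance(var_vision: float, var_haptic: float) -> float:
    # the optimally fused estimate is more reliable than either sense alone
    return 1.0 / (1.0 / var_vision + 1.0 / var_haptic)

# hypothetical example: vision twice as noisy as haptics -> visual weight 1/3
w_v, w_h = optimal_weights(2.0, 1.0)
```

The paper's finding is that the observed visual contribution (~35%) exceeded, and bisensory reliability fell short of, what this scheme predicts.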

  2. Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction

    PubMed Central

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
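The fusion step at the core of such a filter reduces, in the scalar case, to the standard Kalman update; a hedged one-dimensional sketch (values illustrative only — the paper's extended Kalman filter operates on full motion states with noise models identified by Allan Variance analysis):

```python
# One-dimensional Kalman update: fuse a drifting inertial prediction with a
# vision measurement. A scalar sketch, not the paper's full EKF.
def kalman_update(x_pred: float, p_pred: float, z: float, r_meas: float):
    k = p_pred / (p_pred + r_meas)   # Kalman gain: trust vision more when it is less noisy
    x = x_pred + k * (z - x_pred)    # corrected state, pulled toward the vision fix
    p = (1.0 - k) * p_pred           # posterior variance shrinks after fusion
    return x, p

# hypothetical numbers: inertial estimate has drifted to 1.2 m (variance 0.04);
# a slow but accurate vision fix reports 1.0 m (variance 0.01)
x, p = kalman_update(1.2, 0.04, 1.0, 0.01)   # x ~ 1.04, p ~ 0.008
```

Between vision fixes the inertial prediction runs at its own high rate; each vision update then corrects the accumulated drift, which is the complementary behavior the abstract describes.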

  3. Transforming revenue management.

    PubMed

    Silveria, Richard; Alliegro, Debra; Nudd, Steven

    2008-11-01

    Healthcare organizations that want to undertake a patient administrative/revenue management transformation should: Define the vision with underlying business objectives and key performance measures. Strategically partner with key vendors for business process development and technology design. Create a program organization and governance infrastructure. Develop a corporate design model that defines the standards for operationalizing the vision. Execute the vision through technology deployment and corporate design model implementation.

  4. Enhanced vision flight deck technology for commercial aircraft low-visibility surface operations

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J.; Norman, R. M.; Kramer, Lynda J.; Prinzel, Lawerence J.; Ellis, Kyle K.; Harrison, Stephanie J.; Comstock, J. R.

    2013-05-01

    NASA Langley Research Center and the FAA collaborated in an effort to evaluate the effect of Enhanced Vision (EV) technology display in a commercial flight deck during low visibility surface operations. Surface operations were simulated at the Memphis, TN (FAA identifier: KMEM) airfield during nighttime with 500 Runway Visual Range (RVR) in a high-fidelity, full-motion simulator. Ten commercial airline flight crews evaluated the efficacy of various EV display locations and parallax and minification effects. The research paper discusses qualitative and quantitative results of the simulation experiment, including the effect of EV display placement on visual attention, as measured by the use of non-obtrusive oculometry and pilot mental workload. The results demonstrated the potential of EV technology to enhance situation awareness which is dependent on the ease of access and location of the displays. Implications and future directions are discussed.

  5. Enhanced Vision Flight Deck Technology for Commercial Aircraft Low-Visibility Surface Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Norman, R. Michael; Kramer, Lynda J.; Prinzel, Lawrence J., III; Ellis, Kyle K. E.; Harrison, Stephanie J.; Comstock, J. Ray

    2013-01-01

    NASA Langley Research Center and the FAA collaborated in an effort to evaluate the effect of Enhanced Vision (EV) technology display in a commercial flight deck during low visibility surface operations. Surface operations were simulated at the Memphis, TN (FAA identifier: KMEM) airfield during nighttime with 500 Runway Visual Range (RVR) in a high-fidelity, full-motion simulator. Ten commercial airline flight crews evaluated the efficacy of various EV display locations and parallax and minification effects. The research paper discusses qualitative and quantitative results of the simulation experiment, including the effect of EV display placement on visual attention, as measured by the use of non-obtrusive oculometry and pilot mental workload. The results demonstrated the potential of EV technology to enhance situation awareness which is dependent on the ease of access and location of the displays. Implications and future directions are discussed.

  6. Processing of Lunar Soil Simulant for Space Exploration Applications

    NASA Technical Reports Server (NTRS)

    Sen, Subhayu; Ray, Chandra S.; Reddy, Ramana

    2005-01-01

    NASA's long-term vision for space exploration includes developing human habitats and conducting scientific investigations on planetary bodies, especially on the Moon and Mars. To reduce the level of up-mass, processing and utilization of planetary in-situ resources is recognized as an important element of this vision. Within this scope and context, we have undertaken a general effort aimed primarily at extracting and refining metals and at developing glass, glass-ceramic, or traditional ceramic-type materials using lunar soil simulants. In this paper we will present preliminary results of our effort on carbothermal reduction of oxides for elemental extraction and zone refining for obtaining high-purity metals. In addition, we will demonstrate the possibility of developing glasses from lunar soil simulant for fixing nuclear waste from potential nuclear power generators on planetary bodies. Compositional analysis, X-ray diffraction patterns, and differential thermal analysis of processed samples will be presented.

  7. Mechanisms, functions and ecology of colour vision in the honeybee.

    PubMed

    Hempel de Ibarra, N; Vorobyev, M; Menzel, R

    2014-06-01

    Research in the honeybee has laid the foundations for our understanding of insect colour vision. The trichromatic colour vision of honeybees shares fundamental properties with primate and human colour perception, such as colour constancy, colour opponency, segregation of colour and brightness coding. Laborious efforts to reconstruct the colour vision pathway in the honeybee have provided detailed descriptions of neural connectivity and the properties of photoreceptors and interneurons in the optic lobes of the bee brain. The modelling of colour perception advanced with the establishment of colour discrimination models that were based on experimental data, the Colour-Opponent Coding and Receptor Noise-Limited models, which are important tools for the quantitative assessment of bee colour vision and colour-guided behaviours. Major insights into the visual ecology of bees have been gained combining behavioural experiments and quantitative modelling, and asking how bee vision has influenced the evolution of flower colours and patterns. Recently research has focussed on the discrimination and categorisation of coloured patterns, colourful scenes and various other groupings of coloured stimuli, highlighting the bees' behavioural flexibility. The identification of perceptual mechanisms remains of fundamental importance for the interpretation of their learning strategies and performance in diverse experimental tasks.

  8. Design of direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging.

    PubMed

    Wang, Lei; Shao, Zhengzheng; Tang, Wusheng; Liu, Jiying; Nie, Qianwen; Jia, Hui; Dai, Suian; Zhu, Jubo; Li, Xiujian

    2017-10-20

    A direct-vision Amici prism is a desirable dispersive element for spectrometers and spectral imaging systems. In this paper, we focus on designing a direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging systems. We illustrate a designed structure, E48R/N-SF4/E48R, from which we obtain 13 deg of dispersion across the visible spectrum, equivalent to a 700 line pairs/mm grating. We construct a simulative spectral imaging system with the designed direct-vision cyclo-olefin-polymer double Amici prism in optical design software and compare its imaging performance to that of a glass double Amici prism in the same system. The spot-size RMS results demonstrate that the plastic prism can serve as well as its glass competitors and offers better spectral resolution.
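The direct-vision condition behind such a design can be illustrated with the thin-prism approximation: the deviations of the three elements cancel at a design wavelength while their dispersions do not. The indices below (E48R near 1.53, N-SF4 near 1.76) and the outer apex angle are rough assumptions for illustration, not the paper's design values:

```python
def thin_prism_deviation(n, apex_deg):
    """Small-angle deviation (degrees) of a thin prism of refractive index n."""
    return (n - 1.0) * apex_deg

def direct_vision_inner_apex(n_outer, n_inner, apex_outer_deg):
    """Inner apex angle cancelling the net deviation at the design wavelength:
    2*(n_outer - 1)*A_outer = (n_inner - 1)*A_inner."""
    return 2.0 * (n_outer - 1.0) * apex_outer_deg / (n_inner - 1.0)

# Hypothetical indices at the design wavelength (E48R ~ 1.53, N-SF4 ~ 1.76)
A_inner = direct_vision_inner_apex(1.53, 1.76, 20.0)
net = 2 * thin_prism_deviation(1.53, 20.0) - thin_prism_deviation(1.76, A_inner)
```

At other wavelengths the indices differ and the cancellation is no longer exact; that residual deviation is the useful dispersion of the prism.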

  9. Generation and use of human 3D-CAD models

    NASA Astrophysics Data System (ADS)

    Grotepass, Juergen; Speyer, Hartmut; Kaiser, Ralf

    2002-05-01

    Individualized products are one of the ten megatrends of the 21st century, with human modeling as the key issue for tomorrow's design and product development. The use of human modeling software for computer-based ergonomic simulations within the production process increases quality while reducing costs by 30-50 percent and shortening production time. This presentation focuses on the use of human 3D-CAD models for both the ergonomic design of working environments and made-to-measure garment production. Today, the entire production chain can be designed, and individualized models generated and analyzed, in 3D computer environments. Anthropometric design for ergonomics is matched to human needs, thus preserving health. Ergonomic simulation covers topics such as human vision, reachability, kinematics, force and comfort analysis, and international design capabilities. In Germany, more than 17 billion marks flow to other industries because clothes do not fit. Individual clothing tailored to the customer's preference means surplus value, pleasure and perfect fit. Body scanning technology is the key to the generation and use of human 3D-CAD models for both the ergonomic design of working environments and made-to-measure garment production.

  10. Relative advantages of dichromatic and trichromatic color vision in camouflage breaking.

    PubMed

    Troscianko, Jolyon; Wilson-Aggarwal, Jared; Griffiths, David; Spottiswoode, Claire N; Stevens, Martin

    2017-01-01

    There is huge diversity in visual systems and color discrimination abilities, thought to stem from an animal's ecology and life history. Many primate species maintain a polymorphism in color vision, whereby most individuals are dichromats but some females are trichromats, implying that selection sometimes favors dichromatic vision. Detecting camouflaged prey is thought to be a task where dichromatic individuals could have an advantage. However, previous work either has not been able to disentangle camouflage detection from other ecological or social explanations, or did not use biologically relevant cryptic stimuli to test this hypothesis under controlled conditions. Here, we used online "citizen science" games to test how quickly humans could detect cryptic birds (incubating nightjars) and eggs (of nightjars, plovers and coursers) under trichromatic and simulated dichromatic viewing conditions. Trichromats had an overall advantage, although there were significant differences in performance between viewing conditions. When searching for consistently shaped and patterned adult nightjars, simulated dichromats were more heavily influenced by the degree of pattern difference than were trichromats, and were poorer at detecting prey with inferior pattern and luminance camouflage. When searching for clutches of eggs-which were more variable in appearance and shape than the adult nightjars-the simulated dichromats learnt to detect the clutches faster, but were less sensitive to subtle luminance differences. These results suggest there are substantial differences in the cues available under viewing conditions that simulate different receptor types, and that these interact with the scene in complex ways to affect camouflage breaking.

  11. Relative advantages of dichromatic and trichromatic color vision in camouflage breaking

    PubMed Central

    Wilson-Aggarwal, Jared; Griffiths, David; Spottiswoode, Claire N.; Stevens, Martin

    2017-01-01

    There is huge diversity in visual systems and color discrimination abilities, thought to stem from an animal’s ecology and life history. Many primate species maintain a polymorphism in color vision, whereby most individuals are dichromats but some females are trichromats, implying that selection sometimes favors dichromatic vision. Detecting camouflaged prey is thought to be a task where dichromatic individuals could have an advantage. However, previous work either has not been able to disentangle camouflage detection from other ecological or social explanations, or did not use biologically relevant cryptic stimuli to test this hypothesis under controlled conditions. Here, we used online “citizen science” games to test how quickly humans could detect cryptic birds (incubating nightjars) and eggs (of nightjars, plovers and coursers) under trichromatic and simulated dichromatic viewing conditions. Trichromats had an overall advantage, although there were significant differences in performance between viewing conditions. When searching for consistently shaped and patterned adult nightjars, simulated dichromats were more heavily influenced by the degree of pattern difference than were trichromats, and were poorer at detecting prey with inferior pattern and luminance camouflage. When searching for clutches of eggs—which were more variable in appearance and shape than the adult nightjars—the simulated dichromats learnt to detect the clutches faster, but were less sensitive to subtle luminance differences. These results suggest there are substantial differences in the cues available under viewing conditions that simulate different receptor types, and that these interact with the scene in complex ways to affect camouflage breaking. PMID:29622920
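Simulated dichromatic viewing of the kind used in these experiments is commonly implemented by projecting cone responses onto a dichromat's reduced color surface. The following is a minimal Brettel-style sketch in LMS space; the anchor stimuli in the example are hypothetical placeholders, not the calibrated anchors a real simulation would use:

```python
def simulate_dichromacy(lms, missing, white, anchor):
    """Brettel-style dichromacy simulation in LMS space (sketch).

    The missing cone's response is replaced so that the stimulus lands on
    the plane through the origin spanned by `white` and `anchor`, moving
    along the missing-cone axis only. `white` and `anchor` are assumed LMS
    anchor stimuli (the values used below are illustrative).
    """
    # Plane normal = white x anchor (cross product)
    n = (white[1] * anchor[2] - white[2] * anchor[1],
         white[2] * anchor[0] - white[0] * anchor[2],
         white[0] * anchor[1] - white[1] * anchor[0])
    i = {"L": 0, "M": 1, "S": 2}[missing]
    j, k = [x for x in range(3) if x != i]
    out = list(lms)
    # Solve n . out = 0 for the missing component
    out[i] = -(n[j] * lms[j] + n[k] * lms[k]) / n[i]
    return tuple(out)
```

By construction the white point and the anchor stimulus are left unchanged, and for a simulated protanope ("L" missing) the M and S responses pass through untouched.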

  12. Vision-based flight control in the hawkmoth Hyles lineata

    PubMed Central

    Windsor, Shane P.; Bomphrey, Richard J.; Taylor, Graham K.

    2014-01-01

    Vision is a key sensory modality for flying insects, playing an important role in guidance, navigation and control. Here, we use a virtual-reality flight simulator to measure the optomotor responses of the hawkmoth Hyles lineata, and use a published linear time-invariant model of the flight dynamics to interpret the function of the measured responses in flight stabilization and control. We recorded the forces and moments produced during oscillation of the visual field in roll, pitch and yaw, varying the temporal frequency, amplitude or spatial frequency of the stimulus. The moths’ responses were strongly dependent upon contrast frequency, as expected if the optomotor system uses correlation-type motion detectors to sense self-motion. The flight dynamics model predicts that roll angle feedback is needed to stabilize the lateral dynamics, and that a combination of pitch angle and pitch rate feedback is most effective in stabilizing the longitudinal dynamics. The moths’ responses to roll and pitch stimuli coincided qualitatively with these functional predictions. The moths produced coupled roll and yaw moments in response to yaw stimuli, which could help to reduce the energetic cost of correcting heading. Our results emphasize the close relationship between physics and physiology in the stabilization of insect flight. PMID:24335557

  13. Vision-based flight control in the hawkmoth Hyles lineata.

    PubMed

    Windsor, Shane P; Bomphrey, Richard J; Taylor, Graham K

    2014-02-06

    Vision is a key sensory modality for flying insects, playing an important role in guidance, navigation and control. Here, we use a virtual-reality flight simulator to measure the optomotor responses of the hawkmoth Hyles lineata, and use a published linear time-invariant model of the flight dynamics to interpret the function of the measured responses in flight stabilization and control. We recorded the forces and moments produced during oscillation of the visual field in roll, pitch and yaw, varying the temporal frequency, amplitude or spatial frequency of the stimulus. The moths' responses were strongly dependent upon contrast frequency, as expected if the optomotor system uses correlation-type motion detectors to sense self-motion. The flight dynamics model predicts that roll angle feedback is needed to stabilize the lateral dynamics, and that a combination of pitch angle and pitch rate feedback is most effective in stabilizing the longitudinal dynamics. The moths' responses to roll and pitch stimuli coincided qualitatively with these functional predictions. The moths produced coupled roll and yaw moments in response to yaw stimuli, which could help to reduce the energetic cost of correcting heading. Our results emphasize the close relationship between physics and physiology in the stabilization of insect flight.
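The correlation-type motion detectors invoked to explain the contrast-frequency dependence can be sketched as a Hassenstein-Reichardt correlator: each photoreceptor signal is multiplied by a delayed copy of its neighbour's signal, and the two mirror-symmetric half-detectors are subtracted. A minimal discrete-time version:

```python
def reichardt_correlator(left, right, delay=1):
    """Hassenstein-Reichardt elementary motion detector (sketch).

    Correlates each photoreceptor signal with the delayed signal of its
    neighbour; the difference of the two mirror-symmetric half-detectors
    is positive for motion from `left` toward `right`.
    """
    out = []
    for t in range(delay, len(left)):
        half_a = left[t - delay] * right[t]   # preferred direction
        half_b = right[t - delay] * left[t]   # null direction
        out.append(half_a - half_b)
    return out
```

A brightness edge moving from the left receptor to the right one yields a positive summed output; motion in the opposite direction yields a negative one.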

  14. Model identification and vision-based H∞ position control of 6-DoF cable-driven parallel robots

    NASA Astrophysics Data System (ADS)

    Chellal, R.; Cuvillon, L.; Laroche, E.

    2017-04-01

    This paper presents methodologies for the identification and control of 6-degrees-of-freedom (6-DoF) cable-driven parallel robots (CDPRs). First, a two-step identification methodology is proposed to accurately estimate the kinematic parameters independently of, and prior to, the dynamic parameters of a physics-based model of CDPRs. Second, an original control scheme is developed, including a vision-based position controller tuned with the H∞ methodology and a cable tension distribution algorithm. The position is controlled in the operational space, making use of the end-effector pose measured by a motion-tracking system. A four-block H∞ design scheme with adjusted weighting filters ensures good trajectory tracking and disturbance rejection properties for the CDPR system, which is a nonlinear, coupled MIMO system with constrained states. The tension management algorithm generates control signals that maintain the cables under feasible tensions. The paper provides an extensive review of the available methods and presents an extension of one of them. The presented methodologies are evaluated in simulation and experimentally on a redundant 6-DoF INCA 6D CDPR with eight cables, equipped with a motion-tracking system.
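A cable tension distribution step of the general kind described, keeping a redundantly actuated robot's cables taut while producing the commanded wrench, can be sketched as a pseudoinverse solution shifted along the null space of the wrench matrix. This is a simple centering heuristic under an assumed convention (W t = -wrench), not the specific algorithm extended in the paper:

```python
import numpy as np

def distribute_tensions(W, wrench, t_min, t_max):
    """Cable tension distribution sketch for a redundant CDPR.

    Solves W @ t = -wrench for cable tensions t, then shifts the solution
    along the null space of the wrench matrix W toward the middle of the
    feasible tension interval [t_min, t_max]. The least-squares step along
    the null space is a common simple heuristic.
    """
    t_p = np.linalg.pinv(W) @ (-wrench)            # minimum-norm particular solution
    _, s, Vt = np.linalg.svd(W)
    null = Vt[np.sum(s > 1e-10):].T                # null-space basis of W
    t_mid = 0.5 * (t_min + t_max) * np.ones(W.shape[1])
    lam, *_ = np.linalg.lstsq(null, t_mid - t_p, rcond=None)
    return t_p + null @ lam                        # same wrench, centered tensions
```

For the 8-cable, 6-DoF robot of the paper the null space is two-dimensional, which is what gives the algorithm room to keep all tensions feasible.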

  15. Sustainable and Smart City Planning Using Spatial Data in Wallonia

    NASA Astrophysics Data System (ADS)

    Stephenne, N.; Beaumont, B.; Hallot, E.; Wolff, E.; Poelmans, L.; Baltus, C.

    2016-09-01

    Simulating population distribution and land-use changes in space and time offers opportunities for smart city planning. It provides policy makers with a holistic and dynamic vision of a fast-changing urban environment. Impacts of policies, such as environmental and health risks or mobility issues, can be assessed and the policies adapted accordingly. In this paper, we posit that "smart" city developments should be sustainable, dynamic and participative. The paper addresses these three smart objectives in the context of urban risk assessment in Wallonia, Belgium. The sustainable, dynamic and participative solution includes (i) land cover and land use mapping using remote sensing and GIS, (ii) population density mapping using dasymetric mapping, (iii) predictive modelling of land use changes and population dynamics and (iv) risk assessment. This comprehensive and long-term vision of the territory should help to draw up sustainable spatial planning policies, to adapt remote sensing acquisition, to update GIS data and to refine risk assessment from the regional to the city scale.
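Dasymetric mapping, item (ii) above, redistributes census counts over finer spatial units using ancillary land-cover data. A minimal sketch with hypothetical per-class weights (real weights are calibrated against reference data):

```python
def dasymetric_population(zone_pop, cells, weights):
    """Dasymetric mapping sketch: redistribute a zone's census population
    over its grid cells according to land-cover weights.

    zone_pop -- total census population of the zone
    cells    -- land-cover class of each grid cell in the zone
    weights  -- relative population density per land-cover class
                (the values used below are hypothetical)
    """
    w = [weights.get(c, 0.0) for c in cells]
    total = sum(w)
    if total == 0:
        return [0.0] * len(cells)
    # Pycnophylactic property: cell populations sum back to zone_pop
    return [zone_pop * wi / total for wi in w]
```

Uninhabitable classes (e.g. water) get weight zero, so the whole census count is concentrated in the cells where people can actually live.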

  16. Using Scenarios and Simulations to Plan Colleges

    ERIC Educational Resources Information Center

    McIntyre, Chuck

    2004-01-01

    Using a case study, this article describes a method by which higher education institutions construct and use multiple future scenarios and simulations to plan strategically: to create visions of their futures, chart broad directions (mission and goals), and select learning and delivery strategies so as to achieve those broad directions. The…

  17. Visions, Strategic Planning, and Quality--More than Hype.

    ERIC Educational Resources Information Center

    Kaufman, Roger

    1996-01-01

    Discusses the need to shift from the old models for organizational development to the new methods of quality management and continuous improvement, visions and visioning, and strategic planning, despite inappropriate criticisms they receive. (AEF)

  18. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase

    PubMed Central

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-01-01

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to assure the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions are common, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533

  19. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase.

    PubMed

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-09-10

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to assure the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions are common, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate.
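The consistency check at the heart of snapshot RAIM can be sketched as a least-squares residual test on the linearized GPS measurement model; adding vision-derived landmark measurements, as VA-RAIM does, amounts to appending rows to the geometry matrix. The thresholding below is simplified for illustration (in practice the threshold comes from a chi-square distribution with n - 4 degrees of freedom):

```python
import numpy as np

def raim_test(G, rho, threshold):
    """Snapshot RAIM consistency check (sketch).

    G    -- linearized geometry matrix (n x 4: line-of-sight components
            plus a clock column)
    rho  -- pseudorange residual vector about the linearization point
    Returns (test_statistic, alarm): the statistic is the sum of squared
    range residuals left after the least-squares position/clock fit.
    """
    x, *_ = np.linalg.lstsq(G, rho, rcond=None)   # position/clock estimate
    r = rho - G @ x                                # measurement residuals
    stat = float(r @ r)
    return stat, stat > threshold
```

With fewer than five measurements the residual vanishes identically, which is precisely the low-visibility availability problem that motivates adding landmark pseudo-measurements.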

  20. Evaluation of Fused Synthetic and Enhanced Vision Display Concepts for Low-Visibility Approach and Landing

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III; Wilz, Susan J.

    2009-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and their impact within a two-crew flight deck during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. Improvements in lateral path control performance were realized when the Head-Up Display concepts included a tunnel, independent of the imagery (enhanced vision or fusion of enhanced and synthetic vision) presented with it. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, of itself, provide an improvement in runway incursion detection without being specifically tailored for this application.

  1. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. The human brain appears to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our understanding of the brain: from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  2. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
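Variable-density sampling of the kind mentioned can be sketched by rejection sampling with an eccentricity-dependent acceptance probability. The 1/(1+r) density falloff used here is an arbitrary illustration, not the cone-mosaic growth model developed at Ames:

```python
import math
import random

def retina_sampling(n, exponent=1.0, seed=0):
    """Variable-density sampling array sketch: random sample positions on
    the unit disk whose density falls off with eccentricity r, loosely
    mimicking the foveal concentration of the retinal cone mosaic.
    The radial density ~ 1/(1+r)**exponent is an assumed illustration.
    """
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        r = math.hypot(x, y)
        # Accept with probability decreasing away from the center
        if r <= 1.0 and rng.random() < 1.0 / (1.0 + r) ** exponent:
            pts.append((x, y))
    return pts
```

Compared with uniform sampling of the disk (mean eccentricity about 0.667), the accepted points cluster toward the center, giving the fovea-like density gradient.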

  3. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
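The curvature analysis of the silhouette can be approximated numerically. The sketch below uses central finite differences on sampled contour points in place of the paper's B-spline parameterization; curvature extrema would then mark candidate boundaries between body parts:

```python
import math

def contour_curvature(points):
    """Discrete signed curvature along a closed contour (sketch).

    Approximates contour derivatives with central finite differences on
    the sampled silhouette points; kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^1.5.
    """
    n = len(points)
    curv = []
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        dx, dy = (x2 - x0) / 2.0, (y2 - y0) / 2.0        # first derivatives
        ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0    # second derivatives
        denom = (dx * dx + dy * dy) ** 1.5
        curv.append((dx * ddy - dy * ddx) / denom if denom else 0.0)
    return curv

# Sanity check: a circle of radius 5 has curvature 1/5 everywhere
circle = [(5 * math.cos(2 * math.pi * i / 100), 5 * math.sin(2 * math.pi * i / 100))
          for i in range(100)]
```

A spline fit (as in the paper) would smooth pixel noise before this step; the curvature formula itself is the same.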

  4. Psychophysical Vision Simulation of Diffractive Bifocal and Trifocal Intraocular Lenses

    PubMed Central

    Brezna, Wolfgang; Lux, Kirsten; Dragostinoff, Nikolaus; Krutzler, Christian; Plank, Nicole; Tobisch, Rainer; Boltz, Agnes; Garhöfer, Gerhard; Told, Reinhard; Witkowska, Katarzyna; Schmetterer, Leopold

    2016-01-01

    Purpose: The visual performance of monofocal, bifocal, and trifocal intraocular lenses was evaluated by human individuals using a vision simulator device. This allowed investigation of the visual impression after cataract surgery, without the need actually to implant the lenses. Methods: The randomized, double-masked, three-way cross-over study was conducted on 60 healthy male and female subjects aged between 18 and 35 years. Visual acuity (Early Treatment Diabetic Retinopathy Study; ETDRS) and contrast sensitivity tests (Pelli-Robson) under different lighting conditions (luminosities from 0.14–55 cd/m2, mesopic to photopic) were performed at different distances. Results: Visual acuity tests showed no difference for corrected distance visual acuity data of bi- and trifocal lens prototypes (P = 0.851), but better results for the trifocal than for the bifocal lenses at distance corrected intermediate (P = 0.021) and distance corrected near visual acuity (P = 0.044). Contrast sensitivity showed no differences between bifocal and trifocal lenses at the distant (P = 0.984) and at the near position (P = 0.925), but better results for the trifocal lens at the intermediate position (P = 0.043). Visual acuity and contrast sensitivity showed a strong dependence on luminosity (P < 0.001). Conclusions: At all investigated distances and all lighting conditions, the trifocal lens prototype often performed better, but never worse than the bifocal lens prototype. Translational Relevance: The vision simulator can fill the gap between preclinical lens development and implantation studies by providing information on the perceived vision quality after cataract surgery without implantation. This can reduce implantation risks and promotes the development of new lens concepts due to the cost-effective test procedure. PMID:27777828

  5. Project Magnify: Increasing Reading Skills in Students with Low Vision

    ERIC Educational Resources Information Center

    Farmer, Jeanie; Morse, Stephen E.

    2007-01-01

    Modeled after Project PAVE (Corn et al., 2003) in Tennessee, Project Magnify is designed to test the idea that students with low vision who use individually prescribed magnification devices for reading will perform as well as or better than students with low vision who use large-print reading materials. Sixteen students with low vision were…

  6. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information from the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple-maneuvering-target tracking, automatic mapping of environments and attitude estimation are some of the applications that will benefit from PSSV.

  7. Contrast Sensitivity With a Subretinal Prosthesis and Implications for Efficient Delivery of Visual Information

    PubMed Central

    Goetz, Georges; Smith, Richard; Lei, Xin; Galambos, Ludwig; Kamins, Theodore; Mathieson, Keith; Sher, Alexander; Palanker, Daniel

    2015-01-01

    Purpose: To evaluate the contrast sensitivity of a degenerate retina stimulated by a photovoltaic subretinal prosthesis, and assess the impact of low contrast sensitivity on transmission of visual information. Methods: We measure ex vivo the full-field contrast sensitivity of healthy rat retina stimulated with white light, and the contrast sensitivity of degenerate rat retina stimulated with a subretinal prosthesis at frequencies exceeding flicker fusion (>20 Hz). Effects of eye movements on retinal ganglion cell (RGC) activity are simulated using a linear–nonlinear model of the retina. Results: Retinal ganglion cells adapt to high frequency stimulation of constant intensity, and respond transiently to changes in illumination of the implant, exhibiting responses to ON-sets, OFF-sets, and both ON- and OFF-sets of light. The percentage of cells with an OFF response decreases with progression of the degeneration, indicating that OFF responses are likely mediated by photoreceptors. Prosthetic vision exhibits reduced contrast sensitivity and dynamic range, with 65% contrast changes required to elicit responses, as compared to the 3% (OFF) to 7% (ON) changes with visible light. The maximum number of action potentials elicited with prosthetic stimulation is at most half of its natural counterpart for the ON pathway. Our model predicts that for most visual scenes, contrast sensitivity of prosthetic vision is insufficient for triggering RGC activity by fixational eye movements. Conclusions: Contrast sensitivity of prosthetic vision is 10 times lower than normal, and dynamic range is two times below natural. Low contrast sensitivity and lack of OFF responses hamper delivery of visual information via a subretinal prosthesis. PMID:26540657
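The linear-nonlinear model used to simulate RGC activity consists of a linear (here, temporal) filter followed by a static rectifying nonlinearity. A minimal sketch with an illustrative biphasic kernel, which reproduces the transient response to an illumination step described above; the kernel and parameters are assumptions, not the fitted values of the study:

```python
def ln_response(stimulus, kernel, threshold=0.0, gain=1.0):
    """Linear-nonlinear (LN) model of a retinal ganglion cell (sketch).

    Convolves the stimulus with a temporal kernel, then applies a
    half-wave rectifying nonlinearity to produce a firing rate.
    """
    rates = []
    for t in range(len(kernel) - 1, len(stimulus)):
        drive = sum(kernel[i] * stimulus[t - i] for i in range(len(kernel)))
        rates.append(max(0.0, gain * (drive - threshold)))
    return rates

# A biphasic (differencing) kernel responds only at the luminance step
rates = ln_response([0.0, 0.0, 1.0, 1.0, 1.0], [1.0, -1.0])  # [0.0, 1.0, 0.0, 0.0]
```

The constant-intensity plateau drives no response, mirroring the adaptation to high-frequency stimulation of constant intensity reported in the abstract.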

  8. Machine-Vision Aids for Improved Flight Operations

    NASA Technical Reports Server (NTRS)

    Menon, P. K.; Chatterji, Gano B.

    1996-01-01

    The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use the available information sources for navigation, such as the airport lighting layout, attitude sensors and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known, and that images of the airport lighting can be acquired by the camera, has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family of solution methods consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family of methods comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions, while Algorithms 5 through 7 belong to the second family. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms developed.
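The second family of methods, synthesizing an image of the lighting layout from a camera model, rests on the pinhole projection. A minimal sketch with an unrotated camera (optical axis assumed aligned with the world z-axis); a full implementation would add the aircraft's attitude as a rotation before projecting:

```python
def project(points, cam, f):
    """Pinhole-camera projection of known lighting-layout points (sketch).

    points -- world coordinates (x, y, z) of the airport lights
    cam    -- camera (aircraft) position in the same frame
    f      -- focal length; the optical axis is assumed along world z
    """
    img = []
    for (x, y, z) in points:
        zc = z - cam[2]                      # depth along the optical axis
        img.append((f * (x - cam[0]) / zc,   # perspective division
                    f * (y - cam[1]) / zc))
    return img
```

Comparing such a synthesized image against the acquired one, and adjusting the assumed camera pose to minimize the mismatch, is the essence of the second family of algorithms.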

  9. Machine Learning, deep learning and optimization in computer vision

    NASA Astrophysics Data System (ADS)

    Canu, Stéphane

    2017-03-01

    As noted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning, covering basic motivations, ideas, models and optimization in deep learning for computer vision, and identifying challenges and opportunities. It will focus on issues related to large-scale learning, that is: high-dimensional features, a large variety of visual classes, and a large number of examples.

  10. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  11. The effects of absence of stereopsis on performance of a simulated surgical task in two-dimensional and three-dimensional viewing conditions

    PubMed Central

    Bloch, Edward; Uddin, Nabil; Gannon, Laura; Rantell, Khadija; Jain, Saurabh

    2015-01-01

    Background Stereopsis is believed to be advantageous for surgical tasks that require precise hand-eye coordination. We investigated the effects of short-term and long-term absence of stereopsis on motor task performance in three-dimensional (3D) and two-dimensional (2D) viewing conditions. Methods 30 participants with normal stereopsis and 15 participants with absent stereopsis performed a simulated surgical task both in free space under direct vision (3D) and via a monitor (2D), with both eyes open and one eye covered in each condition. Results The stereo-normal group scored higher, on average, than the stereo-absent group with both eyes open under direct vision (p<0.001). Both groups performed comparably in monocular and binocular monitor viewing conditions (p=0.579). Conclusions High-grade stereopsis confers an advantage when performing a fine motor task under direct vision. However, stereopsis does not appear advantageous to task performance under 2D viewing conditions, such as in video-assisted surgery. PMID:25185439

  12. Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations

    NASA Astrophysics Data System (ADS)

    Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.

    2016-04-01

    This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for the Indian regional transport aircraft, intended to enhance all-weather operational capability with improvements in safety and pilot Situation Awareness (SA). A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle on an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time synchronization with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on the CSIR-NAL research aircraft HANSA in a Degraded Visual Environment (DVE).

  13. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, that would be advantageous in a machine vision system. A software model has been developed that is ultimately intended to be a tool for guiding the design of an analog, real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing more or less detail as needed. Each cartridge reports what type of feature it sees, and neighboring cartridges share information to construct a feature map.
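
    The cartridge layout described in this record can be sketched in a few lines. The hexagonal receptor arrangement, the spacing, and the Gaussian width below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def gaussian_receptor_response(image, cx, cy, sigma=1.5):
    """Response of a single photoreceptor: a Gaussian-weighted average
    of image intensity around the receptor center (cx, cy)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return float((weights * image).sum() / weights.sum())

def cartridge_responses(image, center, spacing=2.0, sigma=1.5):
    """One 'cartridge': seven photoreceptors, one central and six in a
    hexagonal ring, loosely analogous to an ommatidium of the fly eye."""
    cx, cy = center
    offsets = [(0.0, 0.0)] + [
        (spacing * np.cos(a), spacing * np.sin(a))
        for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)
    ]
    return [gaussian_receptor_response(image, cx + dx, cy + dy, sigma)
            for dx, dy in offsets]

image = np.zeros((32, 32))
image[10:22, 10:22] = 1.0               # bright square feature
resp = cartridge_responses(image, center=(16, 16))
print(len(resp))                        # prints 7
```

    Making `spacing` vary across the image is what would give the model its foveation-like property: cartridges can be packed densely where detail is needed and sparsely elsewhere.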

  14. Interactive Tools for Measuring Visual Scanning Performance and Reaction Time

    PubMed Central

    Seeanner, Julia; Hennessy, Sarah; Manganelli, Joseph; Crisler, Matthew; Rosopa, Patrick; Jenkins, Casey; Anderson, Michael; Drouin, Nathalie; Belle, Leah; Truesdail, Constance; Tanner, Stephanie

    2017-01-01

    Occupational therapists are constantly searching for engaging, high-technology interactive tasks that provide immediate feedback to evaluate and train clients with visual scanning deficits. This study examined the relationship between two tools: the VISION COACH™ interactive light board and the Functional Object Detection© (FOD) Advanced driving simulator scenario. Fifty-four healthy drivers, ages 21–66 yr, were divided into three age groups. Participants performed the braking response and visual target (E) detection tasks of the FOD Advanced driving scenario, followed by two sets of three trials using the VISION COACH Full Field 60 task. Results showed no significant effect of age on FOD Advanced performance but a significant effect of age on VISION COACH performance. Participants' performance on both the braking and E detection tasks was significantly positively correlated with performance on the VISION COACH (.37 < r < .40, p < .01). These tools provide new options for therapists. PMID:28218598

  15. Evaluation of visual acuity with Gen 3 night vision goggles

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur; Kaiser, Mary K.

    1994-01-01

    Using laboratory simulations, visual performance was measured at luminance and night vision imaging system (NVIS) radiance levels typically encountered in the natural nocturnal environment. Comparisons were made between visual performance with unaided vision and that observed with subjects using image intensification. An Amplified Night Vision Imaging System (ANVIS6) binocular image intensifier was used. Light levels available in the experiments (using video display technology and filters) were matched to those of reflecting objects illuminated by representative night-sky conditions (e.g., full moon, starlight). Results show that as expected, the precipitous decline in foveal acuity experienced with decreasing mesopic luminance levels is effectively shifted to much lower light levels by use of an image intensification system. The benefits of intensification are most pronounced foveally, but still observable at 20 deg eccentricity. Binocularity provides a small improvement in visual acuity under both intensified and unintensified conditions.

  16. Technical Challenges in the Development of a NASA Synthetic Vision System Concept

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III

    2002-01-01

    Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database driven by precise aircraft positioning information updated via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low-visibility conditions as a causal factor in civil aircraft accidents, as well as replicating the operational benefits of clear-day flight operations regardless of the actual outside visibility condition. Synthetic vision research and development activities at NASA Langley Research Center are focused on a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology that can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered, and that are anticipated, in this research and development activity are summarized.

  17. Mediated-reality magnification for macular degeneration rehabilitation

    NASA Astrophysics Data System (ADS)

    Martin-Gonzalez, Anabel; Kotliar, Konstantin; Rios-Martinez, Jorge; Lanzl, Ines; Navab, Nassir

    2014-10-01

    Age-related macular degeneration (AMD) is a gradually progressive eye condition and one of the leading causes of blindness and low vision in the Western world. Prevailing optical visual aids compensate for part of the lost visual function but omit helpful complementary information. This paper proposes an efficient magnification technique, which can be implemented on a head-mounted display, for improving the vision of patients with AMD while preserving global information about the scene. Performance of the magnification approach is evaluated by simulating central vision loss in normally sighted subjects. Visual perception was measured as a function of text reading speed and map-route following speed. Statistical analysis of the experimental results suggests that our magnification method improves reading speed 1.2 times and spatial orientation for finding routes on a map 1.5 times compared to a conventional magnification approach, and is thus capable of enhancing the peripheral vision, and with it the quality of life, of AMD subjects.
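
    One way to read "magnification while preserving global information" is a radial mapping that enlarges the center while compressing, rather than discarding, the periphery. The mapping and all parameters below are a toy illustration under that assumption, not the authors' method:

```python
import numpy as np

def radial_magnify(image, zoom=2.0, inner=0.3):
    """Toy mediated-reality magnifier: magnify the central region by
    `zoom` while smoothly compressing the periphery so the whole scene
    stays visible. `inner` is the normalized radius of the magnified zone."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dx, dy)
    rmax = np.hypot(cy, cx)
    rn = r / rmax
    # inside `inner`: sample closer to the center (magnified);
    # outside: remap the remaining radii to cover the rest of the image
    src = np.where(rn < inner, rn / zoom,
                   inner / zoom + (rn - inner) * (1 - inner / zoom) / (1 - inner))
    scale = np.where(r > 0, src * rmax / np.maximum(r, 1e-9), 0.0)
    sy = np.clip(cy + dy * scale, 0, h - 1).astype(int)
    sx = np.clip(cx + dx * scale, 0, w - 1).astype(int)
    return image[sy, sx]
```

    Applied per frame on a head-mounted display, this keeps landmarks in the (compressed) periphery available for orientation instead of cropping them away as a conventional magnifier would.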

  18. PICASSO VISION instrument design, engineering model test results, and flight model development status

    NASA Astrophysics Data System (ADS)

    Näsilä, Antti; Holmlund, Christer; Mannila, Rami; Näkki, Ismo; Ojanen, Harri J.; Akujärvi, Altti; Saari, Heikki; Fussen, Didier; Pieroux, Didier; Demoulin, Philippe

    2016-10-01

    PICASSO - A PICo-satellite for Atmospheric and Space Science Observations is an ESA project led by the Belgian Institute for Space Aeronomy, in collaboration with VTT Technical Research Centre of Finland Ltd, Clyde Space Ltd. (UK) and Centre Spatial de Liège (BE). The test campaign for the engineering model of the PICASSO VISION instrument, a miniaturized nanosatellite spectral imager, has been successfully completed. The test results look very promising. The proto-flight model of VISION has also been successfully integrated and it is waiting for the final integration to the satellite platform.

  19. 21st Century Lunar Exploration: Advanced Radiation Exposure Assessment

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Clowdsley, Martha; Wilson, John; Nealy, John; Luetke, Nathan

    2006-01-01

    On January 14, 2004, President George W. Bush outlined a new vision for NASA that has humans venturing back to the moon by 2020. With this ambitious goal, new tools and models have been developed to help define and predict the amount of space radiation astronauts will be exposed to during transit and habitation on the moon. A representative scenario is used that includes a trajectory from LEO to a Lunar Base and simplified CAD models for the transit and habitat structures. For this study, galactic cosmic rays, solar proton events, and trapped electron and proton environments are simulated using new dynamic environment models to generate energetic electron and light and heavy ion fluences. Detailed calculations are presented to assess the human exposure for transit segments and surface stays.

  20. Identification of ground targets from airborne platforms

    NASA Astrophysics Data System (ADS)

    Doe, Josh; Boettcher, Evelyn; Miller, Brian

    2009-05-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) sensor performance models predict the ability of soldiers to perform a specified military discrimination task using an EO/IR sensor system. Increasingly, EO/IR systems are being used on manned and unmanned aircraft for surveillance and target acquisition tasks. In response to this emerging requirement, the NVESD Modeling and Simulation division was tasked with comparing target identification performance between ground-to-ground and air-to-ground platforms, for both the IR and visible spectra, for a set of wheeled utility vehicles. To measure performance, several forced-choice experiments were designed and administered, and the results analyzed. This paper describes these experiments and reports the results, as well as the NVTherm model calibration factors derived from the infrared imagery.

  1. Sensor fusion IV: Control paradigms and data structures; Proceedings of the Meeting, Boston, MA, Nov. 12-15, 1991

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision-making, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple-sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.

  2. CaseMIDAS - A reactive planning architecture for the man-machine integration design and analysis system

    NASA Technical Reports Server (NTRS)

    Pease, R. Adam

    1995-01-01

    MIDAS is a set of tools that allow a designer to specify the physical and functional characteristics of a complex system, such as an aircraft cockpit, and analyze the system with regard to human performance. MIDAS allows for a number of static analyses, such as military-standard reach and fit analysis, display legibility analysis, and vision polars. It also supports dynamic simulation of mission segments with 3D visualization. MIDAS development has incorporated several models of human planning behavior. The CaseMIDAS effort has been to provide a simplified and unified approach to modeling task-selection behavior. Except for highly practiced, routine procedures, a human operator expends cognitive effort in determining what step to take next in the accomplishment of mission tasks. Current versions of MIDAS do not model this effort in a consistent and inclusive manner; CaseMIDAS also attempts to address this issue. The CaseMIDAS project has yielded an easy-to-use software module for case creation and execution that is integrated with existing MIDAS simulation components.

  3. Color matrix display simulation based upon luminance and chromatic contrast sensitivity of early vision

    NASA Technical Reports Server (NTRS)

    Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.

    1992-01-01

    This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
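
    The perceptual stages described above can be caricatured in a few functions: opponent-channel images standing in for the cone-catch stage, a one-level Haar decomposition into detail bands, and a weighted detectability measure. The opponent transforms and channel weights below are placeholder assumptions, not the human contrast-sensitivity data the model actually uses (and input images are assumed to have even dimensions):

```python
import numpy as np

# Hypothetical opponent-channel weights standing in for contrast
# sensitivity measurements; not the published values.
CHANNEL_WEIGHTS = {"lum": 1.0, "rg": 0.7, "by": 0.3}

def opponent_images(rgb):
    """Approximate luminance, red-green, and blue-yellow images from an
    RGB array (H x W x 3) -- a stand-in for the cone-catch stage."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return {"lum": 0.5 * (r + g), "rg": r - g, "by": b - 0.5 * (r + g)}

def haar_level(img):
    """One level of a 2-D Haar transform: average plus three detail bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a - b + c - d) / 4,
            (a + b - c - d) / 4, (a - b - c + d) / 4)

def detectability(rgb_a, rgb_b):
    """Weighted RMS difference between two renderings, summed over
    opponent channels and Haar detail bands."""
    total = 0.0
    imgs_a, imgs_b = opponent_images(rgb_a), opponent_images(rgb_b)
    for name, w in CHANNEL_WEIGHTS.items():
        bands_a, bands_b = haar_level(imgs_a[name]), haar_level(imgs_b[name])
        for ba, bb in zip(bands_a[1:], bands_b[1:]):   # detail bands only
            total += w * np.sqrt(np.mean((ba - bb) ** 2))
    return total
```

    Comparing a reference rendering against a candidate display's output with `detectability` then yields a single scalar that a display-design optimization could minimize.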

  4. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard image analysis methods suboptimal. Furthermore, validation of adapted computer vision methods proves difficult due to missing ground-truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with realistic speckle-pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.

  5. Bioinspired decision architectures containing host and microbiome processing units.

    PubMed

    Heyde, K C; Gallagher, P W; Ruder, W C

    2016-09-27

    Biomimetic robots have been used to explore and explain natural phenomena ranging from the coordination of ants to the locomotion of lizards. Here, we developed a series of decision architectures inspired by the information exchange between a host organism and its microbiome. We first modeled the biochemical exchanges of a population of synthetically engineered E. coli. We then built a physical, differential-drive robot that contained an integrated, onboard computer vision system. A relay was established between the simulated population of cells and the robot's microcontroller. By placing the robot in a two-dimensional arena containing a target, we explored how different aspects of the simulated cells and the robot's microcontroller could be integrated to form hybrid decision architectures. We found that distinct decision architectures allow us to develop models of computation with specific strengths, such as runtime efficiency or minimal memory allocation. Taken together, our hybrid decision architectures provide a new strategy for developing bioinspired control systems that integrate both living and nonliving components.

  6. The effects of a flexible visual acuity-driven ranibizumab treatment regimen in age-related macular degeneration: outcomes of a drug and disease model.

    PubMed

    Holz, Frank G; Korobelnik, Jean-François; Lanzetta, Paolo; Mitchell, Paul; Schmidt-Erfurth, Ursula; Wolf, Sebastian; Markabi, Sabri; Schmidli, Heinz; Weichselberger, Andreas

    2010-01-01

    Differences in treatment responses to ranibizumab injections observed within trials involving monthly (MARINA and ANCHOR studies) and quarterly (PIER study) treatment suggest that an individualized treatment regimen may be effective in neovascular age-related macular degeneration. In the present study, a drug and disease model was used to evaluate the impact of an individualized, flexible treatment regimen on disease progression. For visual acuity (VA), a model was developed on the 12-month data from ANCHOR, MARINA, and PIER. Data from untreated patients were used to model patient-specific disease progression in terms of VA loss. Data from treated patients from the period after the three initial injections were used to model the effect of predicted ranibizumab vitreous concentration on VA loss. The model was checked by comparing simulations of VA outcomes after monthly and quarterly injections during this period with trial data. A flexible VA-guided regimen (after the three initial injections) in which treatment is initiated by loss of >5 letters from best previously observed VA scores was simulated. Simulated monthly and quarterly VA-guided regimens showed good agreement with trial data. Simulation of VA-driven individualized treatment suggests that this regimen, on average, sustains the initial gains in VA seen in clinical trials at month 3. The model predicted that, on average, to maintain initial VA gains, an estimated 5.1 ranibizumab injections are needed during the 9 months after the three initial monthly injections, which amounts to a total of 8.1 injections during the first year. A flexible, individualized VA-guided regimen after the three initial injections may sustain vision improvement with ranibizumab and could improve cost-effectiveness and convenience and reduce drug administration-associated risks.
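
    The flexible VA-guided regimen lends itself to a very small simulation: after the loading phase, treat whenever acuity drops more than 5 letters below the best previously observed score. The monthly decline and per-injection gain below are invented toy dynamics, not the published drug and disease model:

```python
def simulate_va_guided(months=9, va0=60.0, decline=2.5, gain=4.0, threshold=5.0):
    """Simulate monthly visits after the three initial injections.
    va0: letters after loading; decline: untreated monthly VA loss;
    gain: letters restored per injection. All rates are illustrative."""
    va, best, injections = va0, va0, 0
    for _ in range(months):
        va -= decline                    # disease progression this month
        if best - va > threshold:        # retreatment criterion: >5-letter loss
            va += gain
            injections += 1
        best = max(best, va)             # track best previously observed VA
    return va, injections

final_va, n_injections = simulate_va_guided()
print(n_injections)                      # prints 5
```

    Even this toy version shows the qualitative behavior the model predicts: a handful of as-needed injections over the 9 post-loading months roughly maintains the acuity reached after the loading phase.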

  7. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  8. Collaborative enterprise and virtual prototyping (CEVP): a product-centric approach to distributed simulation

    NASA Astrophysics Data System (ADS)

    Saunders, Vance M.

    1999-06-01

    The downsizing of the Department of Defense (DoD) and the associated reduction in budgets has re-emphasized the need for commonality, reuse, and standards with respect to the way DoD does business. DoD has implemented significant changes in how it buys weapon systems. The new emphasis is on concurrent engineering with Integrated Product and Process Development and collaboration with Integrated Product Teams. The new DoD vision includes Simulation Based Acquisition (SBA), a process supported by robust, collaborative use of simulation technology that is integrated across acquisition phases and programs. This paper discusses the Air Force Research Laboratory's efforts to use Modeling and Simulation (M&S) resources within a Collaborative Enterprise Environment to support SBA and other Collaborative Enterprise and Virtual Prototyping (CEVP) applications. The paper will discuss four technology areas: (1) a Processing Ontology that defines a hierarchically nested set of collaboration contexts needed to organize and support multi-disciplinary collaboration using M&S, (2) a partial taxonomy of intelligent agents needed to manage different M&S resource contributions to advancing the state of product development, (3) an agent-based process for interfacing disparate M&S resources into a CEVP framework, and (4) a Model-View-Control-based approach to defining 'a new way of doing business' for users of CEVP frameworks/systems.

  9. Integrated Evaluation of Closed Loop Air Revitalization System Components

    NASA Technical Reports Server (NTRS)

    Murdock, K.

    2010-01-01

    NASA's vision and mission statements include an emphasis on human exploration of space, which requires environmental control and life support technologies. This Contractor Report (CR) describes the development and evaluation of an Air Revitalization System, modeling and simulation of its components, and integrated hardware testing, with the goal of better understanding the inherent capabilities and limitations of this closed-loop system. Major components integrated and tested included a 4-Bed Molecular Sieve, a Mechanical Compressor Engineering Development Unit, a Temperature Swing Adsorption Compressor, and a Sabatier Engineering and Development Unit. The requisite methodology and technical results are contained in this CR.

  10. 1988 Goddard Conference on Space Applications of Artificial Intelligence, Greenbelt, MD, May 24, 1988, Proceedings

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1988-01-01

    This publication comprises the papers presented at the 1988 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland on May 24, 1988. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: mission operations support, planning and scheduling; fault isolation/diagnosis; image processing and machine vision; data management; modeling and simulation; and development tools methodologies.

  11. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media, where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem, and it is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
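
    The core idea, removing an additive, non-uniform backscatter estimate before stereo matching, can be sketched as below. The Gaussian falloff around the light source stands in for the paper's physical imaging model and is purely illustrative:

```python
import numpy as np

def estimate_backscatter(shape, light_xy, strength=0.4, spread=40.0):
    """Hypothetical smooth backscatter field: brightest near the active
    light source and falling off with distance (not the paper's model)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - light_xy[0]) ** 2 + (ys - light_xy[1]) ** 2
    return strength * np.exp(-d2 / (2 * spread ** 2))

def descatter(image, light_xy):
    """Subtract the additive backscatter estimate and renormalize, so
    the descattered image can feed a standard stereo-matching pipeline."""
    b = estimate_backscatter(image.shape, light_xy)
    out = np.clip(image - b, 0.0, None)
    return out / max(out.max(), 1e-6)
```

    Running `descatter` on both the left and right frames before block matching is the point of the method: the glow from the active illumination no longer dominates the correspondence search.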

  12. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the resulting spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as a tool for image processing, and it is suggested that it could be used in signal processing and image processing applications.

  13. An overview of quantitative approaches in Gestalt perception.

    PubMed

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception, and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state of the art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research, and there is a clear trend to apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Integrating National Space Visions

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    2006-01-01

    This paper examines the value-proposition assumptions behind the various models nations may use to justify, shape, and guide their space programs. Nations organize major societal investments like space programs to actualize national visions, represented by leaders as investments in the public good. The paper defines nine 'vision drivers' that circumscribe the motivations evidently underpinning national space programs. It then describes 19 fundamental space activity objectives (eight extant and eleven prospective) that nations already use, or could in the future use, to actualize the visions they select. Finally, the paper presents four contrasting models of engagement among nations and compares these models to assess realistic bounds on the pace of human progress in space over the coming decades. The conclusion is that orthogonal engagement, albeit unlikely because it is unprecedented, would yield the most robust and rapid global progress.

  15. Graded effects in hierarchical figure-ground organization: reply to Peterson (1999).

    PubMed

    Vecera, S P; O'Reilly, R C

    2000-06-01

    An important issue in vision research concerns the order of visual processing. S. P. Vecera and R. C. O'Reilly (1998) presented an interactive, hierarchical model that placed figure-ground segregation prior to object recognition. M. A. Peterson (1999) critiqued this model, arguing that because it used ambiguous stimulus displays, figure-ground processing did not precede object processing. In the current article, the authors respond to Peterson's (1999) interpretation of ambiguity in the model and her interpretation of what it means for figure-ground processing to come before object recognition. The authors argue that complete stimulus ambiguity is not critical to the model and that figure-ground precedes object recognition architecturally in the model. The arguments are supported with additional simulation results and an experiment, demonstrating that top-down inputs can influence figure-ground organization in displays that contain stimulus cues.

  16. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-base piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and their impact, within a two-crew flight deck, on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, in itself, improve runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were used effectively in the crew's decision-making process during approach and missed-approach operations, but having to transition abruptly from an excellent FLIR image to natural vision at 100 ft above field level was awkward for the pilot-flying.

  17. Encoding electric signals by Gymnotus omarorum: heuristic modeling of tuberous electroreceptor organs.

    PubMed

    Cilleruelo, Esteban R; Caputi, Angel Ariel

    2012-01-24

    The role of different substructures of electroreceptor organs in signal encoding was explored using a heuristic computational model. The model consists of four modules representing the pre-receptor structures, the transducer cells, the synapses, and the afferent fiber, respectively. Simulations reproduced previously obtained experimental data. We showed that the different electroreceptor types described in the literature can be qualitatively modeled with the same set of equations by changing only two parameters, one affecting the filtering properties of the pre-receptor module and the other affecting the transducer module. We studied the responses of different electroreceptor types to natural stimuli using simulations driven by an experimentally obtained database in which the fish were exposed to resistive or capacitive objects. Our results indicate that phase and frequency spectra are differentially encoded by different subpopulations of tuberous electroreceptors. Different receptor types responding differently to the same input is a necessary condition for encoding a multidimensional space of stimuli such as the waveform of the electric organ discharge (EOD). Our simulation analysis suggested that the electroreceptive mosaic may perform a waveform analysis of electrosensory signals. As in color vision or tactile texture perception, a secondary attribute, "electric color", may be encoded in the parallel activity of various electroreceptor types. This article is part of a Special Issue entitled Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.
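
    The four-module cascade described above can be sketched as a minimal executable pipeline in which two receptor "types" share the equations and differ only in two parameters. The filter form, the parameter names (alpha, beta), the threshold, and all numeric values are illustrative assumptions, not the authors' actual equations.

```python
def prereceptor(signal, alpha):
    """Pre-receptor module: first-order high-pass filter; alpha sets the cutoff."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in signal:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def transducer(signal, beta):
    """Transducer module: static rectifying nonlinearity with gain beta."""
    return [beta * max(0.0, v) for v in signal]

def synapse_and_fiber(signal, threshold=0.5):
    """Synapse and afferent-fiber modules collapsed into a spiking threshold."""
    return [1 if v >= threshold else 0 for v in signal]

def receptor_response(signal, alpha, beta):
    return synapse_and_fiber(transducer(prereceptor(signal, alpha), beta))

# Two electroreceptor "types" differ only in (alpha, beta), mirroring the
# two-parameter claim in the abstract:
pulse = [0.0] * 5 + [1.0] * 5 + [0.0] * 5
type_a = receptor_response(pulse, alpha=0.9, beta=2.0)  # fires during the pulse
type_b = receptor_response(pulse, alpha=0.2, beta=2.0)  # never reaches threshold
```

    With identical equations, the two parameter settings already yield qualitatively different spike patterns for the same input, which is the condition the abstract identifies for encoding a multidimensional stimulus space.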

  18. The Role of Prototype Learning in Hierarchical Models of Vision

    ERIC Educational Resources Information Center

    Thomure, Michael David

    2014-01-01

    I conduct a study of learning in HMAX-like models, which are hierarchical models of visual processing in biological vision systems. Such models compute a new representation for an image based on the similarity of image sub-parts to a number of specific patterns, called prototypes. Despite being a central piece of the overall model, the issue of…
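
    The prototype-based representation described above (image sub-parts compared against stored patterns) can be sketched roughly as follows. The patch size, the Gaussian radial-basis similarity, and the random prototypes are illustrative assumptions, not the dissertation's actual HMAX configuration.

```python
import numpy as np

def prototype_features(image, prototypes, patch=4, sigma=1.0):
    """Max similarity of every image patch to each prototype (HMAX-like pooling)."""
    h, w = image.shape
    responses = np.zeros(len(prototypes))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            sub = image[i:i + patch, j:j + patch].ravel()
            for k, p in enumerate(prototypes):
                # Gaussian radial-basis similarity between patch and prototype
                r = np.exp(-np.sum((sub - p) ** 2) / (2 * sigma ** 2))
                responses[k] = max(responses[k], r)
    return responses

rng = np.random.default_rng(0)
img = rng.random((8, 8))                     # toy "image"
protos = [rng.random(16) for _ in range(3)]  # three 4x4 prototypes, flattened
feats = prototype_features(img, protos)      # one pooled response per prototype
```

    The resulting vector (one max-pooled similarity per prototype) is the kind of representation whose dependence on how the prototypes are learned is the subject of the study.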

  19. A New Vision for Institutional Research

    ERIC Educational Resources Information Center

    Swing, Randy L.; Ross, Leah Ewing

    2016-01-01

    A new vision for institutional research is urgently needed if colleges and universities are to achieve their institutional missions, goals, and purposes. The authors advocate for a move away from the traditional service model of institutional research to an institutional research function via a federated network model or matrix network model. When…

  20. Architectural frameworks: defining the structures for implementing learning health systems.

    PubMed

    Lessard, Lysanne; Michalowski, Wojtek; Fung-Kee-Fung, Michael; Jones, Lori; Grudniewicz, Agnes

    2017-06-23

    The vision of transforming health systems into learning health systems (LHSs) that rapidly and continuously transform knowledge into improved health outcomes at lower cost is generating increased interest in government agencies, health organizations, and health research communities. While existing initiatives demonstrate that different approaches can succeed in making the LHS vision a reality, they are too varied in their goals, focus, and scale to be reproduced without undue effort. Indeed, the structures necessary to effectively design and implement LHSs on a larger scale are lacking. In this paper, we propose the use of architectural frameworks to develop LHSs that adhere to a recognized vision while being adapted to their specific organizational context. Architectural frameworks are high-level descriptions of an organization as a system; they capture the structure of its main components at varied levels, the interrelationships among these components, and the principles that guide their evolution. Because these frameworks support the analysis of LHSs and allow their outcomes to be simulated, they act as pre-implementation decision-support tools that identify potential barriers and enablers of system development. They thus increase the chances of successful LHS deployment. We present an architectural framework for LHSs that incorporates five dimensions (goals, scientific, social, technical, and ethical) commonly found in the LHS literature. The proposed architectural framework comprises six decision layers that model these dimensions. The performance layer models goals, the scientific layer models the scientific dimension, the organizational layer models the social dimension, the data layer and information technology layer model the technical dimension, and the ethics and security layer models the ethical dimension. We describe the types of decisions that must be made within each layer and identify methods to support decision-making. 
In this paper, we outline a high-level architectural framework grounded in conceptual and empirical LHS literature. Applying this architectural framework can guide the development and implementation of new LHSs and the evolution of existing ones, as it allows for clear and critical understanding of the types of decisions that underlie LHS operations. Further research is required to assess and refine its generalizability and methods.

  1. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  2. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  3. Monte Carlo simulations shed light on Bathsheba's suspect breast.

    PubMed

    Heijblom, Michelle; Meijer, Linda M; van Leeuwen, Ton G; Steenbergen, Wiendelt; Manohar, Srirang

    2014-05-01

    In 1654, Rembrandt van Rijn painted his famous painting Bathsheba at Her Bath. Over the years, the depiction of Bathsheba's left breast, and especially the presence of local discoloration, has generated debate on whether Rembrandt's Bathsheba suffered from breast cancer. Historical, medical, and artistic arguments have proved insufficient to establish whether Bathsheba's model truly suffered from breast cancer. However, the bluish discoloration of the breast is an intriguing aspect from a biomedical optics point of view that might help end the old debate. By using Monte Carlo simulations in combination with the retinex theory of color vision, we showed that it is highly unlikely that breast cancer results in a local bluish discoloration of the skin such as is present on Bathsheba's breast. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Simulated Thin-Film Growth and Imaging

    NASA Astrophysics Data System (ADS)

    Schillaci, Michael

    2001-06-01

    Thin films have become the cornerstone of the electronics, telecommunications, and broadband markets. A list of potential products includes: computer boards and chips, satellites, cell phones, fuel cells, superconductors, flat panel displays, optical waveguides, building and automotive windows, food and beverage plastic containers, metal foils, pipe plating, vision ware, manufacturing equipment, and turbine engines. For all of these reasons, a basic understanding of the physical processes involved in both growing and imaging thin films can provide a wonderful research project for advanced undergraduate and first-year graduate students. After producing rudimentary two- and three-dimensional thin-film models incorporating ballistic deposition and nearest-neighbor Coulomb-type interactions, the QM tunneling equations are used to produce simulated scanning tunneling microscope (SSTM) images of the films. A discussion of computational platforms, languages, and software packages that may be used to accomplish similar results is also given.

  5. Real-time and accurate rail wear measurement method and experimental analysis.

    PubMed

    Liu, Zhen; Li, Fengjiao; Huang, Bangkui; Zhang, Guangjun

    2014-08-01

    When a train is running on uneven or curved rails, it generates violent vibrations on the rails. As a result, the light plane of the single-line structured light vision sensor is not vertical, causing errors in rail wear measurements (referred to as vibration errors in this paper). To avoid vibration errors, a novel rail wear measurement method is introduced in this paper, which involves four main steps. First, a multi-line structured light vision sensor (with at least two linear laser projectors) projects stripe-shaped light onto the inside of the rail. Second, the central points of the light stripes in the image are extracted quickly, and the three-dimensional profile of the rail is obtained from the mathematical model of the structured light vision sensor. Third, the obtained rail profile is transformed from the measurement coordinate frame (MCF) to the standard rail coordinate frame (RCF), taking the three-dimensional profile of the measured rail waist as the datum. Finally, rail wear constraint points are adopted to simplify the location of the rail wear points, and the profile composed of the rail wear points is compared with the standard rail profile in the RCF to determine the rail wear. Both real-data experiments and simulation experiments show that the vibration errors can be eliminated when the proposed method is used.
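
    The frame-transformation step described above can be sketched as a rigid 2-D transform from the measurement frame (MCF) to the rail frame (RCF), with wear taken as the point-wise deviation from the standard profile. The rotation angle, offset, and straight-line "standard profile" are illustrative assumptions, not the paper's calibration data.

```python
import numpy as np

def to_rcf(points_mcf, R, t):
    """Rigid transform p_rcf = R @ p_mcf + t, applied to Nx2 points (row vectors)."""
    return points_mcf @ R.T + t

theta = np.deg2rad(5.0)                          # assumed sensor tilt
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([0.0, 1.0])                         # assumed sensor offset

# Standard (unworn) profile and a uniformly worn profile, both in the RCF:
standard = np.column_stack([np.linspace(0, 10, 11), np.zeros(11)])
worn_rcf = standard - np.array([0.0, 0.2])       # 0.2 mm of wear everywhere

# What the sensor records is the worn profile in its own frame (inverse transform):
measured_mcf = (worn_rcf - t) @ R

# Recover wear by transforming back to the RCF and comparing with the standard:
wear = standard[:, 1] - to_rcf(measured_mcf, R, t)[:, 1]
```

    Because the transform is applied to the whole measured profile before comparison, a tilted light plane changes `measured_mcf` but not the recovered wear, which is the point of referencing the datum in the RCF.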

  6. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can use the previously compiled soft-operators in a high-level process chain, or code their own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision algorithms and their migration to an FPGA accelerator platform, and it is distributed for academic purposes.

  7. In vitro strain measurements in cerebral aneurysm models for cyber-physical diagnosis.

    PubMed

    Shi, Chaoyang; Kojima, Masahiro; Anzai, Hitomi; Tercero, Carlos; Ikeda, Seiichi; Ohta, Makoto; Fukuda, Toshio; Arai, Fumihito; Najdovski, Zoran; Negoro, Makoto; Irie, Keiko

    2013-06-01

    The development of new diagnostic technologies for cerebrovascular diseases requires an understanding of the mechanism behind the growth and rupture of cerebral aneurysms. To provide a comprehensive diagnosis and prognosis of this disease, it is desirable to evaluate wall shear stress, pressure, deformation and strain in the aneurysm region, based on information provided by medical imaging technologies. In this research, we propose a new cyber-physical system composed of in vitro dynamic strain experimental measurements and computational fluid dynamics (CFD) simulation for the diagnosis of cerebral aneurysms. A CFD simulation and a scaled-up membranous silicone model of a cerebral aneurysm were completed, based on patient-specific data recorded in August 2008. In vitro blood flow simulation was realized with the use of a specialized pump. A vision system was also developed to measure the strain at different regions on the model by way of pulsating blood flow circulating inside the model. Experimental results show that distance and area strain maxima were larger near the aneurysm neck (0.042 and 0.052), followed by the aneurysm dome (0.023 and 0.04) and finally the main blood vessel section (0.01 and 0.014). These results were complemented by a CFD simulation for the addition of wall shear stress, oscillatory shear index and aneurysm formation index. Diagnosis results using imaging obtained in August 2008 are consistent with the monitored aneurysm growth in 2011. The presented study demonstrates a new experimental platform for measuring dynamic strain within cerebral aneurysms. This platform is also complemented by a CFD simulation for advanced diagnosis and prediction of the growth tendency of an aneurysm in endovascular surgery. Copyright © 2013 John Wiley & Sons, Ltd.

  8. SU-G-201-13: Investigation of Dose Variation Induced by HDR Ir-192 Source Global Shift Within the Varian Ring Applicator Using Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y; Cai, J; Meltsner, S

    2016-06-15

    Purpose: The Varian tandem and ring applicators are used to deliver HDR Ir-192 brachytherapy for cervical cancer. The source path within the ring is hard to predict because of the larger interior ring lumen. Some studies have shown that the source can be several millimeters from planned positions, while other studies demonstrated minimal dosimetric impact. A global shift can be applied to limit the effect of positioning offsets. The purpose of this study was to assess the necessity of implementing a global source shift using Monte Carlo (MC) simulations. Methods: The MCNP5 radiation transport code was used for all MC simulations. To accommodate TG-186 guidelines and eliminate inter-source attenuation, a BrachyVision plan with 10 dwell positions (0.5 cm step sizes) was simulated as the summation of 10 individual sources with equal dwell times. To simplify the study, the tandem was excluded from the MC model. Global shifts of ±0.1, ±0.3, and ±0.5 cm were then simulated, distal and proximal from the reference positions. Dose was scored in water for all MC simulations and was normalized to 100% at the normalization point 0.5 cm from the cap in the ring plane. For dose comparison, Point A was 2 cm caudal from the buildup cap and 2 cm lateral on either side of the ring axis. Across seventy simulations, 10^8 photon histories gave statistical uncertainties (k=1) of <2% for (0.1 cm)^3 voxels. Results: Compared to no global shift, average Point A doses were 0.0%, 0.4%, and 2.2% higher for distal global shifts, and 0.4%, 2.8%, and 5.1% higher for proximal global shifts, respectively. The MC Point A doses differed by <1% when compared to BrachyVision. Conclusion: Dose variations were not substantial for ±0.3 cm global shifts, which is common in clinical practice.

  9. Aerodynamic improvement of the assembly through which gas conduits are taken into a smoke stack by simulating gas flow on a computer

    NASA Astrophysics Data System (ADS)

    Prokhorov, V. B.; Fomenko, M. V.; Grigor'ev, I. V.

    2012-06-01

    Results are presented from computer simulations of gas flow, carried out using the SolidWorks and FlowVision application software packages, for gas conduits brought into the constant-cross-section gas-removal shaft of a smoke stack on one side and on two sides.

  10. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the…depth. The software application calibrates the cameras using the plane-based calibration model from the OpenCV calib3d module and allows the…[6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed: 09/01/2015]. [7] Qt. 2015. Qt Project home

  11. Association between Visual Impairment and Low Vision and Sleep Duration and Quality among Older Adults in South Africa.

    PubMed

    Peltzer, Karl; Phaswana-Mafuya, Nancy

    2017-07-19

    This study aims to estimate the association between visual impairment and low vision and sleep duration and poor sleep quality in a national sample of older adults in South Africa. A national population-based cross-sectional Study of Global Ageing and Adults Health (SAGE) wave 1 was conducted in 2008 with a sample of 3840 individuals aged 50 years or older in South Africa. The interviewer-administered questionnaire assessed socio-demographic characteristics, health variables, sleep duration and quality, visual impairment, and vision. Results indicate that 10.0% of the sample reported short sleep duration (≤5 h), 46.6% long sleep (≥9 h), 9.3% poor sleep quality, and 8.4% self-reported visual impairment (near and/or far vision); 43.2% had measured low vision (near and/or far vision; 0.01-0.25 decimal) and 7.5% low vision (0.01-0.125 decimal). In fully adjusted logistic regression models, self-reported visual impairment was associated with short sleep duration and poor sleep quality, separately and together. Low vision was only associated with long sleep duration and poor sleep quality in unadjusted models. Self-reported visual impairment was related to both short sleep duration and poor sleep quality. Population data collection on sleep patterns should consider including visual impairment measures.

  12. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with efficiency equivalent to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appears feasible. Regardless of offset approach angle or HUD concept flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  13. Vision: A Conceptual Framework for School Counselors

    ERIC Educational Resources Information Center

    Watkinson, Jennifer Scaturo

    2013-01-01

    Vision is essential to the implementation of the American School Counselor Association (ASCA) National Model. Drawing from research in organizational leadership, this article provides a conceptual framework for how school counselors can incorporate vision as a strategy for implementing school counseling programs within the context of practice.…

  14. M&S Journal. Volume 7, Issue 1, Spring 2012

    DTIC Science & Technology

    2012-06-01

    Simulation Interoperability Workshops (SIWs) and the annual Interservice/Industry Training, Simulation & Education Conference (I/ITSEC), as well as…other venues. For example, a full-day workshop on the initial progress of the effort was conducted at the 2010 Spring SIW [2] to get feedback from the…the 2011 Fall SIW. 6. IMPROVING THE USE OF GATEWAYS AND BRIDGES FOR LVC SIMULATIONS The LVCAR Final Report [1] presented a vision for achieving

  15. Theory of compressive modeling and simulation

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) a data-rich approach suffering from the curse of dimensionality and (ii) an equation-rich approach suffering from limits on computing power and turnaround time. We suggest a third approach, (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. MFE-based CM&S can generalize LCNN to second order as a nonlinear augmented LCNN. For example, during sunset we can avoid the reddish bias of sunlight illumination due to long-range Rayleigh scattering over the horizon by using a night-vision camera instead of a day camera. We decomposed the long-wave infrared (LWIR) band with a filter into two vector components (8-10 μm and 10-12 μm) and used LCNN to find, pixel by pixel, the map of emissive-equivalent Planck radiation sources (EPRS). Then, consistently with the de-mixed source map, we up-shifted to a sub-micron RGB color image. Moreover, night-vision imaging can also be down-shifted to passive millimeter wave (PMMW) imaging, which suffers less blur from scattering by dusty smoke and enjoys the apparent smoothness of the surface reflectivity of man-made objects at the Rayleigh resolution. One loses three orders of magnitude in spatial Rayleigh resolution, but gains two orders of magnitude in reflectivity and another two orders in propagation without obscuration by smog. Since CM&S can generate missing data and hard-to-get dynamic transients, it can reduce unnecessary measurements and their associated cost and computation in the super-saving sense of CS: measure one and get its neighborhood free.
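
    The per-pixel de-mixing idea can be illustrated in a heavily simplified form: if each of the two LWIR sub-bands measures a different linear mixture of two underlying source maps, the sources can be recovered pixel by pixel by inverting the 2x2 mixing matrix. The mixing matrix and source maps below are synthetic assumptions; the paper's LCNN solves a constrained (non-negative) version of this problem, which plain matrix inversion does not enforce.

```python
import numpy as np

rng = np.random.default_rng(2)
sources = rng.random((2, 16, 16))             # two unknown source maps (toy data)
M = np.array([[0.8, 0.2],                     # assumed band mixing matrix
              [0.3, 0.7]])

# Simulated 8-10 um and 10-12 um band images: each band is a linear
# mixture of the two source maps at every pixel.
bands = np.einsum('bs,shw->bhw', M, sources)

# Per-pixel de-mixing: apply the inverse mixing matrix at every pixel.
recovered = np.einsum('sb,bhw->shw', np.linalg.inv(M), bands)
```

    In the noiseless, exactly-determined case the inversion recovers the source maps exactly; the constrained neural-network formulation matters when noise and non-negativity make plain inversion unreliable.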

  16. Common Analysis Tool Being Developed for Aeropropulsion: The National Cycle Program Within the Numerical Propulsion System Simulation Environment

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia G.

    1999-01-01

    The NASA Lewis Research Center is developing an environment for analyzing and designing aircraft engines: the Numerical Propulsion System Simulation (NPSS). NPSS will integrate multiple disciplines, such as aerodynamics, structure, and heat transfer, and will make use of numerical "zooming" on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS uses the latest computing and communication technologies to capture complex physical processes in a timely, cost-effective manner. The vision of NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Through the NASA/Industry Cooperative Effort agreement, NASA Lewis and industry partners are developing a new engine simulation called the National Cycle Program (NCP). NCP, which is the first step toward NPSS and is its initial framework, supports the aerothermodynamic system simulation process for the full life cycle of an engine. U.S. aircraft and airframe companies recognize NCP as the future industry standard common analysis tool for aeropropulsion system modeling. The estimated potential payoff for NCP is a $50 million/yr savings to industry through improved engineering productivity.

  17. Walking simulator for evaluation of ophthalmic devices

    NASA Astrophysics Data System (ADS)

    Barabas, James; Woods, Russell L.; Peli, Eli

    2005-03-01

    Simulating mobility tasks in a virtual environment reduces risk for research subjects and allows for improved experimental control and measurement. We are currently using a simulated shopping mall environment (where subjects walk on a treadmill in front of a large projected video display) to evaluate a number of ophthalmic devices developed at the Schepens Eye Research Institute for people with vision impairment, particularly visual field defects. We have conducted experiments to study subjects' perception of "safe passing distance" when walking towards stationary obstacles. Subjects' binary responses about potential collisions are analyzed by fitting a psychometric function, which gives an estimate of the perceived safe passing distance and the variability of subject responses. The system also enables simulations of visual field defects using head and eye tracking, enabling better understanding of the impact of visual field loss. Technical infrastructure for our simulated walking environment includes a custom eye and head tracking system, a gait feedback system to adjust treadmill speed, and a handheld 3-D pointing device. Images are generated by a graphics workstation, which contains a model with photographs of storefronts from an actual shopping mall, where concurrent validation experiments are being conducted.
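
    The analysis step described above, fitting a psychometric function to binary responses, can be sketched as follows. The synthetic responses, logistic function form, and grid-search maximum-likelihood fit are illustrative assumptions; the estimated midpoint plays the role of the perceived safe passing distance.

```python
import numpy as np

def logistic(d, mu, sigma):
    """Probability of a 'safe' judgment as a function of passing distance d."""
    return 1.0 / (1.0 + np.exp(-(d - mu) / sigma))

def fit_psychometric(distances, responses):
    """Grid-search maximum-likelihood fit of (mu, sigma) to binary responses."""
    best, best_ll = None, -np.inf
    for mu in np.linspace(min(distances), max(distances), 101):
        for sigma in np.linspace(0.05, 2.0, 40):
            p = np.clip(logistic(distances, mu, sigma), 1e-9, 1 - 1e-9)
            ll = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best

# Synthetic experiment: 9 passing distances (m), 20 trials each.
rng = np.random.default_rng(1)
d = np.repeat(np.linspace(0.0, 2.0, 9), 20)
true_mu, true_sigma = 1.0, 0.3
resp = (rng.random(d.size) < logistic(d, true_mu, true_sigma)).astype(float)

mu_hat, sigma_hat = fit_psychometric(d, resp)
# mu_hat estimates the 50% point (perceived safe passing distance);
# sigma_hat estimates the variability of the responses.
```

    A production analysis would typically use a dedicated fitting routine and confidence intervals rather than a coarse grid, but the estimator is the same: the distance at which "safe" judgments cross 50%.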

  18. Cataract Vision Simulator

    MedlinePlus


  19. Analysis of the resistive network in a bio-inspired CMOS vision chip

    NASA Astrophysics Data System (ADS)

    Kong, Jae-Sung; Sung, Dong-Kyu; Hyun, Hyo-Young; Shin, Jang-Kyoo

    2007-12-01

    CMOS vision chips for edge detection based on a resistive circuit have recently been developed. These chips enable neuromorphic systems with a compact size, high speed of operation, and low power dissipation. The output of such a vision chip depends chiefly on the electrical characteristics of its resistive network. In this paper, the body effect of the MOSFET on current distribution in a resistive circuit is discussed with a simple model. To evaluate the model, two 160×120 CMOS vision chips were fabricated using a standard CMOS technology. The experimental results matched our predictions well.
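
    How a resistive network supports edge detection can be illustrated with an idealized 1-D node-voltage model: each node is tied to its input through a vertical conductance and to its neighbors through a lateral conductance, and the node voltages solve a linear system from Kirchhoff's current law. The conductance values and step input are assumptions for illustration; the chip's MOSFET body-effect model is omitted here.

```python
import numpy as np

def resistive_network(v_in, g_v=1.0, g_l=2.0):
    """Solve the 1-D resistive-network node voltages (idealized linear model)."""
    n = len(v_in)
    A = np.zeros((n, n))
    b = g_v * np.asarray(v_in, float)
    for i in range(n):
        A[i, i] = g_v                 # vertical conductance to the input
        for j in (i - 1, i + 1):      # lateral conductances to neighbors
            if 0 <= j < n:
                A[i, i] += g_l
                A[i, j] -= g_l
    return np.linalg.solve(A, b)

step = [0.0] * 5 + [1.0] * 5          # a luminance edge as input
smoothed = resistive_network(step)    # network output: spatially smoothed input
edge = np.asarray(step) - smoothed    # input minus smoothed output peaks at the edge
```

    Subtracting the smoothed network output from the raw input yields a response concentrated at the luminance step, which is the operating principle behind resistive-network edge-detection chips.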

  20. Literacy Leadership: Six Strategies for Peoplework

    ERIC Educational Resources Information Center

    McAndrew, Donald A.

    2005-01-01

    Become a successful literacy leader and improve the vision of literacy in the classroom, school, and community. This book's six proven strategies will help the reader do the "peoplework" at the heart of successful leadership: Creating and communicating a vision; Modeling that vision; Experimenting with new ideas and taking risks; Nurturing…

  1. Enhanced and Synthetic Vision for Terminal Maneuvering Area NextGen Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Norman, R. Michael; Williams, Steven P.; Arthur, Jarvis J., III; Shelton, Kevin J.; Prinzel, Lawrence J., III

    2011-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with efficiency equivalent to visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility ground (taxi) operations and approach/landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential of EFVS for operations in visibility as low as 1000 ft runway visual range (RVR) and of SVS to enable lower decision heights (DH) than can be flown today. Expanding the EFVS visual segment from DH to the runway in visibilities as low as 1000 RVR appears viable, as touchdown performance was excellent without any workload penalties noted for the EFVS concept tested. Lowering the DH to 150 ft, and possibly reducing visibility minima by virtue of SVS equipage, appears viable when implemented on a Head-Up Display, but the landing data suggest further study for head-down implementations.

  2. Color Vision and the Railways: Part 1. The Railway LED Lantern Test.

    PubMed

    Dain, Stephen J; Casolin, Armand; Long, Jennifer; Hilmi, Mohd Radzi

    2015-02-01

    Lantern tests and practical tests are often used in the assessment of prospective railway employees. The lantern tests rarely embody the actual colors used in signaling on the railways. Practical tests have a number of problems, most notably consistency of application and practicability. This work was carried out to provide the Railway LED Lantern Test (RLLT) as a validated method of assessing the color vision of railway workers. The RLLT, a simulated practical test using the same LEDs (light-emitting diodes) as are used in modern railway signals, was developed. It was tested on 46 color vision-normal (CVN) and 37 color vision-deficient (CVD) subjects. A modified prototype was then tested on 106 CVN subjects. All 106 CVN subjects and most mildly affected CVD subjects passed the modified lantern at 3 m. At 6 m, 1 of the 106 normal color vision subjects failed by missing a single red light. All the CVD subjects failed. The RLLT carried out at 3 m allowed mildly affected CVD subjects to pass and demonstrate adequate color vision for the less demanding railway tasks. Carried out at 6 m, it essentially reinforced normal color vision as the standard. The RLLT is a simply administered test that has a direct link to the actual visual task of the rail worker. The RLLT lantern has been adopted as an approved test in the Australian National Standard for Health Assessment of Rail Safety Workers in place of a practical test. It has the potential to be a valid part of any railway color vision standard.

  3. Visual cognition

    PubMed Central

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world, and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  4. Are visual peripheries forever young?

    PubMed

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though afferent projections from central and peripheral retinal regions are known to develop at different times during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  5. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  6. The Physics of Colour Vision.

    ERIC Educational Resources Information Center

    Goldman, Martin

    1985-01-01

    An elementary physical model of cone receptor cells is explained and applied to complexities of human color vision. One-, two-, and three-receptor systems are considered, with the latter shown to be the best model for the human eye. Color blindness is also discussed. (DH)

  7. Automatic decoding of facial movements reveals deceptive pain expressions

    PubMed Central

    Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang

    2014-01-01

    In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and that even after training they improved accuracy only to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of the neural control systems involved in emotional signaling. PMID:24656830

  8. Perception-based synthetic cueing for night vision device rotorcraft hover operations

    NASA Astrophysics Data System (ADS)

    Bachelder, Edward N.; McRuer, Duane

    2002-08-01

    Helicopter flight using night-vision devices (NVDs) is difficult to perform, as evidenced by the high accident rate associated with NVD flight compared to day operation. The approach proposed in this paper is to augment the NVD image with synthetic cueing, whereby the cues would emulate position and motion and appear to be actually occurring in the physical space on which they are overlaid. Synthetic cues allow for selective enhancement of perceptual state gains to match the task requirements. A hover cue set was developed based on an analogue of a physical target used in a flight handling qualities tracking task, a perceptual task analysis for hover, and fundamentals of human spatial perception. The display was implemented in a simulation environment constructed using a virtual reality device, an ultrasound head-tracker, and a fixed-base helicopter simulator. Seven highly trained helicopter pilots served as experimental subjects and were tasked to maintain hover in the presence of aircraft positional disturbances while viewing a synthesized NVD environment and the experimental hover cues. Significant performance improvements were observed when using synthetic cue augmentation. This paper demonstrates that artificial magnification of perceptual states through synthetic cueing can be an effective method of improving night-vision helicopter hover operations.

  9. Analysis of the effects of non-supine sleeping positions on the stress, strain, deformation and intraocular pressure of the human eye

    NASA Astrophysics Data System (ADS)

    Volpe, Peter A.

    This thesis presents analytical models, finite element models and experimental data to investigate the response of the human eye to loads that can be experienced in a non-supine sleeping position. The hypothesis being investigated is that non-supine sleeping positions can lead to stress, strain and deformation of the eye, as well as changes in intraocular pressure (IOP), that may exacerbate vision loss in individuals who have glaucoma. To investigate the quasi-static changes in stress and internal pressure, a fluid-structure interaction simulation was performed on an axisymmetric model of an eye. Standard aerospace-engineering methods for analyzing pressure vessels and hyperelastic structural walls are applied to develop a suitable model. The quasi-static pressure increase was used in an iterative code to analyze changes in IOP over time.
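    The thin-walled pressure-vessel analysis mentioned above starts from the membrane stress of a spherical shell, σ = Pr/(2t) (Laplace's law). A minimal sketch, using illustrative ocular values rather than numbers from the thesis:

    ```python
    def sphere_wall_stress(pressure_pa, radius_m, thickness_m):
        """Membrane (hoop) stress in a thin-walled spherical shell:
        sigma = P * r / (2 * t)  (Laplace's law)."""
        return pressure_pa * radius_m / (2.0 * thickness_m)

    # Illustrative numbers (not from the thesis): an IOP of 15 mmHg
    # (~2000 Pa), a globe radius of 12 mm, and a wall thickness of 0.5 mm.
    MMHG_TO_PA = 133.322
    sigma = sphere_wall_stress(15 * MMHG_TO_PA, 0.012, 0.0005)  # ~24 kPa
    ```

    A pressure change ΔP from external loading scales this stress linearly, which is why the IOP increase is the key quantity the simulation tracks.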

  10. Retinal Detachment Vision Simulator

    MedlinePlus


  11. Receptoral and Neural Aliasing.

    DTIC Science & Technology

    1993-01-30

    standard psychophysical methods. Stereoscopic capability makes VisionWorks ideal for investigating and simulating strabismus and amblyopia, or developing... amblyopia. Electrophysiological and psychophysical response to spatio-temporal and novel stimuli for investigation of visual field deficits

  12. Application of Vision Metrology to In-Orbit Measurement of Large Reflector Onboard Communication Satellite for Next Generation Mobile Satellite Communication

    NASA Astrophysics Data System (ADS)

    Akioka, M.; Orikasa, T.; Satoh, M.; Miura, A.; Tsuji, H.; Toyoshima, M.; Fujino, Y.

    2016-06-01

    A satellite for next-generation mobile satellite communication services with small personal terminals requires an onboard antenna with a very large aperture reflector, more than twenty meters in diameter, because low-power personal terminals on the ground can only be served by a large onboard reflector with high antenna gain. A large deployable antenna, however, will deform in orbit because it is not a solid dish but a flexible structure of fine cables and mesh supported by a truss. Deformation of the reflector shape degrades antenna performance and the quality and stability of the communication service. A digital beam-forming antenna with a phased array, though, can modify its beam by adjusting excitation amplitudes and phases: if the reflector shape can be measured precisely in orbit, the beam pattern and antenna performance can be compensated with excitation amplitude and phase parameters re-optimized for the reflector shape measured at every moment. Softbank Corporation and the National Institute of Information and Communications Technology have started the project "R&D on dynamic beam control technique for next generation mobile communication satellite" as a contracted research project sponsored by the Ministry of Internal Affairs and Communications of Japan. One problem in applying vision metrology here is the strong constraint on camera geometry imposed by the very limited space on the satellite bus: in orbit, images cannot be taken from many different directions as in ordinary vision metrology measurement, and the area available for camera positioning is quite limited. The feasibility of vision metrology, and a general methodology for applying it to future mobile communication satellites, therefore has to be established.
Our approach is as follows: 1) develop a prototyping simulator to evaluate the expected precision of the network design at zero and first order; 2) perform trial measurements on a large structure of dimensions similar to the deployable reflector to confirm the validity of the network design and instrumentation. This report gives an overview of the R&D project and presents a simulation-based feasibility study of the vision-metrology network design and of beam-pattern compensation for an antenna with a very large in-orbit reflector. The feasibility of the assumed network design and its satisfaction of the accuracy requirements are discussed, and the feasibility of beam-pattern compensation using an accurately measured reflector shape is confirmed by antenna-pattern simulation for a deformed parabolic reflector. If the reflector surface of a communication satellite can be measured routinely in orbit, the antenna pattern can be compensated to maintain high performance at every moment.
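    As a rough illustration of the kind of computation such a network-design simulator must evaluate, the sketch below triangulates a 3D point from two synthetic camera views by linear (DLT) triangulation; the camera geometry and all names are invented for the example and have no connection to the project's actual network design:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen in two views.
        P1, P2: 3x4 projection matrices; x1, x2: image coordinates (x, y)."""
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)  # null vector of A is the homogeneous point
        X = Vt[-1]
        return X[:3] / X[3]

    # Two synthetic cameras with identity intrinsics, one meter apart,
    # observing a point ten meters down the optical axis.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m baseline
    X_true = np.array([0.5, -0.2, 10.0])

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
    ```

    With noiseless observations the point is recovered exactly; a precision study perturbs the image coordinates and varies the camera placement, which is what a zero- and first-order network-design evaluation amounts to.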

  13. Interactive Tools for Measuring Visual Scanning Performance and Reaction Time.

    PubMed

    Brooks, Johnell; Seeanner, Julia; Hennessy, Sarah; Manganelli, Joseph; Crisler, Matthew; Rosopa, Patrick; Jenkins, Casey; Anderson, Michael; Drouin, Nathalie; Belle, Leah; Truesdail, Constance; Tanner, Stephanie

    Occupational therapists are constantly searching for engaging, high-technology interactive tasks that provide immediate feedback to evaluate and train clients with visual scanning deficits. This study examined the relationship between two tools: the VISION COACH™ interactive light board and the Functional Object Detection © (FOD) Advanced driving simulator scenario. Fifty-four healthy drivers, ages 21-66 yr, were divided into three age groups. Participants performed braking response and visual target (E) detection tasks of the FOD Advanced driving scenario, followed by two sets of three trials using the VISION COACH Full Field 60 task. Results showed no significant effect of age on FOD Advanced performance but a significant effect of age on VISION COACH performance. Correlations showed that participants' performance on both braking and E detection tasks were significantly positively correlated with performance on the VISION COACH (.37 < r < .40, p < .01). These tools provide new options for therapists. Copyright © 2017 by the American Occupational Therapy Association, Inc.

  14. Neck muscle activity in fighter pilots wearing night-vision equipment during simulated flight.

    PubMed

    Ang, Björn O; Kristoffersson, Mats

    2013-02-01

    Night-vision goggles (NVG) in jet fighter aircraft appear to increase the risk of neck strain due to increased neck loading. The present aim was, therefore, to evaluate the effect on neck-muscle activity and subjective ratings of head-worn night-vision (NV) equipment in controlled simulated flights. Five experienced fighter pilots twice flew a standardized 2.5-h program in a dynamic flight simulator; one session with NVG and one with standard helmet mockup (control session). Each session commenced with a 1-h simulation at 1 Gz followed by a 1.5-h dynamic flight with repeated Gz profiles varying between 3 and 7 Gz and including aerial combat maneuvers (ACM) at 3-5 Gz. Large head-and-neck movements under high G conditions were avoided. Surface electromyographic (EMG) data was simultaneously measured bilaterally from anterior neck, upper and lower posterior neck, and upper shoulder muscles. EMG activity was normalized as the percentage of pretest maximal voluntary contraction (%MVC). Head-worn equipment (helmet comfort, balance, neck mobility, and discomfort) was rated subjectively immediately after flight. A trend emerged toward greater overall neck muscle activity in NV flight during sustained ACM episodes (10% vs. 8% MVC for the control session), but with no such effects for temporary 3-7 Gz profiles. Postflight ratings for NV sessions emerged as "unsatisfactory" for helmet comfort/neck discomfort. However, this was not significant compared to the control session. Helmet mounted NV equipment caused greater neck muscle activity during sustained combat maneuvers, indicating increased muscle strain due to increased neck loading. In addition, postflight ratings indicated neck discomfort after NV sessions, although not clearly increased compared to flying with standard helmet mockup.

  15. Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results

    NASA Astrophysics Data System (ADS)

    Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically, NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete, and in recent years training simulators have performed NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  16. Invariant visual object recognition and shape processing in rats

    PubMed Central

    Zoccolan, Davide

    2015-01-01

    Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing is in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421

  17. Image gathering and processing - Information and fidelity

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.

    1985-01-01

    In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
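    The optimal (Wiener) restoration referred to above can be sketched in its classical frequency-domain form; note that the paper's own formulation additionally accounts for insufficient sampling (aliasing), which this minimal textbook version does not:

    ```python
    import numpy as np

    def wiener_restore(degraded, psf, snr):
        """Classical frequency-domain Wiener restoration:
        F_hat = G* / (|G|^2 + 1/SNR) applied to the degraded image's spectrum."""
        G = np.fft.fft2(psf, s=degraded.shape)
        H = np.conj(G) / (np.abs(G) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(np.fft.fft2(degraded) * H))

    # Degrade a simple scene with a 3x3 box blur plus noise, then restore.
    rng = np.random.default_rng(0)
    scene = np.zeros((64, 64))
    scene[24:40, 24:40] = 1.0
    psf = np.ones((3, 3)) / 9.0
    G = np.fft.fft2(psf, s=scene.shape)
    degraded = np.real(np.fft.ifft2(np.fft.fft2(scene) * G)) \
        + 0.01 * rng.standard_normal(scene.shape)
    err_before = np.mean((degraded - scene) ** 2)
    restored = wiener_restore(degraded, psf, snr=100.0)
    err_after = np.mean((restored - scene) ** 2)
    ```

    The 1/SNR term keeps the filter from amplifying noise at frequencies where the blur response is near zero, which is the trade-off the paper's information criterion formalizes.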

  18. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    NASA Astrophysics Data System (ADS)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
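    The core of SGM is a dynamic-programming cost aggregation along image paths. A minimal single-scanline sketch of that recurrence, with illustrative penalties and simple absolute-difference costs (real implementations aggregate over 8 or 16 paths and use census or mutual-information costs), might look like:

    ```python
    import numpy as np

    def sgm_scanline(cost, p1=1.0, p2=4.0):
        """Aggregate a (width, ndisp) matching-cost slice along one scanline
        with the SGM recurrence: L(x,d) = C(x,d) + min(L(x-1,d),
        L(x-1,d±1)+P1, min_d' L(x-1,d')+P2) - min_d' L(x-1,d')."""
        w, ndisp = cost.shape
        L = np.zeros_like(cost)
        L[0] = cost[0]
        for x in range(1, w):
            prev = L[x - 1]
            prev_min = prev.min()
            up = np.roll(prev, 1); up[0] = np.inf       # d-1 neighbor
            down = np.roll(prev, -1); down[-1] = np.inf  # d+1 neighbor
            L[x] = cost[x] + np.minimum.reduce(
                [prev, up + p1, down + p1, np.full(ndisp, prev_min + p2)]
            ) - prev_min
        return L

    # Toy example: left/right 1D "images" related by a circular shift of 2.
    left = np.array([0, 0, 5, 9, 5, 0, 0, 0, 4, 8], float)
    right = np.roll(left, -2)  # true disparity = 2
    ndisp = 4
    cost = np.stack([np.abs(left - np.roll(right, d)) for d in range(ndisp)], axis=1)
    disp = sgm_scanline(cost).argmin(axis=1)
    ```

    The P1/P2 penalties are what let SGM prefer smooth disparities while still allowing jumps at depth discontinuities, which local window-based matchers handle poorly.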

  19. Correlation of spatial intensity distribution of light reaching the retina and restoration of vision by optogenetic stimulation

    NASA Astrophysics Data System (ADS)

    Shivalingaiah, Shivaranjani; Gu, Ling; Mohanty, Samarendra K.

    2011-03-01

    Stimulation of retinal neuronal cells using optogenetics, via channelrhodopsin-2 (ChR2) and blue light, has opened up a new direction for restoration of vision in the treatment of Retinitis pigmentosa (RP). In addition to delivering ChR2 to a specific retinal layer using genetic engineering, a threshold level of blue light needs to be delivered onto the retina to generate action potentials and a successful behavioral outcome. We report measurement of the intensity distribution of light reaching the retina of RP mouse models and compare those results with theoretical simulations of light propagation in the eye. The parameters for positioning the stimulating source in front of the eye were determined for optimal light delivery to the retina. In contrast to earlier viral-based delivery of ChR2 into retinal ganglion cells, an in-vivo electroporation method was employed for retinal transfection of RP mice. The behavioral improvement in mice with Thy1-ChR2-YFP transfected retinas, expressing ChR2 in retinal ganglion cells, was found to correlate with stimulation intensity.

  20. The impact on midlevel vision of statistically optimal divisive normalization in V1.

    PubMed

    Coen-Cagli, Ruben; Schwartz, Odelia

    2013-07-15

    The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one, but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks: while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality.
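    Canonical divisive normalization, the baseline nonlinearity discussed above, divides each unit's driven energy by the pooled energy of the population. A minimal sketch with illustrative parameters and a toy population (not the paper's statistically optimal, surround-specific model):

    ```python
    import numpy as np

    def divisive_normalization(drive, sigma=0.1):
        """Canonical normalization: r_i = d_i^2 / (sigma^2 + mean_j d_j^2),
        where d_i are the linear filter outputs of a model-V1 population."""
        energy = drive ** 2
        return energy / (sigma ** 2 + energy.mean())

    # The hallmark effect: population responses become nearly invariant to
    # stimulus contrast, because the normalization pool scales with the drive.
    rng = np.random.default_rng(1)
    pattern = rng.standard_normal(16)              # toy linear filter outputs
    high = divisive_normalization(2.0 * pattern)   # high-contrast stimulus
    low = divisive_normalization(0.5 * pattern)    # same stimulus at 1/4 contrast
    ratio = high.sum() / low.sum()                 # near 1; would be 16 unnormalized
    ```

    The semi-saturation constant sigma sets the contrast below which responses grow roughly with energy and above which they saturate.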

  1. Power and Vision: Group-Process Models Evolving from Social-Change Movements.

    ERIC Educational Resources Information Center

    Morrow, Susan L.; Hawxhurst, Donna M.

    1988-01-01

    Explores evolution of group process in social change movements, including the evolution of the new left, the cooperative movement, and the women's liberation movement. Proposes a group-process model that encourages people to share power and live their visions. (Author/NB)

  2. Poor Vision, Functioning, and Depressive Symptoms: A Test of the Activity Restriction Model

    ERIC Educational Resources Information Center

    Bookwala, Jamila; Lawson, Brendan

    2011-01-01

    Purpose: This study tested the applicability of the activity restriction model of depressed affect to the context of poor vision in late life. This model hypothesizes that late-life stressors contribute to poorer mental health not only directly but also indirectly by restricting routine everyday functioning. Method: We used data from a national…

  3. Inspiration and Intellect: Significant Learning in Musical Forms and Analysis

    ERIC Educational Resources Information Center

    Kelley, Bruce C.

    2009-01-01

    In his book "Creating Significant Learning Experiences" (2003), Dee Fink challenges professors to create a deep vision for the courses they teach. Educators often have a vision for what their courses could be, but often lack a model for instituting change. Fink's book provides that model. In this article, the author describes how this model helped…

  4. Non-Proliferative Diabetic Retinopathy Vision Simulator

    MedlinePlus


  5. Colour vision deficiency.

    PubMed

    Simunovic, M P

    2010-05-01

    Colour vision deficiency is one of the commonest disorders of vision and can be divided into congenital and acquired forms. Congenital colour vision deficiency affects as many as 8% of males and 0.5% of females--the difference in prevalence reflects the fact that the commonest forms of congenital colour vision deficiency are inherited in an X-linked recessive manner. Until relatively recently, our understanding of the pathophysiological basis of colour vision deficiency largely rested on behavioural data; however, modern molecular genetic techniques have helped to elucidate its mechanisms. The current management of congenital colour vision deficiency lies chiefly in appropriate counselling (including career counselling). Although visual aids may be of benefit to those with colour vision deficiency when performing certain tasks, the evidence suggests that they do not enable wearers to obtain normal colour discrimination. In the future, gene therapy remains a possibility, with animal models demonstrating amelioration following treatment.

  6. Technology for robotic surface inspection in space

    NASA Technical Reports Server (NTRS)

    Volpe, Richard; Balaram, J.

    1994-01-01

    This paper presents on-going research in robotic inspection of space platforms. Three main areas of investigation are discussed: machine vision inspection techniques, an integrated sensor end-effector, and an orbital environment laboratory simulation. Machine vision inspection utilizes automatic comparison of new and reference images to detect on-orbit induced damage such as micrometeorite impacts. The cameras and lighting used for this inspection are housed in a multisensor end-effector, which also contains a suite of sensors for detection of temperature, gas leaks, proximity, and forces. To fully test all of these sensors, a realistic space platform mock-up has been created, complete with visual, temperature, and gas anomalies. Further, changing orbital lighting conditions are effectively mimicked by a robotic solar simulator. In the paper, each of these technology components will be discussed, and experimental results are provided.

  7. Visual-conformal display format for helicopter guidance

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Schmerwitz, Sven; Lueken, Thomas

    2014-06-01

    Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides newly available sensors, which are able to "see" through darkness, fog and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. "Situational awareness" of humans is mainly powered by their visual channel. Therefore, display systems which are able to cross-fade seamlessly from natural vision to artificial computer vision and vice versa are of greatest interest within this context. Helmet-mounted displays (HMD) have this property when they apply a head-tracker for measuring the pilot's head orientation relative to the aircraft reference frame. Together with the aircraft's position and orientation relative to the world's reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world "visual-conformal". Published display formats for helicopter guidance in degraded visual environments mostly apply 2D symbologies, which fall far behind what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal within our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots provide initial evaluation results of our proposal.

  8. An Emphasis on Perception: Teaching Image Formation Using a Mechanistic Model of Vision.

    ERIC Educational Resources Information Center

    Allen, Sue; And Others

    An effective way to teach the concept of image is to give students a model of human vision which incorporates a simple mechanism of depth perception. In this study two almost identical versions of a curriculum in geometrical optics were created. One used a mechanistic, interpretive eye model, and in the other the eye was modeled as a passive,…

  9. GRAM Series of Atmospheric Models for Aeroentry and Aeroassist

    NASA Technical Reports Server (NTRS)

    Duvall, Aleta; Justus, C. G.; Keller, Vernon W.

    2005-01-01

    The eight destinations in the Solar System with sufficient atmosphere for either aeroentry or aeroassist, including aerocapture, are: Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune, and Saturn's moon Titan. Engineering-level atmospheric models for four of these (Earth, Mars, Titan, and Neptune) have been developed for use in NASA's systems analysis studies of aerocapture applications in potential future missions. Work has recently commenced on development of a similar atmospheric model for Venus. This series of MSFC-sponsored models is identified as the Global Reference Atmosphere Model (GRAM) series. An important capability of all of the models in the GRAM series is their ability to simulate quasi-random perturbations for Monte Carlo analyses in developing guidance, navigation and control algorithms, and for thermal systems design. Example applications for Earth aeroentry and Mars aerocapture systems analysis studies are presented and illustrated. Current and planned updates to the Earth and Mars atmospheric models, in support of NASA's new exploration vision, are also presented.
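    The quasi-random perturbation capability can be illustrated by drawing Monte Carlo density profiles about a mean atmosphere. This is a deliberately simplified, uncorrelated stand-in for GRAM's perturbation model (which correlates perturbations along the trajectory); the profile numbers are assumptions, not GRAM outputs.

```python
import numpy as np

def perturbed_density_profiles(mean_density, sigma_frac, n_samples, rng=None):
    """Draw Monte Carlo density profiles about a mean atmosphere.

    Each sample multiplies the mean profile by (1 + Gaussian noise) with
    a fractional 1-sigma amplitude sigma_frac, independently per altitude
    level -- a toy version of GRAM-style dispersions.
    """
    rng = rng or np.random.default_rng(42)
    noise = rng.normal(0.0, sigma_frac, size=(n_samples, len(mean_density)))
    return mean_density * (1.0 + noise)

# Illustrative exponential mean density vs. altitude (km), 10% 1-sigma.
alt = np.linspace(0, 100, 11)
rho_mean = 1.2 * np.exp(-alt / 8.5)          # kg/m^3, assumed scale height
samples = perturbed_density_profiles(rho_mean, 0.10, n_samples=1000)
```

    Dispersed profiles like these are what a guidance, navigation and control Monte Carlo would fly against to bound entry-corridor and heating uncertainty.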

  10. Evidence of Long Range Dependence and Self-similarity in Urban Traffic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakur, Gautam S; Helmy, Ahmed; Hui, Pan

    2015-01-01

    Transportation simulation technologies should accurately model traffic demand, distribution, and assignment parameters for urban environment simulation. These three parameters significantly impact the transportation engineering benchmarking process and are also critical in realizing realistic traffic modeling situations. In this paper, we model and characterize the traffic density distribution of thousands of locations around the world. The traffic densities are generated from millions of images collected over several years and processed using computer vision techniques. The resulting traffic density distribution time series are then analyzed. Using goodness-of-fit tests, it is found that the traffic density distributions follow heavy-tail models such as Log-gamma, Log-logistic, and Weibull in over 90% of analyzed locations. Moreover, a heavy tail gives rise to long-range dependence and self-similarity, which we studied by estimating the Hurst exponent (H). Our analysis based on seven different Hurst estimators strongly indicates that the traffic distribution patterns are stochastically self-similar (0.5 < H < 1.0). We believe this is an important finding that will influence the design and development of next-generation traffic simulation techniques and also aid in accurately modeling the traffic engineering of urban systems. In addition, it will provide a much-needed input for the development of smart cities.
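    Rescaled-range (R/S) analysis, one of the classical Hurst estimators, can be sketched as follows. This is a single simplified estimator for illustration (the study combines seven different estimators); H near 0.5 indicates no long-range dependence, while 0.5 < H < 1.0 suggests a self-similar, long-range-dependent process.

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes, rs_values = [], []
    size = min_chunk
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range of deviations
            s = chunk.std()                         # scale
            if s > 0:
                rs_per_chunk.append(r / s)
        sizes.append(size)
        rs_values.append(np.mean(rs_per_chunk))
        size *= 2
    # H is the slope of log(R/S) against log(chunk size).
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(0)
h_white = hurst_rs(rng.normal(size=4096))   # white noise: H near 0.5
```

    The simple R/S statistic is known to be biased upward for short series, which is one reason studies like this one cross-check several estimators.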

  11. Focal damage to macaque photoreceptors produces persistent visual loss

    PubMed Central

    Strazzeri, Jennifer M.; Hunter, Jennifer J.; Masella, Benjamin D.; Yin, Lu; Fischer, William S.; DiLoreto, David A.; Libby, Richard T.; Williams, David R.; Merigan, William H.

    2014-01-01

    Insertion of light-gated channels into inner retina neurons restores neural light responses, light-evoked potentials, visual optomotor responses and visually-guided maze behavior in mice blinded by retinal degeneration. This method of vision restoration bypasses damaged outer retina, providing stimulation directly to retinal ganglion cells in inner retina. The approach is similar to that of electronic visual prostheses, but may offer some advantages, such as avoidance of complex surgery and direct targeting of many thousands of neurons. However, the promise of this technique for restoring human vision remains uncertain because rodent models, in which it has been largely developed, are not ideal for evaluating visual perception. On the other hand, psychophysical vision studies in macaque can be used to evaluate different approaches to vision restoration in humans. Furthermore, it has not been possible to test vision restoration in macaques, the optimal model for human-like vision, because there has been no macaque model of outer retina degeneration. In this study, we describe the development of a macaque model of photoreceptor degeneration that can in future studies be used to test restoration of perception by visual prostheses. Our results show that perceptual deficits caused by focal light damage are restricted to locations at which photoreceptors are damaged, that optical coherence tomography (OCT) can be used to track such lesions, and that adaptive optics retinal imaging, which we recently used for in vivo recording of ganglion cell function, can be used in future studies to examine these lesions. PMID:24316158

  12. Extensive rill erosion and gullying on abandoned pit mining sites in Lusatia, Germany

    NASA Astrophysics Data System (ADS)

    Kunth, Franziska; Kaiser, Andreas; Vláčilová, Markéta; Schindewolf, Marcus; Schmidt, Jürgen

    2015-04-01

    As the major economic driver in the province of Lusatia, Eastern Germany, the large open-cast lignite mining sites characterize the landscape and leave behind vast areas of irreversibly changed post-mining landscapes. Cost-intensive renaturation projects have been implemented in order to restructure former mine sites into stable, self-sustaining ecosystems and local recreation areas. With considerable expenditure the pits are stabilized and flooded, and surrounding areas are restructured. Nevertheless, heavy soil erosion, extensive gullying, and slope instability are challenges for the restructuring and renaturation of the abandoned open-cast mining sites. The majority of the sites remain inaccessible to the public due to unstable conditions resulting in uncontrolled slides and large gullies. In this study a combined approach of UAV-based aerial imagery, 3D multi-vision surface reconstruction, and physically based soil erosion modelling is carried out in order to document, quantify, and better understand the causes of erosion processes on mining sites. Rainfall simulations have been carried out in Lusatian post-mining areas to reproduce soil detachment processes and observe the mechanisms responsible for the considerable erosion rates. Water repellency and soil sealing by biological crusts hindered infiltration and consequently increased runoff rates despite the mainly sandy soil texture. On non-vegetated experimental plots, runoff coefficients of up to 87% were measured. In a modelling routine for a major gully catchment, considering a 50-year rainfall event, simulation results reveal runoff coefficients of up to 84% and erosion rates of 118 Mg ha^-1. At the sediment pass-over point, 450 Mg of sediments enter the surface water bodies. A system response of this order of magnitude was unexpected by the authorities. 
By applying 3D multi-vision surface reconstruction, a model validation is now possible, which may further illustrate the great importance of soil conservation measures under the described conditions.

  13. Investigating industrial investigation: examining the impact of a priori knowledge and tunnel vision education.

    PubMed

    Maclean, Carla L; Brimacombe, C A Elizabeth; Lindsay, D Stephen

    2013-12-01

    The current study addressed tunnel vision in industrial incident investigation by experimentally testing how a priori information and a human bias (generated via the fundamental attribution error or correspondence bias) affected participants' investigative behavior as well as the effectiveness of a debiasing intervention. Undergraduates and professional investigators engaged in a simulated industrial investigation exercise. We found that participants' judgments were biased by knowledge about the safety history of either a worker or piece of equipment and that a human bias was evident in participants' decision making. However, bias was successfully reduced with "tunnel vision education." Professional investigators demonstrated a greater sophistication in their investigative decision making compared to undergraduates. The similarities and differences between these two populations are discussed. (c) 2013 APA, all rights reserved

  14. Increasing the Transfer of Simulation Technology from R&D into School Settings: An Approach to Evaluation from Overarching Vision to Individual Artifact in Education

    ERIC Educational Resources Information Center

    Blasi, Laura; Alfonso, Berta

    2006-01-01

    Building and evaluating artifacts specifically for K-12 education, technologists committed to design sciences are needed along with an approach to evaluation increasing the systemic transfer from research and development into school settings. The authors describe THE VIRTUAL LAB scanning electronic microscope simulation, including (a) its…

  15. Manufactured Porous Ambient Surface Simulants

    NASA Technical Reports Server (NTRS)

    Carey, Elizabeth M.; Peters, Gregory H.; Chu, Lauren; Zhou, Yu Meng; Cohen, Brooklin; Panossian, Lara; Green, Jacklyn R.; Moreland, Scott; Backes, Paul

    2016-01-01

    The planetary science decadal survey for 2013-2022 (Vision and Voyages, NRC 2011) has promoted mission concepts for sample acquisition from small solar system bodies. Numerous comet-sampling tools are in development to meet this standard. Manufactured Porous Ambient Surface Simulants (MPASS) materials provide an opportunity to simulate variable features at ambient temperatures and pressures to appropriately test potential sample acquisition systems for comets, asteroids, and planetary surfaces. The original "flavor" of MPASS materials is known as Manufactured Porous Ambient Comet Simulants (MPACS), which was developed in parallel with the Biblade Comet Sampling System (Backes et al., in review). The current suite of MPACS materials was developed through research into the physical and mechanical properties of comets from past comet mission results and modeling efforts, coordination with the science community at the Jet Propulsion Laboratory, and testing of a wide range of materials and formulations. These simulants were required to represent the physical and mechanical properties of cometary nuclei, based on the current understanding of the science community. Working with cryogenic simulants can be tedious and costly; thus MPACS is a suite of ambient simulants that yields a brittle failure mode similar to that of cryogenic icy materials. Here we describe our suite of comet simulants, known as MPACS, that will be used to test and validate the Biblade Comet Sampling System (Backes et al., in review).

  16. Simulation of thalamic prosthetic vision: reading accuracy, speed, and acuity in sighted humans.

    PubMed

    Vurro, Milena; Crowell, Anne Marie; Pezaris, John S

    2014-01-01

    The psychophysics of reading with artificial sight has received increasing attention as visual prostheses are becoming a real possibility to restore useful function to the blind through the coarse, pseudo-pixelized vision they generate. Studies to date have focused on simulating retinal and cortical prostheses; here we extend that work to report on thalamic designs. This study examined the reading performance of normally sighted human subjects using a simulation of three thalamic visual prostheses that varied in phosphene count, to help understand the level of functional ability afforded by thalamic designs in a task of daily living. Reading accuracy, reading speed, and reading acuity of 20 subjects were measured as a function of letter size, using a task based on the MNREAD chart. Results showed that fluid reading was feasible with appropriate combinations of letter size and phosphene count, and performance degraded smoothly as font size was decreased, with an approximate doubling of phosphene count resulting in an increase of 0.2 logMAR in acuity. Results here were consistent with previous results from our laboratory. Results were also consistent with those from the literature, despite using naive subjects who were not trained on the simulator, in contrast to other reports.

  17. A review of flight simulation techniques

    NASA Astrophysics Data System (ADS)

    Baarspul, Max

    After a brief historical review of the evolution of flight simulation techniques, this paper first deals with the main areas of flight simulator applications. Next, it describes the main components of a piloted flight simulator. Because of the presence of the pilot-in-the-loop, the digital computer driving the simulator must solve the aircraft equations of motion in ‘real-time’. Solutions to meet the high computing power required by today's modern flight simulators are elaborated. The physical similarity between aircraft and simulator in cockpit layout, flight instruments, flying controls etc., is discussed, based on the equipment and environmental cue fidelity required for training and research simulators. Visual systems play an increasingly important role in piloted flight simulation. The visual systems now available and most widely used are described, distinguishing between image generators and display devices. The characteristics of out-of-the-window visual simulation systems pertaining to the perceptual capabilities of human vision are discussed. Faithful reproduction of aircraft motion requires large travel, velocity and acceleration capabilities of the motion system. Different types and applications of motion systems in, e.g., airline training and research are described. The principles of motion cue generation, based on the characteristics of the non-visual human motion sensors, are described. The complete motion system, consisting of the hardware and the motion drive software, is discussed. The principles of mathematical modelling of the aerodynamic, flight control, propulsion, landing gear and environmental characteristics of the aircraft are reviewed. An example of the identification of an aircraft mathematical model, based on flight and taxi tests, is presented. Finally, the paper deals with the hardware and software integration of the flight simulator components and the testing and acceptance of the complete flight simulator. 
Examples of the so-called ‘Computer Generated Checkout’ and ‘Proof of Match’ are presented. The concluding remarks briefly summarize the status of flight simulator technology and consider possibilities for future research.

  18. Modeling of the First Layers in the Fly's Eye

    NASA Technical Reports Server (NTRS)

    Moya, J. A.; Wilcox, M. J.; Donohoe, G. W.

    1997-01-01

    Increased autonomy of robots would yield significant advantages in the exploration of space. The shortfalls of computer vision can, however, pose significant limitations on a robot's potential. At the same time, simple insects which are largely hard-wired have effective visual systems. The understanding of insect vision systems thus may lead to improved approaches to visual tasks. A good starting point for the study of a vision system is its eye. In this paper, a model of the sensory portion of the fly's eye is presented. The effectiveness of the model is briefly addressed by a comparison of its performance to experimental data.

  19. Modeling and Measuring the Structure of Professional Vision in Preservice Teachers

    ERIC Educational Resources Information Center

    Seidel, Tina; Stürmer, Kathleen

    2014-01-01

    Professional vision has been identified as an important element of teacher expertise that can be developed in teacher education. It describes the use of knowledge to notice and interpret significant features of classroom situations. Three aspects of professional vision have been described by qualitative research: describe, explain, and predict…

  20. Children with Developmental Coordination Disorder Benefit from Using Vision in Combination with Touch Information for Quiet Standing

    PubMed Central

    Bair, Woei-Nan; Barela, José A.; Whitall, Jill; Jeka, John J.; Clark, Jane E.

    2011-01-01

    In two experiments, the ability to use multisensory information (haptic information, provided by lightly touching a stationary surface, and vision) for quiet standing was examined in typically developing (TD) children, adults, and in 7-year-old children with Developmental Coordination Disorder (DCD). Four sensory conditions (no touch/no vision, with touch/no vision, no touch/with vision, and with touch/with vision) were employed. In experiment 1, we tested 4-, 6- and 8-year-old TD children and adults to provide a developmental landscape for performance on this task. In experiment 2, we tested a group of 7-year-old children with DCD and their age-matched TD peers. For all groups, touch robustly attenuated standing sway suggesting that children as young as 4 years old use touch information similarly to adults. Touch was less effective in children with DCD compared to their TD peers, especially in attenuating their sway velocity. Children with DCD, unlike their TD peers, also benefited from using vision to reduce sway. The present results suggest that children with DCD benefit from using vision in combination with touch information for standing control possibly due to their less well developed internal models of body orientation and self-motion. Internal model deficits, combined with other known deficits such as postural muscles activation timing deficits, may exacerbate the balance impairment in children with DCD. PMID:21571533

  1. Characterization of Stereo Vision Performance for Roving at the Lunar Poles

    NASA Technical Reports Server (NTRS)

    Wong, Uland; Nefian, Ara; Edwards, Larry; Furlong, Michael; Bouyssounouse, Xavier; To, Vinh; Deans, Matthew; Cannon, Howard; Fong, Terry

    2016-01-01

    Surface rover operations at the polar regions of airless bodies, particularly the Moon, are of interest to future NASA science missions such as Resource Prospector (RP). Polar optical conditions present challenges to conventional imaging techniques, with repercussions for driving, safeguarding, and science. High dynamic range, long cast shadows, opposition effects, and whiteout conditions are all significant factors in appearance. RP is currently undertaking an effort to characterize stereo vision performance in polar conditions through physical laboratory experimentation with regolith simulants, obstacle distributions, and oblique lighting.
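    The sensitivity of stereo ranging to the matching errors such lighting conditions induce follows from the triangulation relation Z = fB/d. A minimal sketch with illustrative (assumed) camera parameters, not RP's actual stereo pipeline:

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Triangulate depth from stereo disparity: Z = f * B / d.

    disparity_px: pixel offset of a feature between the left and right
    images; baseline_m: camera separation; focal_px: focal length in
    pixels.  Harsh polar lighting degrades matching, and the resulting
    disparity error maps directly into range error.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 0.5 px matching error at 10 px true disparity shifts range by ~5%.
z_true = depth_from_disparity(10.0, 0.3, 1000.0)   # 30 m
z_err  = depth_from_disparity(10.5, 0.3, 1000.0)   # ~28.6 m
```

    Because range error grows with the square of distance for a fixed disparity error, characterizing matching performance under oblique lighting matters most for far-field hazards.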

  2. A Vision on the Status and Evolution of HEP Physics Software Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canal, P.; Elvira, D.; Hatcher, R.

    2013-07-28

    This paper represents the vision of the members of the Fermilab Scientific Computing Division's Computational Physics Department (SCD-CPD) on the status and the evolution of various HEP software tools such as the Geant4 detector simulation toolkit, the Pythia and GENIE physics generators, and the ROOT data analysis framework. The goal of this paper is to contribute ideas to the Snowmass 2013 process toward the composition of a unified document on the current status and potential evolution of the physics software tools which are essential to HEP.

  3. Report for Task 8.4: Development of Control Room Layout Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, Robert

    Idaho National Laboratory (INL) has contracted Institutt for Energiteknikk (IFE) to support the development of an end-state vision for the US nuclear industry, and in particular for a utility that is currently moving forward with a control room modernization project. This support includes the development of an overview display and technical support in conducting an operational study, including the development of operational scenarios to be conducted using a full-scope simulator at the INL HSSL. Additionally, IFE will use the CREATE modelling tool to provide 3-D views of the potential and possible end state after the completion of the digital upgrade project.

  4. The Application of Leap Motion in Astronaut Virtual Training

    NASA Astrophysics Data System (ADS)

    Qingchao, Xie; Jiangang, Chao

    2017-03-01

    With the development of computer vision, virtual reality has been applied in astronaut virtual training. As an advanced optical device for hand tracking, Leap Motion can provide precise and fluid tracking of the hands, making it suitable as a gesture input device in astronaut virtual training. This paper builds an astronaut virtual training system based on Leap Motion and establishes a mathematical model of hand occlusion. The ability of Leap Motion to handle occlusion is then analysed. A virtual assembly simulation platform was developed for astronaut training, in which occluded gestures influence the recognition process. The experimental results can guide astronaut virtual training.

  5. Piloting Augmented Reality Technology to Enhance Realism in Clinical Simulation.

    PubMed

    Vaughn, Jacqueline; Lister, Michael; Shaw, Ryan J

    2016-09-01

    We describe a pilot study that incorporated an innovative hybrid simulation designed to increase the perception of realism in a high-fidelity simulation. Prelicensure students (N = 12), wearing Google Glass, a wearable head device that projected video into the students' field of vision, cared for a manikin in a simulation lab scenario. Students reported that the simulation gave them confidence that they were developing the skills and knowledge to perform necessary tasks in a clinical setting and that they met the learning objectives of the simulation. The video combined visual images and cues seen in a real patient and created a sense of realism the manikin alone could not provide.

  6. Comparative effects of pH and Vision herbicide on two life stages of four anuran amphibian species.

    PubMed

    Edginton, Andrea N; Sheridan, Patrick M; Stephenson, Gerald R; Thompson, Dean G; Boermans, Herman J

    2004-04-01

    Vision, a glyphosate-based herbicide containing a 15% (weight:weight) polyethoxylated tallow amine surfactant blend, and the concurrent factor of pH were tested to determine their interactive effects on early life-stage anurans. Ninety-six-hour laboratory static renewal studies, using the embryonic and larval life stages (Gosner 25) of Rana clamitans, R. pipiens, Bufo americanus, and Xenopus laevis, were performed under a central composite rotatable design. Mortality and the prevalence of malformations were modeled using generalized linear models with a profile deviance approach for obtaining confidence intervals. There was a significant (p < 0.05) interaction of pH with Vision concentration in all eight models, such that the toxicity of Vision was amplified by elevated pH. The surfactant is the major toxic component of Vision and is hypothesized, in this study, to be the source of the pH interaction. Larvae of B. americanus and R. clamitans were 1.5 to 3.8 times more sensitive than their corresponding embryos, whereas X. laevis and R. pipiens larvae were 6.8 to 8.9 times more sensitive. At pH values above 7.5, the Vision concentrations expected to kill 50% of the test larvae in 96-h (96-h lethal concentration [LC50]) were predicted to be below the expected environmental concentration (EEC) as calculated by Canadian regulatory authorities. The EEC value represents a worst-case scenario for aerial Vision application and is calculated assuming an application of the maximum label rate (2.1 kg acid equivalents [a.e.]/ha) into a pond 15 cm in depth. The EEC of 1.4 mg a.e./L (4.5 mg/L Vision) was not exceeded by 96-h LC50 values for the embryo test. The larvae of the four species were comparable in sensitivity. Field studies should be completed using the more sensitive larval life stage to test for Vision toxicity at actual environmental concentrations.

  7. Sand waves in environmental flows: Insights gained by coupling large-eddy simulation with morphodynamics

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Khosronejad, Ali

    2016-02-01

    Sand waves arise in subaqueous and Aeolian environments as the result of the complex interaction between turbulent flows and mobile sand beds. They occur across a wide range of spatial scales, evolve at temporal scales much slower than the integral scale of the transporting turbulent flow, dominate river morphodynamics, undermine streambank stability and infrastructure during flooding, and sculpt terrestrial and extraterrestrial landscapes. In this paper, we present the vision for our work over the last ten years, which has sought to develop computational tools capable of simulating the coupled interactions of sand waves with turbulence across the broad range of relevant scales: from small-scale ripples in laboratory flumes to mega-dunes in large rivers. We review the computational advances that have enabled us to simulate the genesis and long-term evolution of arbitrarily large and complex sand dunes in turbulent flows using large-eddy simulation and summarize numerous novel physical insights derived from our simulations. Our findings explain the role of turbulent sweeps in the near-bed region as the primary mechanism for destabilizing the sand bed, show that the seeds of the emergent structure in dune fields lie in the heterogeneity of the turbulence and bed shear stress fluctuations over the initially flat bed, and elucidate how large dunes at equilibrium give rise to energetic coherent structures and modify the spectra of turbulence. We also discuss future challenges and our vision for advancing a data-driven simulation-based engineering science approach for site-specific simulations of river flooding.

  8. The layered sensing operations center: a modeling and simulation approach to developing complex ISR networks

    NASA Astrophysics Data System (ADS)

    Curtis, Christopher; Lenzo, Matthew; McClure, Matthew; Preiss, Bruce

    2010-04-01

    In order to anticipate the constantly changing landscape of global warfare, the United States Air Force must acquire new capabilities in the field of Intelligence, Surveillance, and Reconnaissance (ISR). To meet this challenge, the Air Force Research Laboratory (AFRL) is developing a unifying construct of "Layered Sensing" which will provide military decision-makers at all levels with the timely, actionable, and trusted information necessary for complete battlespace awareness. Layered Sensing is characterized by the appropriate combination of sensors and platforms (including those for persistent sensing), infrastructure, and exploitation capabilities to enable this synergistic awareness. To achieve the Layered Sensing vision, AFRL is pursuing a Modeling & Simulation (M&S) strategy through the Layered Sensing Operations Center (LSOC). An experimental ISR system-of-systems test-bed, the LSOC integrates DoD standard simulation tools with commercial, off-the-shelf video game technology for rapid scenario development and visualization. These tools will help facilitate sensor management performance characterization, system development, and operator behavioral analysis. Flexible and cost-effective, the LSOC will implement a non-proprietary, open-architecture framework with well-defined interfaces. This framework will incentivize the transition of current ISR performance models to service-oriented software design for maximum re-use and consistency. This paper will present the LSOC's development and implementation thus far as well as a summary of lessons learned and future plans for the LSOC.

  9. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10x10 to 32x32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.
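    The grid simulation described above can be sketched generically as block-averaging an image into a phosphene grid with a limited number of gray levels and random dot dropout. This is a reconstruction of the general idea only; the block shapes, parameter values, and quantization scheme below are illustrative assumptions, not the study's headset rendering.

```python
import numpy as np

def pixelize(image, grid=16, gray_levels=8, dropout=0.0, rng=None):
    """Render a grayscale image as a coarse phosphene grid.

    Each grid cell takes the mean of its image block, quantized to
    gray_levels values; a fraction of phosphenes is randomly dropped
    (set to zero), mimicking dropout test conditions.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    bh, bw = h // grid, w // grid
    out = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            block = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            out[i, j] = block.mean()
    # Quantize to the available gray levels over the 0..255 range.
    out = np.round(out / 255 * (gray_levels - 1)) / (gray_levels - 1) * 255
    # Random phosphene dropout.
    out[rng.random((grid, grid)) < dropout] = 0
    return out

img = np.tile(np.linspace(0, 255, 64), (64, 1))   # horizontal gradient
phos = pixelize(img, grid=16, gray_levels=4)
```

    Sweeping grid size, gray levels, and dropout in such a simulator is what lets studies like this map recognition performance against prosthesis design parameters before any device is built.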

  10. [Manufacture method and clinical application of minimally invasive dental implant guide template based on registration technology].

    PubMed

    Lin, Zeming; He, Bingwei; Chen, Jiang; D u, Zhibin; Zheng, Jingyi; Li, Yanqin

    2012-08-01

    To guide doctors in precisely positioning surgical operations, a new production method for a minimally invasive implant guide template is presented. The mandible of the patient was scanned by a CT scanner, and a three-dimensional jaw bone model was constructed from the CT image data. The professional dental implant software Simplant was used to simulate the implant placement on the three-dimensional CT model to determine the location and depth of implants. At the same time, the dental plaster models were scanned by a stereo vision system to build the oral mucosa model. Next, curvature registration technology was used to fuse the oral mucosa model and the CT model, so that the designed position of the implant in the oral mucosa could be determined. The minimally invasive implant guide template was designed in 3-Matic software according to the designed implant position and the oral mucosa model. Finally, the template was produced by rapid prototyping. The three-dimensional registration technology was useful for fusing the CT data and the dental plaster data, and the resulting template was accurate enough to guide doctors during the actual implantation without cutting the mucosa. Guide templates fabricated by the combined use of three-dimensional registration, Simplant simulation, and rapid prototyping are accurate and enable minimally invasive, accurate implant surgery; this technique is worthy of clinical use.

  11. A novel upper limb rehabilitation system with self-driven virtual arm illusion.

    PubMed

    Aung, Yee Mon; Al-Jumaily, Adel; Anam, Khairul

    2014-01-01

    This paper proposes a novel upper extremity rehabilitation system with a virtual arm illusion. It aims at fast recovery of upper limb functions lost as a result of stroke, providing a novel rehabilitation system for paralyzed patients. The system integrates a number of technologies, including Augmented Reality (AR) to develop game-like exercises, computer vision to create the illusion scene, 3D modeling and model simulation, and signal processing to detect user intention via the EMG signal. The effectiveness of the developed system was evaluated via a usability study and questionnaires, represented by graphical and analytical methods. The evaluation provided positive results, indicating that the developed system has potential as an effective rehabilitation system for upper limb impairment.

  12. Increased intracranial pressure in mini-pigs exposed to simulated solar particle event radiation

    NASA Astrophysics Data System (ADS)

    Sanzari, Jenine K.; Muehlmatt, Amy; Savage, Alexandria; Lin, Liyong; Kennedy, Ann R.

    2014-02-01

    Changes in intracranial pressure (ICP) during space flight have stimulated an area of research in space medicine. It is widely speculated that elevations in ICP contribute to structural and functional ocular changes, including deterioration in vision, which is also observed during space flight. The aim of this study was to investigate changes in opening pressure (OP) occurring as a result of ionizing radiation exposure (at doses and dose-rates relevant to solar particle event radiation). We used a large animal model, the Yucatan mini-pig, and were able to obtain measurements over a 90 day period. This is the first investigation to show long term recordings of ICP in a large animal model without an invasive craniotomy procedure. Further, this is the first investigation reporting increased ICP after radiation exposure.

  13. Separation of presampling and postsampling modulation transfer functions in infrared sensor systems

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Olson, Jeffrey T.; O'Shea, Patrick D.; Hodgkin, Van A.; Jacobs, Eddie L.

    2006-05-01

    New methods of measuring the modulation transfer function (MTF) of electro-optical sensor systems are investigated. These methods are designed to allow the separation and extraction of presampling and postsampling components from the total system MTF. The presampling MTF includes all the effects prior to the sampling stage of the imaging process, such as optical blur and detector shape. The postsampling MTF includes all the effects after sampling, such as interpolation filters and display characteristics. Simulation and laboratory measurements are used to assess the utility of these techniques. Knowledge of these components and inclusion into sensor models, such as the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's NVThermIP, will allow more accurate modeling and complete characterization of sensor performance.
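    The factorization described above can be sketched numerically: if the total system MTF is the product of the presampling and postsampling components, then knowing one component allows the other to be extracted by pointwise division. The Gaussian-optics and sinc model forms below are generic textbook choices for illustration, not the paper's measured data.

```python
# Illustrative sketch: total MTF as a product of presampling and
# postsampling components, and extraction of one by division.
import math

def presampling_mtf(f, optics_sigma=0.35, det_pitch=1.0):
    """Optical blur (Gaussian) times detector aperture (sinc)."""
    optics = math.exp(-2 * (math.pi * optics_sigma * f) ** 2)
    x = math.pi * det_pitch * f
    detector = 1.0 if x == 0 else math.sin(x) / x
    return optics * abs(detector)

def postsampling_mtf(f, display_pitch=0.5):
    """Display/interpolation response, modeled here as a sinc."""
    x = math.pi * display_pitch * f
    return 1.0 if x == 0 else abs(math.sin(x) / x)

freqs = [0.05 * k for k in range(1, 10)]
total = [presampling_mtf(f) * postsampling_mtf(f) for f in freqs]

# Given the total MTF and the presampling component, recover the
# postsampling component by division:
recovered = [t / presampling_mtf(f) for f, t in zip(freqs, total)]
for f, r in zip(freqs, recovered):
    assert abs(r - postsampling_mtf(f)) < 1e-12
```

    The division step is only well-conditioned where the presampling MTF is not close to zero, which is one reason careful measurement methods matter.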

  14. Search times and probability of detection in time-limited search

    NASA Astrophysics Data System (ADS)

    Wilson, David; Devitt, Nicole; Maurer, Tana

    2005-05-01

    When modeling the search and target acquisition process, probability of detection as a function of time is important to war games and physical entity simulations. Recent US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate modeling of search and detection has focused on time-limited search. Developing the relationship between detection probability and search time as a differential equation is explored. One of the parameters in the current formula for probability of detection in time-limited search corresponds to the mean time to detect in time-unlimited search. However, the mean time to detect in time-limited search is shorter than the mean time to detect in time-unlimited search. A simple mathematical relationship between these two mean times is derived.
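    The relationship between the two mean times can be illustrated with the commonly used exponential detection model, P(t) = P_inf * (1 - exp(-t / tau)), where tau is the mean time to detect in unlimited search. This is a sketch under that assumed model, not the paper's derivation; the parameter values are invented for illustration.

```python
# For exponential detection times with mean tau, the mean time among
# detections that occur before a time limit T is the mean of an
# exponential truncated at T, which is always shorter than tau.
import math

def mean_time_limited(tau, T):
    """E[t | detection occurs before T] for exponential detection times."""
    return tau - T * math.exp(-T / tau) / (1.0 - math.exp(-T / tau))

tau = 10.0  # unlimited-search mean time to detect (illustrative)
T = 12.0    # search time limit (illustrative)
m = mean_time_limited(tau, T)

# Numerical check by midpoint integration of t * pdf(t) on [0, T]:
n = 200000
dt = T / n
num = sum((i + 0.5) * dt * math.exp(-(i + 0.5) * dt / tau) / tau * dt
          for i in range(n))
den = 1.0 - math.exp(-T / tau)
assert abs(num / den - m) < 1e-4
assert m < tau  # the time-limited mean is shorter, as the paper notes
```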

  15. VISION: A Model of Culture for Counselors.

    ERIC Educational Resources Information Center

    Baber, W. Lorenzo; Garrett, Michael T.; Holcomb-McCoy, Cheryl

    1997-01-01

    Culture as a group phenomenon versus the need of counselors to work with the individual is addressed. The VISION model of culture, which accounts for within-group and between-group differences, the disappearance of groups, and the emergence of new ones, is presented. Two examples of multicultural interventions are reported. (Author/EMK)

  16. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    PubMed

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.
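    The CDS-to-PDS pooling step described above can be sketched with rate-based toy units (the actual model uses spiking neurons and the Simoncelli-Heeger motion energy front end; the tuning curves and opponent weights below are illustrative stand-ins).

```python
# Minimal sketch of pattern-direction selectivity: a PDS unit pools
# component-direction-selective (CDS) responses over a wide range of
# preferred directions, with broad opponent weights.
import math

N_DIRS = 24
dirs = [2 * math.pi * k / N_DIRS for k in range(N_DIRS)]

def cds_response(pref, grating_dir):
    """Toy CDS tuning: half-wave rectified cosine around the grating's
    drift direction."""
    return max(math.cos(pref - grating_dir), 0.0)

def pds_response(pattern_dir, grating_dirs):
    """Pool CDS cells with broad cosine weights centered on pattern_dir."""
    total = 0.0
    for pref in dirs:
        w = math.cos(pref - pattern_dir)  # opponent weights (can be < 0)
        drive = sum(cds_response(pref, g) for g in grating_dirs)
        total += w * drive
    return max(total, 0.0)

# A plaid made of two gratings at +/- 60 degrees around 0 drives the
# PDS unit most strongly in the pattern direction (0 rad), not in
# either component direction.
plaid = [math.radians(60), math.radians(-60)]
best = max(dirs, key=lambda d: pds_response(d, plaid))
assert abs(best) < 1e-9  # peak at the pattern direction
```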

  17. Creating a Realistic IT Vision: The Roles and Responsibilities of a Chief Information Officer.

    ERIC Educational Resources Information Center

    Penrod, James I.

    2003-01-01

    Discusses the crucial position of the chief information officer (CIO) at higher education institutions and reviews the six major stages of information technology (IT) planning. Includes fundamental elements related to an IT vision; roles of the CIO; the six-stage planning model for a realistic IT vision; and factors for success. (AEF)

  18. Bayesian Modeling of Perceived Surface Slant from Actively-Generated and Passively-Observed Optic Flow

    PubMed Central

    Caudek, Corrado; Fantoni, Carlo; Domini, Fulvio

    2011-01-01

    We measured perceived depth from the optic flow (a) when showing a stationary physical or virtual object to observers who moved their head at a normal or slower speed, and (b) when simulating the same optic flow on a computer and presenting it to stationary observers. Our results show that perceived surface slant is systematically distorted, for both the active and the passive viewing of physical or virtual surfaces. These distortions are modulated by head translation speed, with perceived slant increasing directly with the local velocity gradient of the optic flow. This empirical result allows us to determine the relative merits of two alternative approaches aimed at explaining perceived surface slant in active vision: an “inverse optics” model that takes head motion information into account, and a probabilistic model that ignores extra-retinal signals. We compare these two approaches within the framework of the Bayesian theory. The “inverse optics” Bayesian model produces veridical slant estimates if the optic flow and the head translation velocity are measured with no error; because of the influence of a “prior” for flatness, the slant estimates become systematically biased as the measurement errors increase. The Bayesian model, which ignores the observer's motion, always produces distorted estimates of surface slant. Interestingly, the predictions of this second model, not those of the first one, are consistent with our empirical findings. The present results suggest that (a) in active vision perceived surface slant may be the product of probabilistic processes which do not guarantee the correct solution, and (b) extra-retinal signals may be mainly used for a better measurement of retinal information. PMID:21533197
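    The effect of a "prior for flatness" on the inverse-optics model can be sketched with the standard Gaussian prior-likelihood combination: the posterior mean shrinks the measured slant toward zero, and the bias grows with measurement noise. The numbers are illustrative, not the study's fitted values.

```python
# Gaussian prior N(0, sigma_prior^2) for flatness, Gaussian likelihood
# N(measured, sigma_meas^2): the posterior mean is a shrinkage of the
# measurement toward zero slant.
def posterior_slant(measured, sigma_meas, sigma_prior):
    """Posterior mean slant under Gaussian prior and likelihood."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
    return w * measured

true_slant = 30.0  # degrees, illustrative
low_noise = posterior_slant(true_slant, sigma_meas=2.0, sigma_prior=20.0)
high_noise = posterior_slant(true_slant, sigma_meas=15.0, sigma_prior=20.0)

# With error-free measurements the estimate is veridical; as noise
# grows, the flatness prior biases the estimate toward zero.
assert posterior_slant(true_slant, 0.0, 20.0) == true_slant
assert high_noise < low_noise < true_slant
```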

  19. The Efficacy of Using Synthetic Vision Terrain-Textured Images to Improve Pilot Situation Awareness

    NASA Technical Reports Server (NTRS)

    Uenking, Michael D.; Hughes, Monica F.

    2002-01-01

    The General Aviation Element of the Aviation Safety Program's Synthetic Vision Systems (SVS) Project is developing technology to eliminate low visibility induced General Aviation (GA) accidents. SVS displays present computer generated 3-dimensional imagery of the surrounding terrain on the Primary Flight Display (PFD) to greatly enhance pilots' situation awareness (SA), reducing or eliminating Controlled Flight into Terrain, as well as Low-Visibility Loss of Control accidents. SVS-conducted research is facilitating development of display concepts that provide the pilot with an unobstructed view of the outside terrain, regardless of weather conditions and time of day. A critical component of SVS displays is the appropriate presentation of terrain to the pilot. An experimental study is being conducted at NASA Langley Research Center (LaRC) to explore and quantify the relationship between the realism of the terrain presentation and resulting enhancements of pilot SA and performance. Composed of complementary simulation and flight test efforts, Terrain Portrayal for Head-Down Displays (TP-HDD) experiments will help researchers evaluate critical terrain portrayal concepts. The experimental effort is intended to provide data to enable design trades that optimize SVS applications, as well as develop requirements and recommendations to facilitate the certification process. In this part of the experiment a fixed-base flight simulator was equipped with various types of head-down flight displays, ranging from conventional round dials (typical of most GA aircraft) to glass cockpit style PFDs. The variations of the PFD included an assortment of texturing and Digital Elevation Model (DEM) resolution combinations. A test matrix of 10 terrain display configurations (in addition to the baseline displays) was evaluated by 27 pilots of various backgrounds and experience levels.
Qualitative (questionnaires) and quantitative (pilot performance and physiological) data were collected during the experimental runs. This paper focuses on the experimental set-up and final physiological results of the TP-HDD simulation experiment. The physiological measures of skin temperature, heart rate, and muscle response, show a decreased engagement (while using the synthetic vision displays as compared to the baseline conventional display) of the sympathetic and somatic nervous system responses which, in turn, indicates a reduced level of mental workload. This decreased level of workload is expected to enable improvement in the pilot's situation and terrain awareness.

  20. Real-time simulation of combined short-wave and long-wave infrared vision on a head-up display

    NASA Astrophysics Data System (ADS)

    Peinecke, Niklas; Schmerwitz, Sven

    2014-05-01

    Landing under adverse weather conditions can be challenging, even if the airfields are well known to the pilots. This is true for civil as well as military aviation. Within the scope of this paper we concentrate especially on fog conditions. The work has been conducted within the project ALICIA. ALICIA is a research and development project co-funded by the European Commission under the Seventh Framework Programme. ALICIA aims at developing new and scalable cockpit applications which can extend operations of aircraft in degraded conditions: All Conditions Operations. One of the systems developed is a head-up display that can display a generated symbology together with a raster-mode infrared image. We will detail how we implemented a real-time enabled simulation of a combined short-wave and long-wave infrared image for landing. A major challenge was to integrate several already existing simulation solutions, e.g., for visual simulation and sensors, with the required databases. For the simulations DLR's in-house sensor simulation framework F3S was used, together with a commercially available airport model that had to be heavily modified in order to provide realistic infrared data. Special effort was invested in a realistic impression of runway lighting under foggy conditions. We will present results and sketch further improvements for future simulations.

  1. The theoretical simulation on electrostatic distribution of 1st proximity region in proximity focusing low-light-level image intensifier

    NASA Astrophysics Data System (ADS)

    Zhang, Liandong; Bai, Xiaofeng; Song, De; Fu, Shencheng; Li, Ye; Duanmu, Qingduo

    2015-03-01

    Low-light-level night vision technology amplifies low-light signals until they are visible to the naked eye, using photons and photoelectrons as the information carriers. It was the invention of the micro-channel plate (MCP) that made high-performance, miniaturized low-light-level night vision devices possible. One such device is the double-proximity-focusing low-light-level image intensifier, which places a micro-channel plate close to both the photocathode and the phosphor screen. The advantages of proximity focusing are small size, light weight, low power consumption, freedom from distortion, fast response, wide dynamic range, and so on. The micro-channel plate (with metal electrodes on both faces), the photocathode, and the phosphor screen are placed parallel to one another. When the image intensifier operates, a voltage is applied between the photocathode and the input face of the micro-channel plate. Electrons photoemitted from the photocathode travel toward the micro-channel plate under the electric field in the 1st proximity focusing region and are then multiplied within the micro-channels. Once the distribution of electrostatic equipotential lines in the 1st proximity focusing region is determined, the trajectories of the emitted electrons can be calculated and simulated, and from these the resolution of the image tube can be determined. However, the electrostatic field and equipotential line distributions are complex because of the many micro-channels in the micro-channel plate. This paper simulates the electrostatic distribution of the 1st proximity region in a double-proximity-focusing low-light-level image intensifier with the finite-element analysis software Ansoft Maxwell 3D. The electrostatic field distributions of the 1st proximity region are compared as the micro-channel plate's pore size, spacing, and inclination angle are varied. We believe that the electron beam trajectories in the 1st proximity region can be better simulated once these electrostatic fields have been simulated.
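    For intuition about the scale of the 1st proximity region, a uniform-field approximation of the gap gives the electron transit time and the lateral spread in closed form. This is a back-of-the-envelope sketch only: the paper's point is precisely that the real field near the MCP pores is not uniform, which is why finite-element simulation is needed. Gap, voltage, and emission velocity values are illustrative.

```python
# Electron crossing a uniform-field proximity gap, starting at rest
# along the axis but with some transverse emission velocity.
import math

Q = 1.602176634e-19   # electron charge, C
M = 9.1093837015e-31  # electron mass, kg

def transit_time(gap_m, voltage):
    """Time to cross a uniform-field gap from rest: d = a*t^2/2."""
    accel = Q * voltage / (M * gap_m)
    return math.sqrt(2 * gap_m / accel)

def lateral_spread(gap_m, voltage, v_transverse):
    """Sideways drift accumulated during transit; limits resolution."""
    return v_transverse * transit_time(gap_m, voltage)

gap = 0.2e-3   # 0.2 mm photocathode-to-MCP gap (illustrative)
volts = 200.0  # cathode-to-MCP-input voltage (illustrative)
v_t = 3.0e5    # transverse emission velocity, m/s (illustrative)

t = transit_time(gap, volts)
dx = lateral_spread(gap, volts, v_t)
assert t < 1e-9  # sub-nanosecond transit
assert dx < gap  # spread well below the gap size at this voltage
```

    Raising the gap voltage shortens the transit time and thus shrinks the lateral spread, which is one reason the field distribution in this region governs resolution.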

  2. Why an Eye Limiting Display Resolution Matters

    NASA Technical Reports Server (NTRS)

    Kato, Kenji Hiroshi

    2013-01-01

    Many factors affect the suitability of an out-the-window simulator visual system. Contrast, brightness, resolution, field-of-view, update rate, scene content and a number of other criteria are common factors often used to define requirements for simulator visual systems. For the past 7 years, NASA has worked with the USAF on the Operational Based Vision Assessment (OBVA) program. The purpose of this program has been to provide the USAF School of Aerospace Medicine with a scientific testing laboratory to study human vision and testing standards in an operationally relevant environment. It was determined early in the design that current commercial and military training systems weren't well suited to the available budget or to the highly research-oriented requirements. During various design review meetings, it was determined that the OBVA requirements were best met by using commercial-off-the-shelf equipment to minimize technical risk and costs. In this paper we will describe how the simulator specifications were developed in order to meet the research objectives and the resulting architecture and design considerations. In particular we will discuss the image generator architecture and database developments needed to achieve eye-limited resolution.
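    The pixel budget implied by "eye limiting" can be sketched with standard rules of thumb: 20/20 acuity resolves roughly 30 cycles per degree, and Nyquist sampling requires at least 2 pixels per cycle, so an eye-limiting display needs about 60 pixels per degree, and more for observers with better-than-20/20 vision. These are textbook figures, not the OBVA facility's actual specifications.

```python
# Why eye-limited resolution drives image generator pixel counts.
def pixels_per_degree(cycles_per_degree, pixels_per_cycle=2):
    """Nyquist-style pixel density for a given resolvable frequency."""
    return cycles_per_degree * pixels_per_cycle

def channel_pixels(h_fov_deg, v_fov_deg, ppd):
    """Total pixels for one display channel covering the given FOV."""
    return (h_fov_deg * ppd) * (v_fov_deg * ppd)

ppd_2020 = pixels_per_degree(30)  # 20/20 observer: ~60 px/deg
ppd_2010 = pixels_per_degree(60)  # a 20/10 observer doubles the density

assert ppd_2020 == 60
# Doubling angular density quadruples the pixel count per channel:
assert channel_pixels(40, 30, ppd_2010) == 4 * channel_pixels(40, 30, ppd_2020)
```

    Since vision-assessment subjects can exceed 20/20 acuity, the required pixel counts scale up quickly, which is why display resolution dominated the design trade-offs.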

  3. Global vision system in laparoscopy.

    PubMed

    Rivas-Blanco, I; Sánchez-de-Badajoz, E; García-Morales, I; Lage-Sánchez, J M; Sánchez-Gallegos, P; Pérez-Del-Pulgar, C J; Muñoz, V F

    2017-05-01

    The main difficulty in laparoscopic or robot-assisted surgery is the narrow visual field, restricted by the endoscope's access port. This restriction is coupled with the difficulty of handling the instruments, which is due not only to the access port but also to the loss of depth of field and perspective due to the lack of natural lighting. In this article, we describe a global vision system and report on our initial experience in a porcine model. The global vision system consists of a series of intraabdominal devices, which increase the visual field and help recover perspective through the simulation of natural shadows. These devices are a series of high-definition cameras and LED lights, which are inserted and fixed to the wall using magnets. The system's efficacy was assessed in a varicocelectomy and nephrectomy. The various intraabdominal cameras offer a greater number of intuitive points of view of the surgical field compared with the conventional telescope and appear to provide a similar view as that in open surgery. Areas previously inaccessible to the standard telescope can now be reached. The additional light sources create shadows that increase the perspective of the surgical field. This system appears to increase the possibilities for laparoscopic or robot-assisted surgery because it offers an instant view of almost the entire abdomen, enabling more complex procedures, which currently require an open pathway.

  4. Design and implementation of a remote UAV-based mobile health monitoring system

    NASA Astrophysics Data System (ADS)

    Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix

    2017-04-01

    Unmanned aerial vehicles (UAVs) play increasing roles in structure health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures becomes an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit back the monitoring information in real-time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. The computation-effective vision-based tracking solution based on a single feature is an improvement over existing vision-based lead-follower tracking systems, which either perform poorly when using a single feature or achieve improved tracking performance at the cost of using multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas is used to enable robust real-time transmission of monitoring video streams over a long distance. Automatic heading control is used to self-align the headings of the directional antennas to enable robust communication while in motion. Compared to existing omni-communication systems, the directional communication solution significantly increases the operation range of remote monitoring systems. In this paper, we develop the integrated modeling framework of camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.

  5. The eyes and vision of butterflies.

    PubMed

    Arikawa, Kentaro

    2017-08-15

    Butterflies use colour vision when searching for flowers. Unlike the trichromatic retinas of humans (blue, green and red cones; plus rods) and honeybees (ultraviolet, blue and green photoreceptors), butterfly retinas typically have six or more photoreceptor classes with distinct spectral sensitivities. The eyes of the Japanese yellow swallowtail (Papilio xuthus) contain ultraviolet, violet, blue, green, red and broad-band receptors, with each ommatidium housing nine photoreceptor cells in one of three fixed combinations. The Papilio eye is thus a random patchwork of three types of spectrally heterogeneous ommatidia. To determine whether Papilio use all of their receptors to see colours, we measured their ability to discriminate monochromatic lights of slightly different wavelengths. We found that Papilio can detect differences as small as 1-2 nm in three wavelength regions, rivalling human performance. We then used mathematical modelling to infer which photoreceptors are involved in wavelength discrimination. Our simulation indicated that the Papilio vision is tetrachromatic, employing the ultraviolet, blue, green and red receptors. The random array of three ommatidial types is a common feature in butterflies. To address the question of how the spectrally complex eyes of butterflies evolved, we studied their developmental process. We have found that the development of butterfly eyes shares its molecular logic with that of Drosophila: the three-way stochastic expression pattern of the transcription factor Spineless determines the fate of ommatidia, creating the random array in Papilio.

  6. Blindness and low vision in The Netherlands from 2000 to 2020-modeling as a tool for focused intervention.

    PubMed

    Limburg, Hans; Keunen, Jan E E

    2009-01-01

    To estimate the magnitude and causes of blindness and low vision in The Netherlands from 2000 to 2020. Recent population-based blindness surveys in established market economies were reviewed. Age and gender specific prevalence and causes of blindness and low vision were extracted and calculated for six population subgroups in The Netherlands. A mathematical model was developed to relate the epidemiologic data with demographic data for each subgroup for each year between 2000 and 2020. In 2008 an estimated 311,000 people are visually impaired in The Netherlands: 77,000 are blind and 234,000 have low vision. With current levels of intervention the number may increase by 18% to 367,000 in 2020. Visual impairment is most prevalent among residents of nursing homes and care institutions for the elderly, intellectually disabled persons and people aged 50+ living independently. Of all people with visual impairment, 31% are male (97,000) and 69% are female (214,000). More than half of all visual impairment (56%; 174,000 persons) is avoidable. A variation of around 20% might be applied to the numbers in these estimates. The aim of VISION 2020: The Right to Sight to reduce avoidable visual impairment is also relevant for developed countries like The Netherlands. Vision screening and awareness campaigns focusing on the identified risk groups can reduce avoidable blindness considerably. Regular updates of the model will ensure that the prognoses remain valid and relevant. With appropriate demographic data, the model can also be used in other established market economies.
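    The core of such a projection model is straightforward: subgroup-specific prevalence figures multiplied by demographic projections, summed per year. The sketch below shows the structure with invented prevalence and population numbers, not the study's data.

```python
# Project visual impairment cases by combining subgroup prevalence
# with per-year population projections.
def projected_cases(prevalence, population_by_year):
    """prevalence: {subgroup: fraction}; population_by_year:
    {year: {subgroup: headcount}} -> {year: estimated cases}."""
    return {
        year: sum(prevalence[g] * n for g, n in pops.items())
        for year, pops in population_by_year.items()
    }

# Illustrative inputs only (two of the study's six subgroups):
prevalence = {"50+_independent": 0.02, "nursing_home": 0.30}
population = {
    2000: {"50+_independent": 4_000_000, "nursing_home": 100_000},
    2020: {"50+_independent": 5_000_000, "nursing_home": 120_000},
}
cases = projected_cases(prevalence, population)
assert round(cases[2000]) == 110_000
assert cases[2020] > cases[2000]  # an aging population raises the total
```

    Updating either input, new survey prevalence or revised demographic forecasts, immediately refreshes the prognosis, which is what keeps such a model useful over a 20-year horizon.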

  7. Color vision impairment in multiple sclerosis points to retinal ganglion cell damage.

    PubMed

    Lampert, E J; Andorra, M; Torres-Torres, R; Ortiz-Pérez, S; Llufriu, S; Sepúlveda, M; Sola, N; Saiz, A; Sánchez-Dalmau, B; Villoslada, P; Martínez-Lapiscina, Elena H

    2015-11-01

    Multiple Sclerosis (MS) results in color vision impairment regardless of optic neuritis (ON). The exact location of injury remains undefined. The objective of this study is to identify the region leading to dyschromatopsia in MS patients' NON-eyes. We evaluated Spearman correlations between color vision and measures of different regions in the afferent visual pathway in 106 MS patients. Regions with significant correlations were included in logistic regression models to assess their independent role in dyschromatopsia. We evaluated color vision with Hardy-Rand-Rittler plates and retinal damage using Optical Coherence Tomography. We ran SIENAX to measure Normalized Brain Parenchymal Volume (NBPV), FIRST for thalamus volume and Freesurfer for visual cortex areas. We found moderate, significant correlations between color vision and macular retinal nerve fiber layer (rho = 0.289, p = 0.003), ganglion cell complex (GCC = GCIP) (rho = 0.353, p < 0.001), thalamus (rho = 0.361, p < 0.001), and lesion volume within the optic radiations (rho = -0.230, p = 0.030). Only GCC thickness remained significant (p = 0.023) in the logistic regression model. In the final model including lesion load and NBPV as markers of diffuse neuroaxonal damage, GCC remained associated with dyschromatopsia [OR = 0.88 95 % CI (0.80-0.97) p = 0.016]. This association remained significant when we also added sex, age, and disease duration as covariates in the regression model. Dyschromatopsia in NON-eyes is due to damage of retinal ganglion cells (RGC) in MS. Color vision can serve as a marker of RGC damage in MS.
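    The reported odds ratio can be read through the standard logistic regression relationship: OR = 0.88 per micrometer of GCC thickness means each 1-um increase multiplies the odds of dyschromatopsia by 0.88, so thinning raises the odds. The sketch below uses the paper's reported OR; everything else is illustrative.

```python
# Interpreting a per-unit odds ratio from a logistic regression.
import math

OR_PER_UM = 0.88               # reported OR per um of GCC thickness
beta = math.log(OR_PER_UM)     # the corresponding logistic coefficient

def odds_multiplier(delta_gcc_um):
    """Multiplicative change in odds for a given change in GCC (um)."""
    return math.exp(beta * delta_gcc_um)

# A GCC that is 10 um thinner (delta = -10) raises the odds of
# dyschromatopsia roughly 3.6-fold under this model:
mult = odds_multiplier(-10)
assert abs(mult - 0.88 ** -10) < 1e-9
assert mult > 3
```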

  8. Bringing UAVs to the fight: recent army autonomy research and a vision for the future

    NASA Astrophysics Data System (ADS)

    Moorthy, Jay; Higgins, Raymond; Arthur, Keith

    2008-04-01

    The Unmanned Autonomous Collaborative Operations (UACO) program was initiated in recognition of the high operational burden associated with utilizing unmanned systems by both mounted and dismounted, ground and airborne warfighters. The program was previously introduced at the 62nd Annual Forum of the American Helicopter Society in May of 2006. This paper presents the three technical approaches taken and results obtained in UACO. All three approaches were validated extensively in contractor simulations, two were validated in government simulation, one was flight tested outside the UACO program, and one was flight tested in Part 2 of UACO. Results and recommendations are discussed regarding diverse areas such as user training and human-machine interface, workload distribution, UAV flight safety, data link bandwidth, user interface constructs, adaptive algorithms, air vehicle system integration, and target recognition. Finally, a vision for UAV As A Wingman is presented.

  9. [A research of letter color visibility in package insert information using simulator].

    PubMed

    Kamimura, Naoki; Kinoshita, Noriyuki; Onaga, Midori; Watanabe, Yurika; Ijuin, Kazushige; Shikamura, Yoshiaki; Negishi, Kenichi; Kaiho, Fusao; Ohta, Takafumi

    2012-01-01

    The package insert of a pharmaceutical drug is among the most important information sources pharmacists rely on to ensure patient safety. However, from a visibility standpoint, the character color, size, font, and so on vary from company to company and product to product. Poor visibility of this critical information could cause a serious accident. Moreover, from a universal design standpoint, package inserts with high visibility are required for people with color vision deficiency. The authors therefore selected package inserts carrying boxed warnings among the ethical pharmaceuticals most commonly stocked in current health insurance pharmacies and quantified their red coloring using a color meter. Using a color vision deficiency simulator to judge whether the red coloring remains highly visible to color vision deficient readers, we advocate what a suitable package insert should look like from a universal design standpoint.
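    One common way to build such a simulator check is to transform colors through a color vision deficiency model and inspect the result. The 3x3 protanopia (severity 1.0) matrix below is the widely reproduced one from the physiologically-based model of Machado, Oliveira, and Fernandes (2009); applying it to linear RGB approximates a protanope's percept. The warning-red value and the visibility threshold are illustrative, and this is not necessarily the simulator the authors used.

```python
# Simulate protanopia for a red warning color via a 3x3 matrix
# applied in linear RGB (Machado et al. 2009, severity 1.0).
PROTAN = [
    [ 0.152286, 1.052583, -0.204868],
    [ 0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
]

def simulate_protanopia(rgb):
    """Apply the CVD matrix to a linear RGB triple (components 0..1)."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in PROTAN)

warning_red = (0.9, 0.1, 0.1)  # illustrative boxed-warning red
sim = simulate_protanopia(warning_red)

# Under protanopia the red channel collapses: the simulated color is
# much darker and far less distinct from the paper background.
assert sim[0] < 0.5 * warning_red[0]
```

    A practical visibility check would then compute the contrast between the simulated letter color and the simulated background and compare it against a legibility threshold.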

  10. Design of a reading test for low-vision image warping

    NASA Astrophysics Data System (ADS)

    Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. Shane

    1993-08-01

    NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision -- maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.

  11. Design of a reading test for low vision image warping

    NASA Technical Reports Server (NTRS)

    Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. S.

    1993-01-01

    NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision - maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We will describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.

  12. Reduced vision in highly myopic eyes without ocular pathology: the ZOC-BHVI high myopia study.

    PubMed

    Jong, Monica; Sankaridurg, Padmaja; Li, Wayne; Resnikoff, Serge; Naidoo, Kovin; He, Mingguang

    2018-01-01

    The aim was to investigate the relationship of the magnitude of myopia with visual acuity in highly myopic eyes without ocular pathology. Twelve hundred and ninety-two highly myopic eyes (up to -6.00 DS both eyes, no astigmatic cut-off) with no ocular pathology from the ZOC-BHVI high myopia study in China, had cycloplegic refraction, followed by subjective refraction and visual acuities and axial length measurement. Two logistic regression models were undertaken to test the association of age, gender, refractive error, axial length and parental myopia with reduced vision. Mean group age was 19.0 ± 8.6 years; subjective spherical equivalent refractive error was -9.03 ± 2.73 D; objective spherical equivalent refractive error was -8.90 ± 2.60 D and axial length was 27.0 ± 1.3 mm. Using visual acuity, 82.4 per cent had normal vision, 16.0 per cent had mildly reduced vision, 1.2 per cent had moderately reduced vision, 0.3 per cent had severely reduced vision and no subjects were blind. The percentage with reduced vision increased with spherical equivalent to 74.5 per cent from -15.00 to -39.99 D, axial length to 67.7 per cent of eyes from 30.01 to 32.00 mm and age to 22.9 per cent of those 41 years and over. Spherical equivalent and axial length were significantly associated with reduced vision (p < 0.0001). Age and parental myopia were not significantly associated with reduced vision. Gender was significant for one model (p = 0.04). Mildly reduced vision is common in high myopia without ocular pathology and is strongly correlated with greater magnitudes of refractive error and axial length. Better understanding is required to minimise reduced vision in high myopes.

  13. Physiological modeling for detecting degree of perception of a color-deficient person.

    PubMed

    Rajalakshmi, T; Prince, Shanthi

    2017-04-01

Physiological modeling of the retina plays a vital role in the development of high-performance image processing methods to produce better visual perception. People with normal vision have the ability to discern different colors. The situation is different in the case of people with color blindness. The aim of this work is to develop a human visual system model for detecting the level of perception of people with red, green and blue deficiency by considering properties such as luminance and spatial and temporal frequencies. Simulation results show that in the photoreceptor, outer plexiform and inner plexiform layers, the energy and intensity levels of the red, green and blue components are significantly higher for a normal person than for dichromats. The proposed method shows, with supporting results, that people with red and blue color blindness cannot fully perceive red and blue.
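For readers wanting a concrete starting point, a standard way to simulate dichromatic perception (not the layered retinal model of the abstract above) is to transform colors into LMS cone space and collapse the missing cone's response. The sketch below uses the widely cited Viénot, Brettel and Mollon (1999) matrices for protanopia as an illustrative assumption:

```python
import numpy as np

# Linear-RGB <-> LMS transforms (Viénot, Brettel & Mollon, 1999).
RGB2LMS = np.array([[17.8824,   43.5161,  4.11935],
                    [ 3.45565,  27.1554,  3.86714],
                    [ 0.0299566, 0.184309, 1.46709]])
LMS2RGB = np.linalg.inv(RGB2LMS)

# Protanope projection: the missing L response is reconstructed from
# M and S so that whites and blues are (approximately) preserved.
PROTAN = np.array([[0.0, 2.02344, -2.52581],
                   [0.0, 1.0,      0.0],
                   [0.0, 0.0,      1.0]])

def simulate_protanopia(rgb):
    """Map one linear-RGB triplet to its simulated protanopic appearance."""
    lms = RGB2LMS @ np.asarray(rgb, dtype=float)
    return LMS2RGB @ (PROTAN @ lms)

# A saturated red collapses toward a dim yellow (R and G nearly equal),
# while neutral grays are left essentially unchanged.
print(simulate_protanopia([1.0, 0.0, 0.0]))
print(simulate_protanopia([1.0, 1.0, 1.0]))
```

Applying this per pixel previews how a protanope would see an image; analogous matrices exist for deuteranopia and tritanopia.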

  14. Improving the Flight Path Marker Symbol on Rotorcraft Synthetic Vision Displays

    NASA Technical Reports Server (NTRS)

    Szoboszlay, Zoltan P.; Hardy, Gordon H.; Welsh, Terence M.

    2004-01-01

    Two potential improvements to the flight path marker symbol were evaluated on a panel-mounted, synthetic vision, primary flight display in a rotorcraft simulation. One concept took advantage of the fact that synthetic vision systems have terrain height information available ahead of the aircraft. For this first concept, predicted altitude and ground track information was added to the flight path marker. In the second concept, multiple copies of the flight path marker were displayed at 3, 4, and 5 second prediction times as compared to a single prediction time of 3 seconds. Objective and subjective data were collected for eight rotorcraft pilots. The first concept produced significant improvements in pilot attitude control, ground track control, workload ratings, and preference ratings. The second concept did not produce significant differences in the objective or subjective measures.

  15. Reading with peripheral vision: a comparison of reading dynamic scrolling and static text with a simulated central scotoma.

    PubMed

    Harvey, Hannah; Walker, Robin

    2014-05-01

Horizontally scrolling text is, in theory, ideally suited to enhancing the viewing strategies recommended to improve reading performance under conditions of central vision loss such as macular disease, although it is largely unproven in this regard. This study investigated whether the use of scrolling text produced an observable improvement in reading performed under conditions of eccentric viewing in an artificial scotoma paradigm. Participants (n=17) read scrolling and static text with a central artificial scotoma controlled by an eye-tracker. There was an improvement in measures of reading accuracy and adherence to eccentric viewing strategies with scrolling text compared to static text. These findings illustrate the benefits of scrolling text as a potential reading aid for those with central vision loss. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Design of verification platform for wireless vision sensor networks

    NASA Astrophysics Data System (ADS)

    Ye, Juanjuan; Shang, Fei; Yu, Chuang

    2017-08-01

At present, the majority of research on wireless vision sensor networks (WVSNs) still remains in the software simulation stage, and very few verification platforms for WVSNs are available for use. This situation seriously restricts the transformation of WVSNs from theoretical research to practical application. Therefore, it is necessary to study the construction of a verification platform for WVSNs. This paper combines a wireless transceiver module, a visual information acquisition module and a power acquisition module to design a high-performance wireless vision sensor node built around an ARM11 microprocessor, and selects AODV as the routing protocol, to set up a verification platform called AdvanWorks for WVSNs. Experiments show that AdvanWorks can successfully achieve the functions of image acquisition, coding and wireless transmission, and can obtain effective distance parameters between nodes, which lays a good foundation for follow-up applications of WVSNs.

  17. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for higher-order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Second, a classificatory structure in which to cast perceptual organization research is proposed, to clarify both the nomenclature and the relationships among the many contributions. Third, perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  18. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  19. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  20. Predictive modeling of infrared detectors and material systems

    NASA Astrophysics Data System (ADS)

    Pinkie, Benjamin

    Detectors sensitive to thermal and reflected infrared radiation are widely used for night-vision, communications, thermography, and object tracking among other military, industrial, and commercial applications. System requirements for the next generation of ultra-high-performance infrared detectors call for increased functionality such as large formats (> 4K HD) with wide field-of-view, multispectral sensitivity, and on-chip processing. Due to the low yield of infrared material processing, the development of these next-generation technologies has become prohibitively costly and time consuming. In this work, it will be shown that physics-based numerical models can be applied to predictively simulate infrared detector arrays of current technological interest. The models can be used to a priori estimate detector characteristics, intelligently design detector architectures, and assist in the analysis and interpretation of existing systems. This dissertation develops a multi-scale simulation model which evaluates the physics of infrared systems from the atomic (material properties and electronic structure) to systems level (modulation transfer function, dense array effects). The framework is used to determine the electronic structure of several infrared materials, optimize the design of a two-color back-to-back HgCdTe photodiode, investigate a predicted failure mechanism for next-generation arrays, and predict the systems-level measurables of a number of detector architectures.

  1. Holodeck: Telepresence Dome Visualization System Simulations

    NASA Technical Reports Server (NTRS)

    Hite, Nicolas

    2012-01-01

This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360 degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure: a system of projectors that will work with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including the generation of fisheye images, distortion correction, and the development of a reliable content-generation pipeline.
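Generating dome-ready fisheye content amounts to choosing a projection model that maps view directions to image radii. As a hedged illustration, the equidistant model below is an assumption for the sketch, not a documented Holodeck choice:

```python
import math

# Equidistant fisheye projection sketch: a 3D view direction maps to an
# image point whose radius is proportional to its angle off the optical
# axis, r = f * theta, so a single frame covers a full hemisphere.
def fisheye_project(x, y, z, f=1.0):
    """Project direction (x, y, z), with z along the optical axis,
    to fisheye image coordinates (u, v)."""
    theta = math.atan2(math.hypot(x, y), z)  # angle off the axis
    phi = math.atan2(y, x)                   # azimuth around the axis
    return f * theta * math.cos(phi), f * theta * math.sin(phi)

print(fisheye_project(0.0, 0.0, 1.0))  # on-axis -> image center (0.0, 0.0)
print(fisheye_project(1.0, 0.0, 0.0))  # 90 degrees off-axis -> rim, u = pi/2
```

Warping such fisheye frames onto the dome interior then reduces to the projector-specific distortion correction the paper describes.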

  2. Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system

    NASA Astrophysics Data System (ADS)

    Hanna, Moheb M.; Buck, A. A.; Smith, R.

    1994-10-01

The paper presents a Petri net approach to modelling, monitoring and controlling the behavior of an FMS cell. The FMS cell described comprises a pick-and-place robot, a vision system, a CNC milling machine and three conveyors. The work illustrates how block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (fuzzy logic combined with Petri nets), extended with an artificial neural network (Fuzzy Neural Petri nets), to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control imprecise, vague and uncertain situations, and to determine the quality of the output product of an FMS cell.

  3. Trauma-Informed Part C Early Intervention: A Vision, A Challenge, A New Reality

    ERIC Educational Resources Information Center

    Gilkerson, Linda; Graham, Mimi; Harris, Deborah; Oser, Cindy; Clarke, Jane; Hairston-Fuller, Tody C.; Lertora, Jessica

    2013-01-01

    Federal directives require that any child less than 3 years old with a substantiated case of abuse be referred to the early intervention (EI) system. This article details the need and presents a vision for a trauma-informed EI system. The authors describe two exemplary program models which implement this vision and recommend steps which the field…

  4. Piloted Simulation of Various Synthetic Vision Systems Terrain Portrayal and Guidance Symbology Concepts for Low Altitude En-Route Scenario

    NASA Technical Reports Server (NTRS)

    Takallu, M. A.; Glaab, L. J.; Hughes, M. F.; Wong, D. T.; Bartolone, A. P.

    2008-01-01

In support of the NASA Aviation Safety Program's Synthetic Vision Systems Project, a series of piloted simulations were conducted to explore and quantify the relationship between candidate Terrain Portrayal Concepts and Guidance Symbology Concepts, specific to General Aviation. The experiment scenario was based on a low altitude en route flight in Instrument Meteorological Conditions in the central mountains of Alaska. A total of 18 general aviation pilots, with three levels of pilot experience, evaluated a test matrix of four terrain portrayal concepts and six guidance symbology concepts. Quantitative measures included various pilot/aircraft performance data, flight technical errors and flight control inputs. The qualitative measures included pilot comments and pilot responses to structured questionnaires covering perceived workload, subjective situation awareness, pilot preferences, and rare-event recognition. Statistically significant effects were found for both guidance symbology concepts and terrain portrayal concepts, but there were no significant interactions between them. Lower flight technical errors and increased situation awareness were achieved using Synthetic Vision Systems displays, as compared to the baseline Pitch/Roll Flight Director and Blue Sky Brown Ground combination. Overall, those guidance symbology concepts that combined a path-based guidance cue with a tunnel display performed better than the other guidance concepts.

  5. Recognition of simulated cyanosis by color-vision-normal and color-vision-deficient subjects.

    PubMed

    Dain, Stephen J

    2014-04-01

There are anecdotal reports that the recognition of cyanosis is difficult for some color-deficient observers. The chromaticity changes of blood with oxygenation in vitro lie close to the dichromatic confusion lines. The chromaticity changes of lips and nail beds measured in vivo are also generally aligned in the same way. Experiments involving visual assessment of cyanosis in vivo are fraught with technical and ethical difficulties. A single lower-face image of a healthy individual was digitally altered to produce levels of simulated cyanosis. The color change is essentially one of saturation. Some images with other color changes were also included to ensure that there was no propensity to identify those as cyanosed. The images were assessed for realism by a panel of four instructors from the NSW Ambulance Service training section. The images were displayed singly and the observer was required to identify whether the person was cyanosed or not. Color-vision-normal subjects comprised 32 experienced ambulance officers and 27 new recruits. Twenty-seven color-vision-deficient subjects (non-NSW Ambulance Service) were examined. The recruits were less accurate and slower at identifying the cyanosed images, and the color-vision-deficient subjects were less accurate and slower still. The identification of cyanosis is a skill that improves with training and is adversely affected in color-deficient observers.

  6. Vision-based control for flight relative to dynamic environments

    NASA Astrophysics Data System (ADS)

    Causey, Ryan Scott

The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capabilities through more advanced systems that are useful for missions that require autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation.
Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle during pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.

  7. The Ohio Contrast Cards: Visual Performance in a Pediatric Low-vision Site

    PubMed Central

    Hopkins, Gregory R.; Dougherty, Bradley E.; Brown, Angela M.

    2017-01-01

    SIGNIFICANCE This report describes the first clinical use of the Ohio Contrast Cards, a new test that measures the maximum spatial contrast sensitivity of low-vision patients who cannot recognize and identify optotypes and for whom the spatial frequency of maximum contrast sensitivity is unknown. PURPOSE To compare measurements of the Ohio Contrast Cards to measurements of three other vision tests and a vision-related quality-of-life questionnaire obtained on partially sighted students at Ohio State School for the Blind. METHODS The Ohio Contrast Cards show printed square-wave gratings at very low spatial frequency (0.15 cycle/degree). The patient looks to the left/right side of the card containing the grating. Twenty-five students (13 to 20 years old) provided four measures of visual performance: two grating card tests (the Ohio Contrast Cards and the Teller Acuity Cards) and two letter charts (the Pelli-Robson contrast chart and the Bailey-Lovie acuity chart). Spatial contrast sensitivity functions were modeled using constraints from the grating data. The Impact of Vision Impairment on Children questionnaire measured vision-related quality of life. RESULTS Ohio Contrast Card contrast sensitivity was always less than 0.19 log10 units below the maximum possible contrast sensitivity predicted by the model; average Pelli-Robson letter contrast sensitivity was near the model prediction, but 0.516 log10 units below the maximum. Letter acuity was 0.336 logMAR below the grating acuity results. The model estimated the best testing distance in meters for optimum Pelli-Robson contrast sensitivity from the Bailey-Lovie acuity as distance = 1.5 − logMAR for low-vision patients. Of the four vision tests, only Ohio Contrast Card contrast sensitivity was independently and statistically significantly correlated with students' quality of life. CONCLUSIONS The Ohio Contrast Cards combine a grating stimulus, a looking indicator behavior, and contrast sensitivity measurement. 
They show promise for the clinical objective of advising the patient and his/her caregivers about the success the patient is likely to enjoy in tasks of everyday life. PMID:28972542

  8. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.

    PubMed

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-11-30

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array.
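The sheet-of-light scanning described above rests on laser triangulation: the position of the laser spot on the PSD element encodes depth. The sketch below uses a simplified, hypothetical geometry (parallel laser plane, assumed focal length and baseline), not the parameters of the paper's rig:

```python
# Simplified triangulation geometry: the laser plane runs parallel to
# the camera's optical axis at baseline b, so a surface point lit by
# the sheet appears at lateral position x on the PSD element, and depth
# follows from similar triangles: Z = f * b / x.
F_MM = 16.0   # assumed lens focal length (mm)
B_MM = 100.0  # assumed laser-to-camera baseline (mm)

def depth_from_psd(x_mm):
    """Depth (mm) of the laser spot from its position on the PSD (mm)."""
    if x_mm <= 0:
        raise ValueError("spot must fall on the laser side of the axis")
    return F_MM * B_MM / x_mm

# One scan line: spots nearer the optical axis are farther away.
profile = [depth_from_psd(x) for x in (4.0, 2.0, 1.0)]
print(profile)  # → [400.0, 800.0, 1600.0]
```

Sweeping the sheet (or rotating the object) and stacking successive profiles yields the 3D surface; each PSD element in the array contributes one sample per profile, which is why more elements give a finer-grained 3D reconstruction.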

  9. The Effectiveness of Simulator Motion in the Transfer of Performance on a Tracking Task Is Influenced by Vision and Motion Disturbance Cues.

    PubMed

    Grundy, John G; Nazar, Stefan; O'Malley, Shannon; Mohrenshildt, Martin V; Shedden, Judith M

    2016-06-01

    To examine the importance of platform motion to the transfer of performance in motion simulators. The importance of platform motion in simulators for pilot training is strongly debated. We hypothesized that the type of motion (e.g., disturbance) contributes significantly to performance differences. Participants used a joystick to perform a target tracking task in a pod on top of a MOOG Stewart motion platform. Five conditions compared training without motion, with correlated motion, with disturbance motion, with disturbance motion isolated to the visual display, and with both correlated and disturbance motion. The test condition involved the full motion model with both correlated and disturbance motion. We analyzed speed and accuracy across training and test as well as strategic differences in joystick control. Training with disturbance cues produced critical behavioral differences compared to training without disturbance; motion itself was less important. Incorporation of disturbance cues is a potentially important source of variance between studies that do or do not show a benefit of motion platforms in the transfer of performance in simulators. Potential applications of this research include the assessment of the importance of motion platforms in flight simulators, with a focus on the efficacy of incorporating disturbance cues during training. © 2016, Human Factors and Ergonomics Society.

  10. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  11. Drone-Augmented Human Vision: Exocentric Control for Drones Exploring Hidden Areas.

    PubMed

    Erat, Okan; Isop, Werner Alexander; Kalkofen, Denis; Schmalstieg, Dieter

    2018-04-01

    Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., of exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision to provide a natural view into an otherwise occluded environment. The user's view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. This user interface is designed to reduce the cognitive load of the drone's flight control. The user can concentrate on the exploration of the inaccessible space, while flight control is largely delegated to the drone's autopilot system. We assess our system with a first experiment showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.

  12. Vision - Vision 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Brian E.; Oppel III, Fred J.

    2017-01-25

    This package contains modules that model a visual sensor in Umbra. It is typically used to represent eyesight of characters in Umbra. This library also includes the sensor property, seeable, and an Active Denial sensor.

  13. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size part equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (such as VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
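A Laplacian-of-Gaussian edge detector like the one demonstrated on this camera is easy to prototype in software before committing it to an FPGA. The sketch below is a one-dimensional Python illustration; the kernel size and sigma are arbitrary choices, and the actual hardware implementation would differ:

```python
import numpy as np

def log_kernel_1d(sigma, size):
    """Sampled 1D Laplacian-of-Gaussian (second derivative of a Gaussian),
    shifted to exactly zero mean so flat regions give zero response."""
    x = np.arange(size) - size // 2
    k = (x**2 - sigma**2) / sigma**4 * np.exp(-x**2 / (2 * sigma**2))
    return k - k.mean()

def zero_crossings(response, eps=1e-9):
    """Indices i where the response changes sign between samples i and i+1;
    tiny values are clamped to zero to suppress floating-point noise."""
    r = np.where(np.abs(response) < eps, 0.0, response)
    s = np.sign(r)
    return np.where(s[:-1] * s[1:] < 0)[0]

# A step edge between samples 9 and 10 of a 20-sample scan line:
signal = np.concatenate([np.zeros(10), np.ones(10)])
resp = np.convolve(signal, log_kernel_1d(sigma=1.0, size=7), mode="same")
print(zero_crossings(resp))  # the sign change sits exactly at the step
```

Running the same filter at several sigmas gives the multi-scale behavior the abstract refers to: coarse scales keep only strong edges, fine scales localize them.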

  14. Rotary acceleration of a subject inhibits choice reaction time to motion in peripheral vision

    NASA Technical Reports Server (NTRS)

    Borkenhagen, J. M.

    1974-01-01

Twelve pilots were tested in a rotation device with visual simulation, alone and in combination with rotary stimulation, in experiments with variable levels of acceleration and variable viewing angles, in a study of the effect of the subject's rotary acceleration on the choice reaction time for an accelerating target in peripheral vision. The pilots responded to the direction of the visual motion by moving a hand controller to the right or left. Visual-plus-rotary stimulation required a longer choice reaction time, which was inversely related to the level of acceleration and directly proportional to the viewing angle.

  15. Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace

    NASA Technical Reports Server (NTRS)

    1993-01-01

The symposium Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace was held at the NASA Lewis Research Center on March 30-31, 1993. The purpose of the symposium was to stimulate interdisciplinary thinking in the sciences and technologies which will be required for the exploration and development of space over the next thousand years. The keynote speakers were Hans Moravec, Vernor Vinge, Carol Stoker, and Myron Krueger. The proceedings consist of transcripts of the invited talks and the panel discussion by the invited speakers, summaries of workshop sessions, and contributed papers by the attendees.

  16. The impact on midlevel vision of statistically optimal divisive normalization in V1

    PubMed Central

    Coen-Cagli, Ruben; Schwartz, Odelia

    2013-01-01

    The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality. PMID:23857950
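The canonical divisive normalization referenced above can be summarized in a few lines. This is a generic Heeger-style textbook form with an assumed semi-saturation constant and uniform pooling weights, not the statistically optimal variant the paper derives:

```python
import numpy as np

def divisive_normalize(responses, sigma=0.1, weights=None):
    """Canonical divisive normalization: each unit's response is divided
    by a signal pooled over the population (its normalization pool).

    sigma is the semi-saturation constant; both it and the default
    uniform pooling weights are assumed values for illustration."""
    r = np.asarray(responses, dtype=float)
    w = np.full(r.shape, 1.0 / r.size) if weights is None else np.asarray(weights)
    pool = np.sqrt(np.sum(w * r**2))
    return r / (sigma + pool)

# Scaling the input tenfold changes each output by well under tenfold,
# and by the same factor for every unit, so the relative pattern of
# activity across the population is preserved.
low = divisive_normalize([1.0, 2.0, 4.0])
high = divisive_normalize([10.0, 20.0, 40.0])
print(np.round(high / low, 2))  # → [1.79 1.79 1.79]
```

Surround normalization in the paper works the same way, except that the pool is drawn from spatially neighboring units and, in the optimal variant, is gated by whether the surround is statistically homogeneous with the center.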

  17. Bioplausible multiscale filtering in retino-cortical processing as a mechanism in perceptual grouping.

    PubMed

    Nematzadeh, Nasim; Powers, David M W; Lewis, Trent W

    2017-12-01

    Why does our visual system fail to reconstruct reality when we look at certain patterns? Where do Geometrical illusions start to emerge in the visual pathway? How far can computational models of vision go toward detecting illusions with the same visual ability we have? This study addresses these questions by focusing on a specific underlying neural mechanism involved in our visual experiences that affects our final perception. Among the many types of visual illusion, 'Geometrical' and, in particular, 'Tilt Illusions' are especially important, being characterized by misperception of geometric patterns involving lines and tiles in combination with contrasting orientation, size or position. Over the last decade, many new neurophysiological experiments have led to new insights as to how, when and where retinal processing takes place, and into the encoding nature of the retinal representation that is sent to the cortex for further processing. Based on these neurobiological discoveries, we provide computer simulation evidence from modelling retinal ganglion cell responses to some complex Tilt Illusions, suggesting that the emergence of tilt in these illusions is partially related to the interaction of multiscale visual processing performed in the retina. The output of our low-level filtering model is presented for several types of Tilt Illusion, predicting that the final tilt percept arises from multiple-scale processing of Differences of Gaussians and the perceptual interaction of foreground and background elements. The model is a variation of the classical receptive field implementation for simple cells in early stages of vision, with the scales tuned to the object/texture sizes in the pattern. Our results suggest that this model has high potential for revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as 'Anchoring theory' and 'Perceptual grouping'.
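    The multiple-scale Difference-of-Gaussians filtering the model rests on can be sketched as follows. The kernel sizes, the 2:1 centre/surround ratio, and the 1D step-edge stimulus are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def dog_kernel(sigma_c, sigma_s, radius):
    # Difference of Gaussians: a narrow excitatory centre minus a wider
    # inhibitory surround; each is normalized to unit sum, so the DoG
    # sums to zero and gives no response to uniform regions.
    x = np.arange(-radius, radius + 1, dtype=float)
    def g(s):
        k = np.exp(-x ** 2 / (2 * s ** 2))
        return k / k.sum()
    return g(sigma_c) - g(sigma_s)

# Filter a 1D step edge at several scales, as in multiscale retinal models.
signal = np.r_[np.zeros(32), np.ones(32)]
responses = {s: np.convolve(signal, dog_kernel(s, 2 * s, radius=16), mode="same")
             for s in (1.0, 2.0, 4.0)}
# each response peaks near the edge and is ~0 far from it
```

    Combining such responses across scales tuned to the tile and line sizes of a pattern is the kind of interaction the model proposes as a source of the tilt percept.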

  18. Vision Integrating Strategies in Ophthalmology and Neurochemistry (VISION)

    DTIC Science & Technology

    2011-02-01

    in the above figure. We have already tested this virus in P23H Rhodopsin rat model of retinitis pigmentosa and found that it has a therapeutic...We have established three different mouse models of ocular injury with different injury-initiating mechanisms (i.e. optic nerve crush, retinal ...functionally and structurally rescue photoreceptor cells in rodent models of retinal degeneration. She brings expertise in gene therapy and in cellular

  19. Retinex at 50: color theory and spatial algorithms, a review

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2017-05-01

    Retinex Imaging shares two distinct elements: first, a model of human color vision; second, a spatial-imaging algorithm for making better reproductions. Edwin Land's 1964 Retinex Color Theory began as a model of human color vision of real complex scenes. He designed many experiments, such as Color Mondrians, to understand why retinal cone quanta catch fails to predict color constancy. Land's Retinex model used three spatial channels (L, M, S) that calculated three independent sets of monochromatic lightnesses. Land and McCann's lightness model used spatial comparisons followed by spatial integration across the scene. The parameters of their model were derived from extensive observer data. This work was the beginning of the second Retinex element, namely, using models of spatial vision to guide image reproduction algorithms. Today, there are many different Retinex algorithms. This special section, "Retinex at 50," describes a wide variety of them, along with their different goals, and the ground truths used to measure their success. This paper reviews (and provides links to) the original Retinex experiments and image-processing implementations. Observer matches (measuring appearances) have extended our understanding of how human spatial vision works. This paper describes a collection of very challenging datasets, accumulated by Land and McCann, for testing algorithms that predict appearance.
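    The "spatial comparisons followed by spatial integration" step can be illustrated with the classic ratio-product-reset rule along a single 1D path. The threshold value and the three-patch stimulus below are hypothetical, and a full Retinex averages many such paths rather than one.

```python
import numpy as np

def ratio_product_reset(lum, threshold=0.003):
    # One Land-McCann path: multiply edge ratios along the path and
    # reset the running product to 1 whenever it exceeds 1, so the
    # brightest patch encountered so far acts as the local "white".
    lum = np.asarray(lum, dtype=float)
    estimates = np.empty_like(lum)
    product = 1.0
    estimates[0] = product
    for i in range(1, len(lum)):
        ratio = lum[i] / lum[i - 1]
        if abs(np.log10(ratio)) < threshold:  # ignore gradual changes
            ratio = 1.0
        product *= ratio
        if product > 1.0:                     # reset at a new maximum
            product = 1.0
        estimates[i] = product
    return estimates

# gray, white, gray: the white patch anchors at 1.0 and the gray patch
# after it is estimated at 0.25 relative to that white
lightness = ratio_product_reset([0.25, 1.0, 0.25])
```

    The reset is what makes the estimate relative to the scene maximum rather than to absolute luminance, which is the mechanism behind the Color Mondrian constancy results.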

  20. Magician Simulator. A Realistic Simulator for Heterogenous Teams of Autonomous Robots

    DTIC Science & Technology

    2011-01-18

    IMU, and LIDAR systems for identifying and tracking mobile OOI at long range (>20m), providing early warnings and allowing neutralization from a... LIDAR and Computer Vision template-based feature tracking approaches. Mapping was solved through Multi-Agent particle-filter based Simultaneous...Localization and Mapping (SLAM). Our system contains two maps, a physical map and an influence map (location of hostile OOI, explored and unexplored

  1. Consequences of Incorrect Focus Cues in Stereo Displays

    PubMed Central

    Banks, Martin S.; Akeley, Kurt; Hoffman, David M.; Girshick, Ahna R.

    2010-01-01

    Conventional stereo displays produce images in which focus cues – blur and accommodation – are inconsistent with the simulated depth. We have developed new display techniques that allow the presentation of nearly correct focus. Using these techniques, we find that stereo vision is faster and more accurate when focus cues are mostly consistent with simulated depth; furthermore, viewers experience less fatigue when focus cues are correct or nearly correct. PMID:20523910

  2. The effect of body bias of the metal-oxide-semiconductor field-effect transistor in the resistive network on spatial current distribution in a bio-inspired complementary metal-oxide-semiconductor vision chip

    NASA Astrophysics Data System (ADS)

    Kong, Jae-Sung; Hyun, Hyo-Young; Seo, Sang-Ho; Shin, Jang-Kyoo

    2008-11-01

    Complementary metal-oxide-semiconductor (CMOS) vision chips for edge detection based on a resistive circuit have recently been developed. These chips enable neuromorphic systems of compact size, high operating speed, and low power dissipation. The output of the vision chip depends predominantly upon the electrical characteristics of the resistive network, which is built from resistive circuits. In this paper, the body effect of the metal-oxide-semiconductor field-effect transistor on current distribution in a resistive circuit is discussed with a simple model. In order to evaluate the model, two 160 × 120 CMOS vision chips were fabricated using a standard CMOS technology. The experimental results closely match our prediction.

  3. Grasping with the eyes of your hands: hapsis and vision modulate hand preference.

    PubMed

    Stone, Kayla D; Gonzalez, Claudia L R

    2014-02-01

    Right-hand preference has been demonstrated for visually guided reaching and grasping. Grasping, however, requires the integration of both visual and haptic cues. To what extent does vision influence hand preference for grasping? Is there a hand preference for haptically guided grasping? Two experiments were designed to address these questions. In Experiment 1, individuals were tested in a reaching-to-grasp task with vision (sighted condition) and with hapsis (blindfolded condition). Participants were asked to put together 3D models using building blocks scattered on a tabletop. The models were simple, composed of ten blocks of three different shapes. Starting condition (Vision-First or Hapsis-First) was counterbalanced among participants. Right-hand preference was greater in visually guided grasping but only in the Vision-First group. Participants who initially built the models while blindfolded (Hapsis-First group) used their right hand significantly less for the visually guided portion of the task. To investigate whether grasping using hapsis modifies subsequent hand preference, participants received an additional haptic experience in a follow-up experiment. While blindfolded, participants manipulated the blocks in a container for 5 min prior to the task. This additional experience did not affect right-hand use on visually guided grasping but had a robust effect on haptically guided grasping. Together, the results demonstrate first that hand preference for grasping is influenced by both vision and hapsis, and second, they highlight how flexible this preference could be when modulated by hapsis.

  4. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly creating random speckle patterns and point clouds from blade surfaces can be calculated by using speckle patterns based on the virtual stereo vision system. Secondly, boundary points are obtained in the way of varied step lengths according to curvature and are fitted to get a blade surface envelope with a cubic B-spline curve. Finally, the surface model of blades is established with the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.

  5. Leveraging the power of partnerships: spreading the vision for a population health care delivery model in western Kenya.

    PubMed

    Mercer, Tim; Gardner, Adrian; Andama, Benjamin; Chesoli, Cleophas; Christoffersen-Deb, Astrid; Dick, Jonathan; Einterz, Robert; Gray, Nick; Kimaiyo, Sylvester; Kamano, Jemima; Maritim, Beryl; Morehead, Kirk; Pastakia, Sonak; Ruhl, Laura; Songok, Julia; Laktabai, Jeremiah

    2018-05-08

    The Academic Model Providing Access to Healthcare (AMPATH) has been a model academic partnership in global health for nearly three decades, leveraging the power of a public-sector academic medical center and the tripartite academic mission - service, education, and research - to the challenges of delivering health care in a low-income setting. Drawing our mandate from the health needs of the population, we have scaled up service delivery for HIV care, and over the last decade, expanded our focus on non-communicable chronic diseases, health system strengthening, and population health more broadly. Success of such a transformative endeavor requires new partnerships, as well as a unification of vision and alignment of strategy among all partners involved. Leveraging the Power of Partnerships and Spreading the Vision for Population Health. We describe how AMPATH built on its collective experience as an academic partnership to support the public-sector health care system, with a major focus on scaling up HIV care in western Kenya, to a system poised to take responsibility for the health of an entire population. We highlight global trends and local contextual factors that led to the genesis of this new vision, and then describe the key tenets of AMPATH's population health care delivery model: comprehensive, integrated, community-centered, and financially sustainable with a path to universal health coverage. Finally, we share how AMPATH partnered with strategic planning and change management experts from the private sector to use a novel approach called a 'Learning Map®' to collaboratively develop and share a vision of population health, and achieve strategic alignment with key stakeholders at all levels of the public-sector health system in western Kenya. We describe how AMPATH has leveraged the power of partnerships to move beyond the traditional disease-specific silos in global health to a model focused on health systems strengthening and population health. 
Furthermore, we highlight a novel, collaborative tool to communicate our vision and achieve strategic alignment among stakeholders at all levels of the health system. We hope this paper can serve as a roadmap for other global health partners to develop and share transformative visions for improving population health globally.

  6. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
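    The Extended Kalman Filter at the core of such sensor fusion can be illustrated in scalar form. This is a generic textbook measurement update, not the paper's full multi-sensor filter, and the heading values and variances below are made up for the example.

```python
def kalman_update(x, P, z, R):
    # Scalar Kalman measurement update: fuse a prior estimate (x, P)
    # with a measurement z of variance R.
    K = P / (P + R)          # Kalman gain
    return x + K * (z - x), (1 - K) * P

# Prior heading from MEMS sensors, corrected by a DGPS/Vision-style fix.
x, P = kalman_update(10.0, 4.0, 12.0, 1.0)
# the fused estimate moves toward the more accurate measurement, and
# the posterior variance drops below both input variances
```

    Treating the DGPS/Vision output as one more measurement z in this update is what lets it act as a "virtual additional navigation sensor" alongside the inertial and magnetic ones.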

  7. Using an Augmented Reality Device as a Distance-based Vision Aid-Promise and Limitations.

    PubMed

    Kinateder, Max; Gualtieri, Justin; Dunn, Matt J; Jarosz, Wojciech; Yang, Xing-Dong; Cooper, Emily A

    2018-06-06

    For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids. The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss. An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the efficacy of the application to improve vision: an exploratory study with four visually impaired participants and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying the location, pose and gesture of a person, identifying objects, and moving around in an unfamiliar space). Participants' accuracy and confidence were compared on these tasks with and without augmented vision, as well as their subjective responses about ease of mobility. In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and to a lesser degree in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group. Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. 
Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.

  8. The loss and recovery of vertebrate vision examined in microplates.

    PubMed

    Thorn, Robert J; Clift, Danielle E; Ojo, Oladele; Colwill, Ruth M; Creton, Robbert

    2017-01-01

    Regenerative medicine offers potentially ground-breaking treatments of blindness and low vision. However, as new methodologies are developed, a critical question will need to be addressed: how do we monitor in vivo for functional success? In the present study, we developed novel behavioral assays to examine vision in a vertebrate model system. In the assays, zebrafish larvae are imaged in multiwell or multilane plates while various red, green, blue, yellow or cyan objects are presented to the larvae on a computer screen. The assays were used to examine a loss of vision at 4 or 5 days post-fertilization and a gradual recovery of vision in subsequent days. The developed assays are the first to measure the loss and recovery of vertebrate vision in microplates and provide an efficient platform to evaluate novel treatments of visual impairment.

  9. Vision in laboratory rodents-Tools to measure it and implications for behavioral research.

    PubMed

    Leinonen, Henri; Tanila, Heikki

    2017-07-29

    Mice and rats are nocturnal mammals and their vision is specialized for detection of motion and contrast in dim light conditions. These species possess a large proportion of UV-sensitive cones in their retinas, and the majority of their optic nerve axons target the superior colliculus rather than the visual cortex. Therefore, it was a widely held belief that laboratory rodents hardly utilize vision during day-time behavior. This dogma is being questioned as accumulating evidence suggests that laboratory rodents are able to perform complex visual functions, such as perceiving subjective contours, and that declined vision may affect their performance in many behavioral tasks. For instance, genetic engineering may have unexpected consequences on vision, as mouse models of Alzheimer's and Huntington's diseases have declined visual function. Rodent vision can be tested in numerous ways using operant training or reflex-based behavioral tasks, or alternatively using electrophysiological recordings. In this article, we first provide a summary of the visual system and explain the characteristics unique to rodents. Then, we present well-established techniques to test rodent vision, with an emphasis on pattern vision: the visual water test, the optomotor reflex test, pattern electroretinography and pattern visual evoked potentials. Finally, we highlight the importance of visual phenotyping in rodents. As the number of genetically engineered rodent models and the volume of behavioral testing increase simultaneously, the possibility of visual dysfunction needs to be addressed. Neglect in this matter potentially leads to crude biases in the field of neuroscience and beyond.

  10. Why is tractable vision loss in older people being missed? Qualitative study.

    PubMed

    Kharicha, Kalpa; Iliffe, Steve; Myerson, Sybil

    2013-07-16

    There is compelling evidence that there is substantial undetected vision loss amongst older people. Early recognition of undetected vision loss and timely referral for treatment might be possible within general practice, but methods of identifying those with unrecognised vision loss and persuading them to take up services that will potentially improve their eyesight and quality of life are not well understood. Population screening does not lead to improved vision in the older population. The aim of this study is to understand why older people with vision loss respond (or not) to their deteriorating eyesight. Focus groups and interviews were carried out with 76 people aged 65 and over from one general practice in London who had taken part in an earlier study of health risk appraisal. An analytic induction approach was used to analyse the data. Three polarised themes emerged from the groups and interviews. 1) The capacity of individuals to take decisions and act on them effectively versus a collection of factors which acted as obstacles to older people taking care of their eyesight. 2) The belief that prevention is better than cure versus the view that deteriorating vision is an inevitable part of old age. 3) The incongruence between the professionalism and personalised approach of opticians and the commercialisation of their services. The reasons why older people may not seek help for deteriorating vision can be explained in a model in which psychological attributes, costs to the individual and judgments about normal ageing interact. Understanding this model may help clinical decision making and health promotion efforts.

  11. Algorithmic commonalities in the parallel environment

    NASA Technical Reports Server (NTRS)

    Mcanulty, Michael A.; Wainer, Michael S.

    1987-01-01

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.

  12. Application of Multi-task Lasso Regression in the Stellar Parametrization

    NASA Astrophysics Data System (ADS)

    Chang, L. N.; Zhang, P. A.

    2015-01-01

    Multi-task learning approaches have attracted increasing attention in the fields of machine learning, computer vision, and artificial intelligence. By utilizing the correlations between tasks, learning multiple related tasks simultaneously is better than learning each task independently. An efficient multi-task Lasso (Least Absolute Shrinkage and Selection Operator) regression algorithm is proposed in this paper to estimate the physical parameters of stellar spectra. It not only makes different physical parameters share common features, but also effectively preserves their own peculiar features. Experiments were done on the ELODIE data simulated with the stellar atmospheric simulation model, and on the data released by the Sloan Digital Sky Survey (SDSS). The precision of the model is better than those of the methods in the related literature, especially for the surface gravity (lg g) and the chemical abundance ([Fe/H]). In the experiments, we changed the resolution of the spectrum and applied noise with different signal-to-noise ratios (SNR) to the spectrum, so as to illustrate the stability of the model. The results show that the model is influenced by both the resolution and the noise, but the influence of the noise is larger than that of the resolution. In general, the multi-task Lasso regression algorithm is easy to operate, has strong stability, and can improve the overall accuracy of the model.
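    The joint feature sharing that distinguishes multi-task Lasso from fitting each parameter separately comes from its row-wise (L2,1) shrinkage, sketched below. The weight matrix and regularization strength are illustrative values, and this is only the proximal step of a full solver.

```python
import numpy as np

def row_soft_threshold(W, lam):
    # Proximal operator of the L2,1 penalty behind multi-task Lasso:
    # shrink each coefficient row (one feature across all tasks) by lam,
    # zeroing rows whose norm falls below lam, so a feature is kept or
    # dropped jointly for every task.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale

W = np.array([[3.0, 4.0],    # strong feature: row norm 5, shrunk not zeroed
              [0.1, 0.2]])   # weak feature: zeroed for both tasks
W_shrunk = row_soft_threshold(W, lam=1.0)
```

    Because whole rows are zeroed together, the selected spectral features are common to all physical parameters, while the surviving nonzero values remain task-specific.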

  13. Rolling friction and energy dissipation in a spinning disc

    PubMed Central

    Ma, Daolin; Liu, Caishan; Zhao, Zhen; Zhang, Hongjian

    2014-01-01

    This paper presents the results of both experimental and theoretical investigations for the dynamics of a steel disc spinning on a horizontal rough surface. With a pair of high-speed cameras, a stereoscopic vision method is adopted to perform omnidirectional measurements for the temporal evolution of the disc's motion. The experiment data allow us to detail the dynamics of the disc, and consequently to quantify its energy. From our experimental observations, it is confirmed that rolling friction is a primary factor responsible for the dissipation of the energy. Furthermore, a mathematical model, in which the rolling friction is characterized by a resistance torque proportional to the square of precession rate, is also proposed. By employing the model, we perform qualitative analysis and numerical simulations. Both of them provide results that precisely agree with our experimental findings. PMID:25197246
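    A resistance torque proportional to the square of the precession rate yields a spin-down law that is easy to check numerically. The constants below are arbitrary, and the equation is written per unit moment of inertia; this sketch is not the paper's full disc model.

```python
def spin_down(omega0, k, dt=1e-4, t_end=2.0):
    # Forward-Euler integration of dW/dt = -k * W**2, i.e. a resistance
    # torque proportional to the square of the precession rate W
    # (per unit moment of inertia).
    omega, t = omega0, 0.0
    while t < t_end:
        omega -= k * omega ** 2 * dt
        t += dt
    return omega

# closed form for comparison: W(t) = W0 / (1 + k * W0 * t)
omega0, k, t_end = 50.0, 0.1, 2.0
numeric = spin_down(omega0, k, t_end=t_end)
exact = omega0 / (1 + k * omega0 * t_end)
```

    The hyperbolic decay 1/(1 + k*W0*t), rather than an exponential, is the signature of the quadratic-in-rate friction model that the experiments support.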

  14. A multi-stage color model revisited: implications for a gene therapy cure for red-green colorblindness.

    PubMed

    Mancuso, Katherine; Mauck, Matthew C; Kuchenbecker, James A; Neitz, Maureen; Neitz, Jay

    2010-01-01

    In 1993, DeValois and DeValois proposed a 'multi-stage color model' to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson's Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined in a later opponent organization, which has been the accepted dogma in color vision. DeValois' model attempts to satisfy the long-remaining question of how the visual system separates luminance information from color, but what are the cellular mechanisms that establish the complicated neural wiring and higher-order operations required by the Multi-stage Model? During the last decade and a half, results from molecular biology have shed new light on the evolution of primate color vision, thus constraining the possibilities for the visual circuits. The evolutionary constraints allow for an extension of DeValois' model that is more explicit about the biology of color vision circuitry, and it predicts that human red-green colorblindness can be cured using a retinal gene therapy approach to add the missing photopigment, without any additional changes to the post-synaptic circuitry.

  15. A Logical Basis In The Layered Computer Vision Systems Model

    NASA Astrophysics Data System (ADS)

    Tejwani, Y. J.

    1986-03-01

    In this paper a four-layer computer vision system model is described. The model uses a finite memory scratch pad. In this model planar objects are defined as predicates. Predicates are relations on a k-tuple. The k-tuple consists of primitive points and relationships between primitive points. The relationships between points can be of the direct type or the indirect type. Entities are goals which are satisfied by a set of clauses. The grammar used to construct these clauses is examined.

  16. Tensor network simulation of QED on infinite lattices: Learning from (1+1)d, and prospects for (2+1)d

    NASA Astrophysics Data System (ADS)

    Zapp, Kai; Orús, Román

    2017-06-01

    The simulation of lattice gauge theories with tensor network (TN) methods is becoming increasingly fruitful. The vision is that such methods will, eventually, be used to simulate theories in (3+1) dimensions in regimes difficult for other methods. So far, however, TN methods have mostly simulated lattice gauge theories in (1+1) dimensions. The aim of this paper is to explore the simulation of quantum electrodynamics (QED) on infinite lattices with TNs, i.e., fermionic matter fields coupled to a U(1) gauge field, directly in the thermodynamic limit. With this idea in mind we first consider a gauge-invariant infinite density matrix renormalization group simulation of the Schwinger model, i.e., QED in (1+1)d. After giving a precise description of the numerical method, we benchmark our simulations by computing the subtracted chiral condensate in the continuum, in good agreement with other approaches. Our simulations of the Schwinger model allow us to build intuition about how a simulation should proceed in (2+1) dimensions. Based on this, we propose a variational ansatz using infinite projected entangled pair states (PEPS) to describe the ground state of (2+1)d QED. The ansatz includes U(1) gauge symmetry at the level of the tensors, as well as fermionic (matter) and bosonic (gauge) degrees of freedom both at the physical and virtual levels. We argue that all the necessary ingredients for the simulation of (2+1)d QED are, a priori, already in place, paving the way for future upcoming results.

  17. Heuristics in primary care for recognition of unreported vision loss in older people: a technology development study.

    PubMed

    Wijeyekoon, Skanda; Kharicha, Kalpa; Iliffe, Steve

    2015-09-01

    To evaluate heuristics (rules of thumb) for recognition of undetected vision loss in older patients in primary care. Vision loss is associated with ageing, and its prevalence is increasing. Visual impairment has a broad impact on health, functioning and well-being. Unrecognised vision loss remains common, and screening interventions have yet to reduce its prevalence. An alternative approach is to enhance practitioners' skills in recognising undetected vision loss, by building a more detailed picture of those who are likely not to act on vision changes, report symptoms or have eye tests. This paper describes a qualitative technology development study to evaluate heuristics for recognition of undetected vision loss in older patients in primary care. Using a previous modelling study, two heuristics in the form of mnemonics were developed to aid pattern recognition and allow general practitioners to identify potential cases of unreported vision loss. These heuristics were then analysed with experts. Findings: It was concluded that their implementation in modern general practice was unsuitable and that an alternative solution should be sought.

  18. Green Infrastructure Simulation and Optimization to Achieve Combined Sewer Overflow Reductions in Philadelphia's Mill Creek Sewershed

    NASA Astrophysics Data System (ADS)

    Cohen, J. S.; McGarity, A. E.

    2017-12-01

    Mass deployment of green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. To bring more physical and social realism to previous work that used simulation-optimization techniques to produce GSI deployment strategies (McGarity et al., 2016), this study's models incorporated land use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to derive formulas describing annual CSO reduction in the sewershed as a function of the deployed GSI practices. These non-linear hydrologic response formulas were then implemented in the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices at the watershed scale. Saturating the corridor with GSI would not only reduce CSOs from the sewershed into the Schuylkill River but also yield ancillary social and economic benefits. The magnitude of these ancillary benefits varies with the type of GSI practice and the land use in which the GSI is implemented, so the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation). The results give a GSI deployment strategy that achieves desired annual CSO reductions at minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land use types.
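The pipeline this abstract describes — fit a saturating (non-linear) CSO-reduction response per land use from simulation output, then choose the cheapest deployment meeting a reduction target — can be sketched as follows. All coefficients, land-use parameters, and the exhaustive search are illustrative assumptions, not values or methods from the StormWISE study:

```python
import itertools
import math

# Hypothetical saturating response fitted to simulation output: annual CSO
# reduction (million gallons) from x acres of GSI in one land use, with
# maximum achievable reduction a and effectiveness rate b.
def cso_reduction(x, a, b):
    return a * (1.0 - math.exp(-b * x))

# Illustrative (a, b, cost per acre in $M) parameters per land use.
LAND_USES = {
    "residential":    (30.0, 0.15, 0.8),
    "commercial":     (20.0, 0.25, 1.2),
    "industrial":     (15.0, 0.10, 1.0),
    "transportation": (25.0, 0.20, 0.9),
}

def cheapest_deployment(target, step=5.0, max_acres=40.0):
    """Exhaustive search for the minimum-cost acreage mix whose combined
    annual CSO reduction meets `target` (a constrained optimization that
    StormWISE would solve with a proper mathematical program)."""
    levels = [i * step for i in range(int(max_acres / step) + 1)]
    best_cost, best_plan = None, None
    for combo in itertools.product(levels, repeat=len(LAND_USES)):
        reduction = sum(cso_reduction(x, a, b)
                        for x, (a, b, _) in zip(combo, LAND_USES.values()))
        if reduction < target:
            continue  # infeasible: CSO-reduction constraint not met
        cost = sum(x * c for x, (_, _, c) in zip(combo, LAND_USES.values()))
        if best_cost is None or cost < best_cost:
            best_cost, best_plan = cost, dict(zip(LAND_USES, combo))
    return best_cost, best_plan
```

A production model would fit `a` and `b` per land use by regression on SWMM runs and replace the grid search with a non-linear programming solver.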

  19. Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space

    NASA Astrophysics Data System (ADS)

    Jun, Chen; Wenjun, Hou; Qing, Sheng

    After studying image segmentation, the CamShift target tracking algorithm, and a stereo vision model of space, an improved algorithm based on frame differencing and a new spatial point-positioning model were proposed, and a binocular visual motion tracking system was constructed to verify the improved algorithm and the new model. The problems of detecting and tracking the spatial position and pose of the hand have been solved.
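The core of any such spatial point-positioning model is triangulation from a calibrated camera pair. A minimal sketch for the rectified (parallel-axis) case — the camera parameters below are chosen for illustration, not taken from the paper:

```python
def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Recover 3D coordinates (left-camera frame, metres) of a point seen
    at pixel column xl in the left image and xr in the right image of a
    rectified stereo pair: focal length f (pixels), baseline (metres),
    principal point (cx, cy)."""
    d = xl - xr                 # disparity in pixels
    if d <= 0:
        raise ValueError("point must have positive disparity")
    Z = f * baseline / d        # depth: Z = f * B / d
    X = (xl - cx) * Z / f       # lateral offset from the left optical axis
    Y = (y - cy) * Z / f        # vertical offset
    return X, Y, Z
```

For example, with f = 700 px, a 0.12 m baseline, and principal point (320, 240), a hand feature at column 400 in the left image and 330 in the right has disparity 70 px and therefore lies 1.2 m from the cameras.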

  20. Modeling human behaviors and reactions under dangerous environment.

    PubMed

    Kang, J; Wright, D K; Qin, S F; Zhao, Y

    2005-01-01

    This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system uses the latest 3D computer animation techniques, combined with artificial intelligence, robotics, and psychology, to model human behavior, reactions, and decision making under expected and unexpected dangers in real time in virtual environments. Development of the system includes: classification of the conscious/subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. The programming is subdivided into modeling dangerous events, modeling the characters' perceptions, decision making, movements, and interaction with the environment, and setting up the virtual cameras. Real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, safety planning in chemical factories, and the design of buildings, airplanes, ships, and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges are the synchronization of motion and intelligence; the accurate modeling of human vision, smell, touch, and hearing; and the diversity and effects of emotion and personality in decision making. Three types of software platforms could be employed to realize motion and intelligence within one system, and their advantages and disadvantages are discussed.
