NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body skeleton model automatically. The algorithm is based on analysis of the human shape: we decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. The algorithm has been applied in a context where the user stands in front of a stereo camera pair. The process is completed once the user assumes a predefined initial posture, which allows the main joints to be identified and the human model to be constructed. Using this model, the initialization problem of a vision-based markerless human motion capture system is solved.
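The curvature-based decomposition can be illustrated with a small sketch. Here central finite differences stand in for the analytic B-spline derivatives, and `contour_curvature` is an illustrative helper, not the authors' implementation:

```python
import numpy as np

def contour_curvature(x, y):
    """Signed curvature of a parametric contour sampled at equally
    spaced parameter values, via central finite differences:
    k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    dx = np.gradient(x)
    dy = np.gradient(y)
    ddx = np.gradient(dx)
    ddy = np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check on a circle of radius 2: curvature should be ~1/2 everywhere.
t = np.linspace(0, 2 * np.pi, 400)
k = contour_curvature(2 * np.cos(t), 2 * np.sin(t))
```

On a real silhouette, the extrema of this curvature signal mark candidate cut points between body parts (neck, armpits, crotch), which is the role the B-spline curvature analysis plays in the paper.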
Vision-based navigation in a dynamic environment for virtual human
NASA Astrophysics Data System (ADS)
Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu
2004-06-01
Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. The behavior model is divided into three modules: vision, global planning and local planning. Vision is the only channel through which the virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a path for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
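The global-planning step relies on the standard A* algorithm. A minimal grid version, with Manhattan distance as the admissible heuristic, can be sketched as follows (the grid and function names are illustrative, not taken from the Smart Human System):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells with 1 are obstacles.
    Manhattan distance is the admissible heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nb, float("inf")):
                    best_g[nb] = ng
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb, path + [nb]))
    return None  # no path exists

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = a_star(grid, (0, 0), (3, 3))
```

D* extends this idea by repairing the plan incrementally when the environment changes, which is why the authors pair the two algorithms for dynamic scenes.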
ERIC Educational Resources Information Center
Goldman, Martin
1985-01-01
An elementary physical model of cone receptor cells is explained and applied to complexities of human color vision. One-, two-, and three-receptor systems are considered, with the latter shown to be the best model for the human eye. Color blindness is also discussed. (DH)
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati
2012-01-01
Ergonomic evaluation of visual demands becomes crucial for operators/users when rapid decision making is needed under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of a pilot's vision in a jet aircraft in a virtual environment, demonstrating how the vision analysis tools of digital human modeling software can be used effectively for such a study. Three dynamic digital pilot models, representative of the smallest, average and largest Indian pilot population, were generated from an anthropometric database and interfaced with a digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools such as view cones, eye view windows, blind spot areas, obscuration zones and reflection zones were employed during evaluation of the visual fields. The vision analysis tool was also used to study kinematic changes of the pilot's body joints during a simulated gazing activity. From the present study, it can be concluded that the vision analysis tools of digital human modeling software are very effective for evaluating the position and alignment of displays and controls in a workstation, based on their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.
Computational models of human vision with applications
NASA Technical Reports Server (NTRS)
Wandell, B. A.
1985-01-01
Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the resulting spatial frequency spectra are similar to those of actual cone mosaics. The Hartley transform was evaluated as a tool of image processing, and it is suggested that it could be used in signal-processing applications such as image processing.
Different ranking of avian colors predicted by modeling of retinal function in humans and birds.
Håstad, Olle; Odeen, Anders
2008-06-01
Only during the past decade have vision-system-neutral methods become common practice in studies of animal color signals. Consequently, much of the current knowledge on sexual selection is based directly or indirectly on human vision, which may or may not emphasize spectral information in a signal differently from the intended receiver. In an attempt to quantify this discrepancy, we used retinal models to test whether human and bird vision rank plumage colors similarly. Of 67 species, human and bird models disagreed in 26 as to which pair of patches in the plumage provides the strongest color contrast or which male in a random pair is the more colorful. These results were only partly attributable to human UV blindness. Despite confirming a strong correlation between avian and human color discrimination, we conclude that a significant proportion of the information in avian visual signals may be lost in translation.
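Retinal models of this kind are typically receptor-noise-limited: the discriminability of two color stimuli, in just-noticeable differences, follows from the log ratios of the photoreceptor quantum catches and the per-channel noise. A trichromatic sketch of that formula, in the style of Vorobyev and Osorio's model (the quantum catches and Weber fractions below are made-up illustrative numbers):

```python
import math

def delta_s(qa, qb, noise):
    """Chromatic distance (in JNDs) between stimuli A and B under a
    receptor-noise-limited model for a trichromat.
    qa, qb: quantum catches of the three receptor classes.
    noise:  per-channel Weber fractions e1..e3."""
    df = [math.log(a / b) for a, b in zip(qa, qb)]
    e1, e2, e3 = noise
    num = (e1**2 * (df[2] - df[1])**2
           + e2**2 * (df[2] - df[0])**2
           + e3**2 * (df[0] - df[1])**2)
    den = (e1 * e2)**2 + (e1 * e3)**2 + (e2 * e3)**2
    return math.sqrt(num / den)

# A pure intensity change (every catch doubled) is achromatic and scores 0:
print(delta_s((1.0, 2.0, 3.0), (2.0, 4.0, 6.0), (0.05, 0.05, 0.1)))  # → 0.0
```

Because only catch ratios matter, the metric ignores achromatic changes, which is what makes it usable for ranking color contrasts across visual systems with different receptor sets.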
A physiologically-based model for simulation of color vision deficiency.
Machado, Gustavo M; Oliveira, Manuel M; Fernandes, Leandro A F
2009-01-01
Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color and visualization-related tasks. This has a significant impact on their private and professional lives. We present a physiologically-based model for simulating color vision. Our model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. We have validated the proposed model through an experimental evaluation involving groups of color vision deficient individuals and normal color vision ones. Our model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals.
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
2004-10-01
The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities in human vision could in the near future be realized by a new diffractive-optical hardware design of optical imaging sensors: (1) illuminant-adaptive RGB-based color vision, (2) monocular 3D vision based on RGB data processing, and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye has specific diffractive optical elements (DOEs) in aperture and image space, and seems to execute the three jobs at, or not far behind, the loci of the images of objects.
Kriegeskorte, Nikolaus
2015-11-24
Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more human-centered, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status in the preceding second. A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and the visual information integrality of a curve are significant factors for drivers' perception-response time.
Focal damage to macaque photoreceptors produces persistent visual loss
Strazzeri, Jennifer M.; Hunter, Jennifer J.; Masella, Benjamin D.; Yin, Lu; Fischer, William S.; DiLoreto, David A.; Libby, Richard T.; Williams, David R.; Merigan, William H.
2014-01-01
Insertion of light-gated channels into inner retina neurons restores neural light responses, light-evoked potentials, visual optomotor responses and visually-guided maze behavior in mice blinded by retinal degeneration. This method of vision restoration bypasses damaged outer retina, providing stimulation directly to retinal ganglion cells in inner retina. The approach is similar to that of electronic visual prostheses, but may offer some advantages, such as avoidance of complex surgery and direct targeting of many thousands of neurons. However, the promise of this technique for restoring human vision remains uncertain because rodent models, in which it has been largely developed, are not ideal for evaluating visual perception. On the other hand, psychophysical vision studies in macaque can be used to evaluate different approaches to vision restoration in humans. Furthermore, it has not been possible to test vision restoration in macaques, the optimal model for human-like vision, because there has been no macaque model of outer retina degeneration. In this study, we describe development of a macaque model of photoreceptor degeneration that can in future studies be used to test restoration of perception by visual prostheses. Our results show that perceptual deficits caused by focal light damage are restricted to locations at which photoreceptors are damaged, that optical coherence tomography (OCT) can be used to track such lesions, and that adaptive optics retinal imaging, which we recently used for in vivo recording of ganglion cell function, can be used in future studies to examine these lesions. PMID:24316158
Retinex at 50: color theory and spatial algorithms, a review
NASA Astrophysics Data System (ADS)
McCann, John J.
2017-05-01
Retinex imaging comprises two distinct elements: first, a model of human color vision; second, a spatial-imaging algorithm for making better reproductions. Edwin Land's 1964 Retinex Color Theory began as a model of human color vision of real complex scenes. He designed many experiments, such as Color Mondrians, to understand why retinal cone quanta catch fails to predict color constancy. Land's Retinex model used three spatial channels (L, M, S) that calculated three independent sets of monochromatic lightnesses. Land and McCann's lightness model used spatial comparisons followed by spatial integration across the scene. The parameters of their model were derived from extensive observer data. This work was the beginning of the second Retinex element, namely, using models of spatial vision to guide image reproduction algorithms. Today, there are many different Retinex algorithms. This special section, "Retinex at 50," describes a wide variety of them, along with their different goals, and the ground truths used to measure their success. This paper reviews (and provides links to) the original Retinex experiments and image-processing implementations. Observer matches (measuring appearances) have extended our understanding of how human spatial vision works. This paper describes a collection of very challenging datasets, accumulated by Land and McCann, for testing algorithms that predict appearance.
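The spatial Retinex algorithms reviewed here vary widely, but many modern variants reduce to comparing each point against a spatial surround. A deliberately minimal single-scale sketch (a box blur stands in for the Gaussian surround common in modern implementations; this is not Land and McCann's ratio-reset path model):

```python
import numpy as np

def single_scale_retinex(channel, radius=2):
    """Minimal single-scale Retinex for one channel: log signal minus
    log of a local average. The surround is a box blur of the given
    radius, computed with edge padding."""
    h, w = channel.shape
    padded = np.pad(channel, radius, mode="edge")
    surround = np.zeros_like(channel, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            surround += padded[radius + dy : radius + dy + h,
                               radius + dx : radius + dx + w]
    surround /= (2 * radius + 1) ** 2
    # Small epsilon keeps the logs finite on dark pixels.
    return np.log(channel + 1e-6) - np.log(surround + 1e-6)

# A uniform field has no local contrast, so its Retinex output is ~0:
flat = np.full((16, 16), 0.5)
out = single_scale_retinex(flat)
```

Run per L, M, S channel, this produces the independent "monochromatic lightness" maps the abstract describes; the many Retinex variants differ mainly in how the surround is defined and combined.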
Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András
2017-07-01
The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors do not explicitly take dog-human differences in visual perception into consideration when designing their experiments. With an image-manipulation program we altered stationary images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software shows the effects of their lower visual acuity and brightness discrimination, too. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing or glancing to the left or right side. Half of the pictures were shown after they were altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with a slower response time than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between the perceptual abilities of dogs and humans by developing visual stimuli that fit dogs' visual capabilities more appropriately.
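The kind of image transformation described can be approximated in a few lines. Collapsing the red and green channels to their mean is a crude stand-in for dichromacy; the actual tool also reduces acuity and brightness discrimination and would use a proper colorimetric transform, so the channel arithmetic here is an illustrative assumption:

```python
import numpy as np

def dog_vision_approx(rgb):
    """Rough dichromatic rendering of an RGB image (values in [0, 1]):
    red and green are replaced by their mean, standing in for a single
    long-wavelength channel; blue is kept as-is."""
    out = rgb.astype(float)
    rg = (out[..., 0] + out[..., 1]) / 2.0
    out[..., 0] = rg
    out[..., 1] = rg
    return out

img = np.zeros((2, 2, 3))
img[0, 0] = [1.0, 0.0, 0.0]   # pure red
img[0, 1] = [0.0, 1.0, 0.0]   # pure green
sim = dog_vision_approx(img)
```

Under this transform a pure red and a pure green patch become identical, which illustrates why red/green cues that are salient to human observers can be invisible in the dog-vision rendering.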
TOXICITY TESTING IN THE 21ST CENTURY: A VISION AND A STRATEGY
Krewski, Daniel; Acosta, Daniel; Andersen, Melvin; Anderson, Henry; Bailar, John C.; Boekelheide, Kim; Brent, Robert; Charnley, Gail; Cheung, Vivian G.; Green, Sidney; Kelsey, Karl T.; Kerkvliet, Nancy I.; Li, Abby A.; McCray, Lawrence; Meyer, Otto; Patterson, Reid D.; Pennie, William; Scala, Robert A.; Solomon, Gina M.; Stephens, Martin; Yager, James; Zeise, Lauren
2015-01-01
With the release of the landmark report Toxicity Testing in the 21st Century: A Vision and a Strategy, the U.S. National Academy of Sciences, in 2007, precipitated a major change in the way toxicity testing is conducted. It envisions increased efficiency in toxicity testing and decreased animal usage by transitioning from current expensive and lengthy in vivo testing with qualitative endpoints to in vitro toxicity pathway assays on human cells or cell lines using robotic high-throughput screening with mechanistic quantitative parameters. Risk assessment in the exposed human population would focus on avoiding significant perturbations in these toxicity pathways. Computational systems biology models would be implemented to determine the dose-response models of perturbations of pathway function. Extrapolation of in vitro results to in vivo human blood and tissue concentrations would be based on pharmacokinetic models for the given exposure condition. This practice would enhance the human relevance of test results and would allow many more test agents to be covered than traditional toxicological testing strategies. As all the tools that are necessary to implement the vision are currently available or in an advanced stage of development, the key prerequisites to achieving this paradigm shift are a commitment to change in the scientific community, which could be facilitated by a broad discussion of the vision, and obtaining necessary resources to enhance current knowledge of pathway perturbations and pathway assays in humans and to implement computational systems biology models. Implementation of these strategies would result in a new toxicity testing paradigm firmly based on human biology. PMID:20574894
An Emphasis on Perception: Teaching Image Formation Using a Mechanistic Model of Vision.
ERIC Educational Resources Information Center
Allen, Sue; And Others
An effective way to teach the concept of image is to give students a model of human vision which incorporates a simple mechanism of depth perception. In this study two almost identical versions of a curriculum in geometrical optics were created. One used a mechanistic, interpretive eye model, and in the other the eye was modeled as a passive,…
Use of a vision model to quantify the significance of factors affecting target conspicuity
NASA Astrophysics Data System (ADS)
Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.
2006-05-01
When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.
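The model's core operation, comparing local contrast per spatial-frequency channel, can be sketched in one dimension. The band definitions and the difference-of-moving-averages bandpass below are illustrative assumptions, not the model's actual filters:

```python
import numpy as np

def bandpass(signal, s1, s2):
    """Crude bandpass: difference of two moving averages (widths s1 < s2)."""
    def smooth(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")
    return smooth(signal, s1) - smooth(signal, s2)

def channel_difference(a, b, bands=((1, 3), (3, 7), (7, 15))):
    """Mean absolute difference of band responses between two signals,
    one value per spatial-frequency channel; a 1-D stand-in for the
    per-channel image comparison the vision model performs."""
    return [float(np.abs(bandpass(a, s1, s2) - bandpass(b, s1, s2)).mean())
            for s1, s2 in bands]

# Background vs. background-with-target: a small added bump raises
# the contrast difference in at least one channel.
x = np.linspace(0, 2 * np.pi, 64)
scene = np.sin(x)
scene_with_target = scene + 0.3 * (np.abs(x - np.pi) < 0.3)
diffs = channel_difference(scene, scene_with_target)
```

A target is predicted to be conspicuous when one or more channels show a large discrepancy; identical images score zero in every channel, which is the baseline the observer-validation experiments calibrate against.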
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which is local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Zhai, Yi; Wang, Yan; Wang, Zhaoqi; Liu, Yongji; Zhang, Lin; He, Yuanqing; Chang, Shengjiang
2014-01-01
An achromatic element eliminating only longitudinal chromatic aberration (LCA) while maintaining transverse chromatic aberration (TCA) is established for the eye model, which involves the angle formed by the visual and optical axes. To investigate the impacts of higher-order aberrations on vision, measured higher-order aberration data from human eyes at three typical levels are introduced into the eye model along the visual axis. Moreover, three kinds of individual eye models are established to investigate the impacts on vision of higher-order aberrations, chromatic aberration (LCA+TCA), LCA and TCA under the photopic condition, respectively. Results show that for most human eyes, the impact of chromatic aberration on vision is much stronger than that of higher-order aberrations, and the LCA component of chromatic aberration dominates. The impact of TCA is approximately equal to that of normal-level higher-order aberrations and can be ignored when LCA is present.
Quantum vision in three dimensions
NASA Astrophysics Data System (ADS)
Roth, Yehuda
We present four models for describing 3-D vision. Similar to the mirror scenario, our models allow 3-D vision without additional accessories such as stereoscopic glasses or a hologram film. The four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of the selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. Technology: the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. Artificial intelligence: in the desire to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.
Advancing adverse outcome pathways for integrated toxicology and regulatory applications
Recent regulatory efforts in many countries have focused on a toxicological pathway-based vision for human health assessments relying on in vitro systems and predictive models to generate the toxicological data needed to evaluate chemical hazard. A pathway-based vision is equally...
Operator-coached machine vision for space telerobotics
NASA Technical Reports Server (NTRS)
Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.
1991-01-01
A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system which would demonstrate the feasibility of highly interactive operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
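The edge data the system fits against comes from a modified Sobel operator. The standard 3x3 version can be sketched as follows (a plain implementation for illustration, not the authors' modified one):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude using the 3x3 Sobel kernels, computed over
    the valid region only (output is 2 rows/cols smaller)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge produces a strong response along the edge columns:
step = np.zeros((5, 6))
step[:, 3:] = 1.0
mag = sobel_magnitude(step)
```

In the described pipeline, responses like these provide the image-gradient evidence to which the operator-designated polyhedron edges are fit by least squares.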
Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder.
Kheradpisheh, Saeed R; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée
2016-01-01
View-invariant object recognition is a challenging problem that has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g., 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best models for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs on a view-invariant object recognition task using the same set of images and controlling the kinds of transformation (position, scale, rotation in plane, and rotation in depth) as well as their magnitude, which we call "variation level." We used four object categories: car, ship, motorcycle, and animal. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs (proposed respectively by Hinton's group and Zisserman's group) on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position (much easier). This suggests that DCNNs would be reasonable models of human feed-forward vision. In addition, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research.
Integrating National Space Visions
NASA Technical Reports Server (NTRS)
Sherwood, Brent
2006-01-01
This paper examines value proposition assumptions for various models nations may use to justify, shape, and guide their space programs. Nations organize major societal investments like space programs to actualize national visions represented by leaders as investments in the public good. The paper defines nine 'vision drivers' that circumscribe the motivations evidently underpinning national space programs. It then describes 19 fundamental space activity objectives (eight extant and eleven prospective) that nations already do or could in the future use to actualize the visions they select. Finally the paper presents four contrasting models of engagement among nations, and compares these models to assess realistic bounds on the pace of human progress in space over the coming decades. The conclusion is that orthogonal engagement, albeit unlikely because it is unprecedented, would yield the most robust and rapid global progress.
Collaboration between human and nonhuman players in Night Vision Tactical Trainer-Shadow
NASA Astrophysics Data System (ADS)
Berglie, Stephen T.; Gallogly, James J.
2016-05-01
The Night Vision Tactical Trainer - Shadow (NVTT-S) is a U.S. Army-developed training tool designed to improve critical Manned-Unmanned Teaming (MUMT) communication skills for payload operators in Unmanned Aerial Sensor (UAS) crews. The trainer is composed of several Government Off-The-Shelf (GOTS) simulation components and takes the trainee through a series of escalating engagements using tactically relevant, realistically complex scenarios involving a variety of manned, unmanned, aerial, and ground-based assets. The trainee is the only human player in the game and must collaborate, from a web-based mock operating station, with various non-human players via spoken natural language over simulated radio in order to execute the training missions successfully. Non-human players are modeled in two complementary layers: OneSAF provides basic background behaviors for entities, while NVTT provides higher-level models that control entity actions based on intent extracted from the trainee's spoken natural dialog with game entities. Dialog structure is modeled on Army standards for communication and verbal protocols. This paper presents an architecture that integrates the U.S. Army's Night Vision Image Generator (NVIG), One Semi-Automated Forces (OneSAF), a flight dynamics model, and Commercial Off-The-Shelf (COTS) speech recognition and text-to-speech products to effect an environment with sufficient entity counts and fidelity to enable meaningful teaching and reinforcement of critical communication skills. It further demonstrates the model dynamics and synchronization mechanisms employed to execute purpose-built training scenarios, and to achieve ad-hoc collaboration on-the-fly between human and non-human players in the simulated environment.
Classification Objects, Ideal Observers & Generative Models
ERIC Educational Resources Information Center
Olman, Cheryl; Kersten, Daniel
2004-01-01
A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual…
Modeling Interval Temporal Dependencies for Complex Activities Understanding
2013-10-11
Subject terms: human activity modeling. The model was evaluated on two computer vision applications, human activity recognition and facial activity recognition; the results demonstrate the superior performance of the proposed approach.
Mechanisms, functions and ecology of colour vision in the honeybee.
Hempel de Ibarra, N; Vorobyev, M; Menzel, R
2014-06-01
Research in the honeybee has laid the foundations for our understanding of insect colour vision. The trichromatic colour vision of honeybees shares fundamental properties with primate and human colour perception, such as colour constancy, colour opponency, segregation of colour and brightness coding. Laborious efforts to reconstruct the colour vision pathway in the honeybee have provided detailed descriptions of neural connectivity and the properties of photoreceptors and interneurons in the optic lobes of the bee brain. The modelling of colour perception advanced with the establishment of colour discrimination models that were based on experimental data, the Colour-Opponent Coding and Receptor Noise-Limited models, which are important tools for the quantitative assessment of bee colour vision and colour-guided behaviours. Major insights into the visual ecology of bees have been gained combining behavioural experiments and quantitative modelling, and asking how bee vision has influenced the evolution of flower colours and patterns. Recently research has focussed on the discrimination and categorisation of coloured patterns, colourful scenes and various other groupings of coloured stimuli, highlighting the bees' behavioural flexibility. The identification of perceptual mechanisms remains of fundamental importance for the interpretation of their learning strategies and performance in diverse experimental tasks.
2013-11-01
Acoustic Measurement and Model Predictions for the Aural Nondetectability of Two Night-Vision Goggles, by Jeremy Gaston, Tim Mermagen, and Kelly Dickerson, Human Research and Engineering Directorate, ARL.
Review On Applications Of Neural Network To Computer Vision
NASA Astrophysics Data System (ADS)
Li, Wei; Nasrabadi, Nasser M.
1989-03-01
Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data form. Other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.
Delay effects in the human sensory system during balancing.
Stepan, Gabor
2009-03-28
Mechanical models of human self-balancing often use the Newtonian equations of inverted pendula. While these mathematical models are precise enough on the mechanical side, the ways humans balance themselves are still quite unexplored on the control side. Time delays in the sensory and motor neural pathways place essential limitations on the stabilization of the human body as a multiple inverted pendulum. The sensory systems supporting each other provide the necessary signals for these control tasks; but the more complicated the system is, the larger the delay it introduces. Human ageing, as well as our actual physical and mental state, affects the time delays in the neural system, and the mechanical structure of the human body also changes over a large range during our lives. The human balancing organ, the labyrinth, and the vision system have essentially adapted to the relatively large time delays and parameter regions occurring during balancing. The analytical study of simplified large-scale time-delayed models of balancing provides a Newtonian insight into the functioning of these organs that may also serve as a basis to support theories and hypotheses on balancing and vision.
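A toy version of the delay limitation described above can be simulated directly: a linearized inverted pendulum under delayed PD feedback stays upright for a short reflex delay but falls once the delay grows past a critical value, no matter how the controller is tuned. The sketch below is a minimal Euler simulation under assumed parameters (gains p, d and the delay values are illustrative, not fitted human data).

```python
def simulate(tau, p=20.0, d=5.0, a=9.81, dt=0.001, t_end=10.0):
    """Explicit-Euler simulation of a linearized inverted pendulum
    stabilized by delayed PD feedback:
        theta''(t) = a*theta(t) - p*theta(t - tau) - d*theta'(t - tau)
    where a = g/L is the destabilizing gravity term and tau is the
    reflex (sensory + motor) delay. Returns the angle history."""
    nd = int(round(tau / dt))          # delay expressed in time steps
    theta = [0.01] * (nd + 1)          # constant initial history: small tilt
    omega = [0.0] * (nd + 1)
    for _ in range(int(t_end / dt)):
        # feedback acts on the *delayed* state, gravity on the current one
        acc = a * theta[-1] - p * theta[-1 - nd] - d * omega[-1 - nd]
        theta.append(theta[-1] + dt * omega[-1])
        omega.append(omega[-1] + dt * acc)
    return theta
```

With these gains a 50 ms delay is easily tolerated, while a 600 ms delay (beyond the critical delay for a ~1 m pendulum) produces growing oscillations.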
Nian, Rui; Liu, Fang; He, Bo
2013-07-16
Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework is established to explore and understand the functional roles of the higher-order statistical structures of the visual stimulus in an underwater artificial vision system. The model is inspired by characteristics of the early human vision system such as modality, redundancy reduction, sparseness and independence, which appear to capture, respectively, Gabor-like basis functions, shape contours and complicated textures in the multiple-layer implementations. The simulation results show the effectiveness and consistency of the proposed approach on underwater images collected by autonomous underwater vehicles (AUVs).
Progress in building a cognitive vision system
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Lyons, Damian; Yue, Hong
2016-05-01
We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.
Fast ray-tracing of human eye optics on Graphics Processing Units.
Wei, Qi; Patkar, Saket; Pai, Dinesh K
2014-05-01
We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth of field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images.
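The core operation of any such eye ray tracer is refraction at an interface via Snell's law in vector form. The sketch below traces a single axis-parallel ray through one spherical refracting surface; the radius and indices are illustrative, loosely cornea-like values, and the result can be checked against the paraxial formula s' = n2·R/(n2 − n1).

```python
import math

def refract(d, n, eta):
    """Vector form of Snell's law. d: unit incident direction,
    n: unit surface normal opposing d, eta: n1/n2 (no total internal
    reflection is handled here)."""
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    return (eta * d[0] + k * n[0], eta * d[1] + k * n[1])

def axial_focus(h, R=7.8, n1=1.0, n2=1.336):
    """Trace a ray parallel to the optical axis at height h (mm) through
    one spherical refracting surface (vertex at x=0, centre at (R, 0))
    and return the x coordinate where it crosses the axis."""
    px = R - math.sqrt(R * R - h * h)       # intersection with the sphere
    nx, ny = (px - R) / R, h / R            # unit normal at the hit point
    tx, ty = refract((1.0, 0.0), (nx, ny), n1 / n2)
    s = -h / ty                             # distance to the axis crossing
    return px + s * tx
```

For a near-axis ray the traced crossing agrees with the paraxial focus at about 31 mm behind the vertex for these values; larger ray heights reveal spherical aberration, which is exactly the kind of effect analytical models gloss over.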
Perceptual organization in computer vision - A review and a proposal for a classificatory structure
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1993-01-01
The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for higher-order organisms and, analogously, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored from four vantage points. First, a brief history of perceptual organization research in both human and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research is proposed, to clarify both the nomenclature and the relationships among the many contributions. Third, the perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.
Landgraf, Steffen; Osterheider, Michael
2013-01-01
The causes of schizophrenia are still unknown. For the last 100 years, though, both “absent” and “perfect” vision have been associated with a lower risk for schizophrenia. Hence, vision itself and aberrations in visual functioning may be fundamental to the development and etiological explanations of the disorder. In this paper, we present the “Protection-Against-Schizophrenia” (PaSZ) model, which grades the risk for developing schizophrenia as a function of an individual's visual capacity. We review two vision perspectives: (1) “Absent” vision or how congenital blindness contributes to PaSZ and (2) “perfect” vision or how aberrations in visual functioning are associated with psychosis. First, we illustrate that, although congenitally blind and sighted individuals acquire similar world representations, blind individuals compensate for behavioral shortcomings through neurofunctional and multisensory reorganization. These reorganizations may indicate etiological explanations for their PaSZ. Second, we demonstrate that visuo-cognitive impairments are fundamental for the development of schizophrenia. Deteriorated visual information acquisition and processing contribute to higher-order cognitive dysfunctions and subsequently to schizophrenic symptoms. Finally, we provide different specific therapeutic recommendations for individuals who suffer from visual impairments (who never developed “normal” vision) and individuals who suffer from visual deterioration (who previously had “normal” visual skills). Rather than categorizing individuals as “normal” and “mentally disordered,” the PaSZ model uses a continuous scale to represent psychiatrically relevant human behavior. This not only provides a scientific basis for more fine-grained diagnostic assessments, earlier detection, and more appropriate therapeutic assignments, but it also outlines a trajectory for unraveling the causes of abnormal psychotic human self- and world-perception. PMID:23847557
Comparing visual representations across human fMRI and computational vision
Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.
2013-01-01
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
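The representational dissimilarity analysis used in this study can be sketched compactly: each model (or each searchlight sphere) yields a representational dissimilarity matrix (RDM) of pairwise distances between stimulus responses, and two RDMs are compared by correlating their upper triangles. The minimal pure-Python version below uses 1 − Pearson r as the dissimilarity measure (the cited work uses rank-based comparisons, so this is a simplified stand-in).

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(patterns):
    """Representational dissimilarity matrix (flattened upper triangle):
    1 - Pearson r between every pair of response patterns."""
    out = []
    for i in range(len(patterns)):
        for j in range(i + 1, len(patterns)):
            out.append(1.0 - pearson(patterns[i], patterns[j]))
    return out
```

Model-to-brain fit is then `pearson(rdm(neural_patterns), rdm(model_responses))`: a high value means the model and the cortical region disagree and agree about the same stimulus pairs.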
Are visual peripheries forever young?
Burnat, Kalina
2015-01-01
The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.
Computing Visible-Surface Representations,
1985-03-01
Terzopoulos, Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Proj… dynamically maintaining visible surface representations. Whether the intention is to model human vision or to design competent artificial vision systems
Vision Algorithms Catch Defects in Screen Displays
NASA Technical Reports Server (NTRS)
2014-01-01
Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2003-08-01
Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. The human brain is found to be able to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes, such as clustering, perceptual grouping, and the separation of figure from ground, are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy together with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models, effectively resolve uncertainty and ambiguity, and provide a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.
A vision and strategy for the virtual physiological human in 2010 and beyond.
Hunter, Peter; Coveney, Peter V; de Bono, Bernard; Diaz, Vanessa; Fenner, John; Frangi, Alejandro F; Harris, Peter; Hose, Rod; Kohl, Peter; Lawford, Pat; McCormack, Keith; Mendes, Miriam; Omholt, Stig; Quarteroni, Alfio; Skår, John; Tegner, Jesper; Randall Thomas, S; Tollis, Ioannis; Tsamardinos, Ioannis; van Beek, Johannes H G M; Viceconti, Marco
2010-06-13
European funding under framework 7 (FP7) for the virtual physiological human (VPH) project has been in place now for nearly 2 years. The VPH network of excellence (NoE) is helping in the development of common standards, open-source software, freely accessible data and model repositories, and various training and dissemination activities for the project. It is also helping to coordinate the many clinically targeted projects that have been funded under the FP7 calls. An initial vision for the VPH was defined by framework 6 strategy for a European physiome (STEP) project in 2006. It is now time to assess the accomplishments of the last 2 years and update the STEP vision for the VPH. We consider the biomedical science, healthcare and information and communications technology challenges facing the project and we propose the VPH Institute as a means of sustaining the vision of VPH beyond the time frame of the NoE.
A methodology for coupling a visual enhancement device to human visual attention
NASA Astrophysics Data System (ADS)
Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman
2009-02-01
The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.
The Visual System of Zebrafish and its Use to Model Human Ocular Diseases
Gestri, Gaia; Link, Brian A; Neuhauss, Stephan CF
2011-01-01
Free swimming zebrafish larvae depend mainly on their sense of vision to evade predation and to catch prey. Hence there is strong selective pressure on the fast maturation of visual function and indeed the visual system already supports a number of visually-driven behaviors in the newly hatched larvae. The ability to exploit the genetic and embryonic accessibility of the zebrafish in combination with a behavioral assessment of visual system function has made the zebrafish a popular model to study vision and its diseases. Here, we review the anatomy, physiology and development of the zebrafish eye as the basis to relate the contributions of the zebrafish to our understanding of human ocular diseases. PMID:21595048
Color vision test for dichromatic and trichromatic macaque monkeys.
Koida, Kowa; Yokoi, Isao; Okazawa, Gouki; Mikami, Akichika; Widayati, Kanthi Arum; Miyachi, Shigehiro; Komatsu, Hidehiko
2013-11-01
Dichromacy is a color vision defect in which one of the three cone photoreceptor types is absent. Individuals with dichromacy are called dichromats (or sometimes "color-blind"), and their color discrimination performance has contributed significantly to our understanding of color vision. Macaque monkeys, which normally have trichromatic color vision nearly identical to that of humans, have been used extensively in neurophysiological studies of color vision. In the present study we employed two tests, a pseudoisochromatic color discrimination test and a monochromatic light detection test, to compare the color vision of genetically identified dichromatic macaques (Macaca fascicularis) with that of normal trichromatic macaques. In the color discrimination test, dichromats could not discriminate colors along the protanopic confusion line, though trichromats could. In the light detection test, the relative thresholds for longer-wavelength light were higher in the dichromats than in the trichromats, indicating that dichromats are less sensitive to longer-wavelength light. Because the dichromatic macaque is very rare, the present study provides valuable new information on the color vision behavior of dichromatic macaques, which may be a useful animal model of human dichromacy. The behavioral tests used in the present study have previously been used to characterize the color behaviors of trichromatic as well as dichromatic new world monkeys. The present results show that comparative studies of color vision employing similar tests may be feasible for examining the differences in color behavior between trichromatic and dichromatic individuals, although the genetic mechanisms of trichromacy/dichromacy are quite different between new world monkeys and macaques.
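The protanopic confusion line has a simple computational reading: two stimuli that differ only in L-cone excitation are metamers for an observer lacking the L cone. The toy sketch below illustrates this with made-up cone-excitation numbers (they are illustrative, not calibrated colorimetric values).

```python
def cone_signal(stimulus, cones=("L", "M", "S")):
    """Cone excitations available to an observer with the given cone types."""
    return tuple(stimulus[c] for c in cones)

# Two stimuli on a protanopic confusion line: they differ only in the
# L-cone excitation (values are illustrative, not calibrated).
a = {"L": 0.80, "M": 0.40, "S": 0.10}
b = {"L": 0.55, "M": 0.40, "S": 0.10}

# A trichromat sees all three channels and can tell the stimuli apart;
# a protanope (no L cone) receives identical signals from both.
trichromat_sees_difference = cone_signal(a) != cone_signal(b)
protanope_sees_difference = cone_signal(a, ("M", "S")) != cone_signal(b, ("M", "S"))
```

This is exactly the logic behind pseudoisochromatic plates: the target differs from the background only along the missing receptor's axis.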
High-dynamic-range scene compression in humans
NASA Astrophysics Data System (ADS)
McCann, John J.
2006-02-01
Single-pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems, which apply S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable, scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model scene-dependent appearances, the problem remains that an analysis of the scene is needed to calculate the scene-dependent strength of each filter at each frequency.
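The Stockham-style low-spatial-frequency approach can be illustrated on a 1-D toy "scene": attenuate the low-pass component of log luminance while preserving high-frequency local contrast. The sketch below is a crude stand-in for the filters discussed (the filter radius and attenuation factor are arbitrary assumptions); notably, it also produces halo artifacts near the shadow edge, which is one symptom of why a fixed global filter is not enough.

```python
import math

def box_blur(x, radius):
    """Moving-average low-pass filter with edge clamping."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def compress(scene, k=0.5, radius=8):
    """Attenuate the low-spatial-frequency component of log luminance by
    factor k, keeping the high-frequency (local contrast) component."""
    logs = [math.log10(v) for v in scene]
    low = box_blur(logs, radius)
    # output log = k*low + (log - low) = log - (1 - k)*low
    return [10 ** (l - (1.0 - k) * lw) for l, lw in zip(logs, low)]

# Toy scene: shadow region (~1 cd) and sunlit region (~1000 cd),
# each carrying 10% texture detail.
scene = [1.0, 1.1] * 20 + [1000.0, 1100.0] * 20
out = compress(scene)
```

Away from the boundary, the 1000:1 illumination range is squeezed to roughly 30:1 while the 1.1:1 local texture ratio survives almost unchanged.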
A multiscale Markov random field model in wavelet domain for image segmentation
NASA Astrophysics Data System (ADS)
Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan
2017-07-01
The human vision system performs feature detection, learning and selective attention, with properties of hierarchy and bidirectional connection realized in the form of neural populations. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some image processing functions of the vision system. For an input scene, our model provides sparse representations using wavelet transforms and extracts the scene's topological organization using an MRF. In addition, the hierarchy property of the vision system is simulated using a pyramid framework in our model. There are two information flows in our model, i.e., a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled simply by two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with such biologically inspired properties, our model can be used to accomplish different image segmentation tasks, such as edge detection and region segmentation.
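The wavelet side of such a model can be illustrated with the simplest case, a one-level Haar transform: the signal splits into a half-resolution approximation (the next pyramid level) and a detail band that is sparse wherever the signal is locally smooth. This is a generic sketch, not the specific wavelet used in the paper.

```python
def haar_1d(x):
    """One-level 1-D Haar transform (x must have even length).
    Returns (approximation, detail): pairwise averages and half-differences."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def ihaar_1d(approx, detail):
    """Exact inverse of haar_1d: perfect reconstruction."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out
```

A multiscale pyramid is obtained by applying `haar_1d` recursively to the approximation band; the MRF then operates on the coefficients at each level.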
Human infrared vision is triggered by two-photon chromophore isomerization
Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof
2014-01-01
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The human eye structure and the absorption spectra of pigments limit our visual perception of light. Our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and display quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound 11-cis-retinyl-propylamine Schiff base demonstrate the direct isomerization of visual chromophore by a two-photon chromophore isomerization. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
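The quadratic dependence on laser power that signals a two-photon process has a simple quantitative signature: on a log-log plot of response versus power, a one-photon process has slope 1 and a two-photon process slope 2. The sketch below fits that slope with a least-squares line; the power and response values are a synthetic illustration, not data from the study.

```python
import math

def loglog_slope(powers, responses):
    """Least-squares slope of log(response) vs log(power). A slope near 1
    indicates a one-photon (linear) process; near 2, a two-photon process."""
    xs = [math.log(p) for p in powers]
    ys = [math.log(r) for r in responses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

powers = [1.0, 2.0, 4.0, 8.0]
two_photon = [0.5 * p * p for p in powers]   # rate proportional to I^2 (toy constant)
one_photon = [0.5 * p for p in powers]       # rate proportional to I
```

This is the same diagnostic the authors apply to photoreceptor responses above 900 nm.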
Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues
2014-10-28
Keywords: stereopsis, binocular vision, optometry, depth perception, 3D vision, 3D human factors, stereoscopic displays, S3D, virtual environment. Distribution A: approved for public release.
Episodic Reasoning for Vision-Based Human Action Recognition
Martinez-del-Rincon, Jesus
2014-01-01
Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretations and video analysis applications, few have realized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by only analyzing body postures. On the contrary, this task should be supported by profound knowledge of human agency nature and its tight connection to the reasons and motivations that explain it. The combination of this knowledge and the knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has in improving the accuracy of a computer vision system for human action recognition. This work also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning. PMID:24959602
Smart unattended sensor networks with scene understanding capabilities
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2006-05-01
Unattended sensor systems are new technologies intended to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen in many nodes of a network simultaneously. But the number of control personnel is always limited, and the attention of human operators may be drawn to particular network nodes while a more dangerous threat goes unnoticed in other nodes. Sensor networks would be more effective if equipped with a system that is similar to human vision in its ability to understand visual information. For that, human vision uses a coarse but wide peripheral system that tracks motion and regions of interest, a narrow but precise foveal system that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object context and resolves ambiguity and uncertainty in the visual information. Biologically inspired network-symbolic models convert image information into an "understandable" network-symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between the Visual and Object Buffers and the top-level knowledge system.
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain's ability to emulate similar graph/network models has been established. Symbols, predicates, and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This makes it possible to create intelligent computer vision systems for design and manufacturing.
Autonomous spacecraft landing through human pre-attentive vision.
Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E
2012-06-01
In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft onto extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. Up to the present, the major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft with optimality principles that guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence, and timing of the retargeting.
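The pre-attentive saliency idea in the abstract above can be illustrated with a minimal center-surround sketch: regions that differ strongly from their surroundings score high, and the highest-scoring pixel becomes a candidate retargeting site. This is an illustrative stand-in (box blurs instead of the paper's actual saliency model), not the authors' implementation.

```python
import numpy as np

def box_blur(img, k):
    """Box blur with edge padding (k odd); a crude surrogate for a Gaussian."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dr in range(k):
        for dc in range(k):
            out += p[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

def pick_salient_site(image, center_k=3, surround_k=15):
    """Center-surround saliency (difference of blurs at two scales);
    return the (row, col) of the most salient pixel as a candidate target."""
    saliency = np.abs(box_blur(image, center_k) - box_blur(image, surround_k))
    return np.unravel_index(np.argmax(saliency), saliency.shape)
```

A bright, locally distinct feature (a boulder or crater rim against flat terrain) maximizes this map, which is the behavior the retargeting step relies on.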
Perceptual learning in a non-human primate model of artificial vision
Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.
2016-01-01
Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058
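A simulated-phosphene display like the one described above can be sketched by rendering each phosphene as a Gaussian blob whose brightness samples the underlying image at its center. The Gaussian shape, the sampling rule, and the sigma value are illustrative assumptions, not the study's actual simulation.

```python
import numpy as np

def render_phosphenes(image, centers, sigma=2.0):
    """Render a crude simulated-phosphene view of a grayscale image in [0, 1].
    Each phosphene at (row, col) contributes a Gaussian blob scaled by the
    image intensity at its center."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w))
    for r, c in centers:
        out += image[r, c] * np.exp(-((yy - r) ** 2 + (xx - c) ** 2)
                                    / (2 * sigma ** 2))
    return np.clip(out, 0.0, 1.0)
```

Denser phosphene patterns sample the letter more finely, which is consistent with the abstract's finding that learning rates scaled with mean phosphene density.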
An interactive framework for acquiring vision models of 3-D objects from 2-D images.
Motai, Yuichi; Kak, Avinash
2004-02-01
This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although this reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. As another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We show results on both polygonal objects and objects containing curved features.
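The epipolar-line assistance described above rests on standard epipolar geometry: for a fundamental matrix F and a point x in image 1, the corresponding point in image 2 must lie on the line l' = F x. A minimal sketch (not the paper's code) of computing that line:

```python
import numpy as np

def epipolar_line(F, x):
    """Given a 3x3 fundamental matrix F and a pixel (u, v) in image 1,
    return coefficients (a, b, c) of the epipolar line a*u' + b*v' + c = 0
    in image 2, normalized so that point-line distances are in pixels."""
    u, v = x
    l = F @ np.array([u, v, 1.0])
    return l / np.linalg.norm(l[:2])
```

The UI then only needs to draw this line on the second image; any human-marked correspondence far from it can be flagged as inconsistent.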
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science, and Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.
Human performance models for computer-aided engineering
NASA Technical Reports Server (NTRS)
Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)
1989-01-01
This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only under low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain's ability to emulate similar graph/network models has been established, which marks an important paradigm shift in our knowledge about the brain, from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes that region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are the results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotics and computer vision industries.
Chinellato, Eris; Del Pobil, Angel P
2009-06-01
The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.
Course for undergraduate students: analysis of the retinal image quality of a human eye model
NASA Astrophysics Data System (ADS)
del Mar Pérez, Maria; Yebra, Ana; Fernández-Oliveras, Alicia; Ghinea, Razvan; Ionescu, Ana M.; Cardona, Juan C.
2014-07-01
In the teaching of Vision Physics or Physiological Optics, knowledge and analysis of the aberrations that the human eye presents are of great interest, since this information allows a proper evaluation of the quality of the retinal image. The objective of the present work is for students to acquire the competencies that will allow them to evaluate the optical quality of the human visual system for emmetropic and ametropic eyes, both with and without optical compensation. For this purpose, an optical system corresponding to the Navarro-Escudero eye model, which allows calculating and evaluating the aberrations of this eye model under different ametropic conditions, was developed using the OSLO LT software. The optical quality of the visual system is assessed through determination of the third- and fifth-order aberration coefficients, the spot diagram, wavefront analysis, and calculation of the Point Spread Function and the Modulation Transfer Function for ametropic individuals with myopia or hyperopia, both with and without optical compensation. This course is expected to be of great interest for students of Optics and Optometry, the final courses of Physics, or medical sciences related to human vision.
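The PSF and MTF computations the students perform in OSLO follow a standard Fourier-optics recipe that can be sketched independently of any ray-tracing package: the PSF is the squared magnitude of the Fourier transform of the generalized pupil function (pupil aperture times the wavefront phase factor), and the MTF is the normalized magnitude of the PSF's Fourier transform. The grid size and pupil radius below are illustrative.

```python
import numpy as np

def psf_and_mtf(wavefront_waves, pupil):
    """PSF and MTF from a wavefront map (in waves) over a binary pupil mask.
    PSF = |FFT(generalized pupil function)|^2, normalized to unit volume;
    MTF = |FFT(PSF)| normalized to 1 at zero frequency."""
    gpf = pupil * np.exp(2j * np.pi * wavefront_waves)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(gpf))) ** 2
    psf /= psf.sum()
    mtf = np.abs(np.fft.fft2(psf))
    return psf, mtf / mtf[0, 0]
```

With a zero wavefront this yields the diffraction-limited case; feeding in a defocus or coma map shows the students how each aberration degrades the MTF.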
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
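ZISC hardware implements the Restricted Coulomb Energy (RCE) family of radial-basis networks: each trained example becomes a prototype with an influence field, conflicting fields shrink, and an input is classified by the field(s) it falls inside. The toy below sketches that scheme in software, assuming an L1 distance (as in ZISC chips) and an arbitrary maximum radius; it is illustrative, not the ZiCAM firmware.

```python
import numpy as np

class RCEClassifier:
    """Toy Restricted Coulomb Energy network: prototypes with shrinking
    influence fields, trained one example at a time."""

    def __init__(self, max_radius=10.0):
        self.protos, self.labels, self.radii = [], [], []
        self.max_radius = max_radius

    def _dist(self, a, b):
        return float(np.sum(np.abs(a - b)))  # L1 metric, as in ZISC hardware

    def fit_one(self, x, y):
        for i, p in enumerate(self.protos):
            if self.labels[i] != y:
                # shrink conflicting fields so they exclude the new example
                self.radii[i] = min(self.radii[i], self._dist(p, x) - 1e-9)
        covered = any(self.labels[i] == y and self._dist(p, x) <= self.radii[i]
                      for i, p in enumerate(self.protos))
        if not covered:  # commit a new prototype only if no own-class field fires
            self.protos.append(x)
            self.labels.append(y)
            self.radii.append(self.max_radius)

    def predict(self, x):
        hits = [self.labels[i] for i, p in enumerate(self.protos)
                if self._dist(p, x) <= self.radii[i]]
        return hits[0] if hits else None  # None = "unknown", no field fired
```

The "unknown" outcome is what makes this style of network attractive for validating highly variable patterns: anything outside every learned field is flagged rather than force-classified.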
Gradient parameter and axial and field rays in the gradient-index crystalline lens model
NASA Astrophysics Data System (ADS)
Pérez, M. V.; Bao, C.; Flores-Arias, M. T.; Rama, M. A.; Gómez-Reino, C.
2003-09-01
Gradient-index models of the human lens have received wide attention in optometry and vision sciences for considering how changes in the refractive index profile with age and accommodation may affect refractive power. This paper uses the continuous asymmetric bi-elliptical model to determine gradient parameter and axial and field rays of the human lens in order to study the paraxial propagation of light through the crystalline lens of the eye.
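In the paraxial GRIN formalism used in work of this kind, a medium with a parabolic index profile and constant gradient parameter g has closed-form axial and field rays, and every paraxial ray is a linear combination of the two. The sketch below assumes a constant g for simplicity; the paper's bi-elliptical lens model has a position-dependent gradient, so this is only the simplest illustrative case.

```python
import numpy as np

def axial_field_rays(g, z):
    """Axial ray Ha (unit height, zero slope at z = 0) and field ray Hf
    (zero height, unit slope) in a paraxial GRIN medium with constant
    gradient parameter g. Any paraxial ray is y(z) = y0*Ha(z) + y0'*Hf(z)."""
    Ha = np.cos(g * z)
    Hf = np.sin(g * z) / g
    return Ha, Hf
```

The pair (Ha, Hf) plays the same role as the ABCD matrix of a homogeneous system: refractive power and cardinal points of the lens model follow from their values at the exit surface.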
Mancuso, Katherine; Mauck, Matthew C; Kuchenbecker, James A; Neitz, Maureen; Neitz, Jay
2010-01-01
In 1993, DeValois and DeValois proposed a 'multi-stage color model' to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson's Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined in a later opponent organization, which has been the accepted dogma in color vision. DeValois' model attempts to satisfy the long-remaining question of how the visual system separates luminance information from color, but what are the cellular mechanisms that establish the complicated neural wiring and higher-order operations required by the Multi-stage Model? During the last decade and a half, results from molecular biology have shed new light on the evolution of primate color vision, thus constraining the possibilities for the visual circuits. The evolutionary constraints allow for an extension of DeValois' model that is more explicit about the biology of color vision circuitry, and it predicts that human red-green colorblindness can be cured using a retinal gene therapy approach to add the missing photopigment, without any additional changes to the post-synaptic circuitry.
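The second-stage recombination that opponent-process models describe can be written down directly: cone signals are combined into red-green, blue-yellow, and luminance channels. The weights below are the textbook schematic form, used here for illustration; actual cortical weighting (the subject of DeValois' multi-stage model) is more elaborate.

```python
def opponent_channels(L, M, S):
    """Schematic two-stage recombination of cone signals (L, M, S) into
    red-green, blue-yellow, and luminance channels."""
    red_green = L - M
    blue_yellow = S - (L + M) / 2.0
    luminance = L + M
    return red_green, blue_yellow, luminance
```

A protanope or deuteranope, lacking one member of the L/M pair, gets no usable red-green signal, which is the deficit the gene therapy approach described above aims to repair by restoring the missing photopigment.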
NASA Astrophysics Data System (ADS)
Putnam, Nicole Marie
In order to study the limits of spatial vision in normal human subjects, it is important to look at and near the fovea. The fovea is the specialized part of the retina, the light-sensitive multi-layered neural tissue that lines the inner surface of the human eye, where the cone photoreceptors are smallest (approximately 2.5 microns or 0.5 arcmin) and cone density reaches a peak. In addition, there is a 1:1 mapping from the photoreceptors to the brain in this central region of the retina. As a result, the best spatial sampling is achieved in the fovea and it is the retinal location used for acuity and spatial vision tasks. However, vision is typically limited by the blur induced by the normal optics of the eye and clinical tests of foveal vision and foveal imaging are both limited due to the blur. As a result, it is unclear what the perceptual benefit of extremely high cone density is. Cutting-edge imaging technology, specifically Adaptive Optics Scanning Laser Ophthalmoscopy (AOSLO), can be utilized to remove this blur, zoom in, and as a result visualize individual cone photoreceptors throughout the central fovea. This imaging combined with simultaneous image stabilization and targeted stimulus delivery expands our understanding of both the anatomical structure of the fovea on a microscopic scale and the placement of stimuli within this retinal area during visual tasks. The final step is to investigate the role of temporal variables in spatial vision tasks since the eye is in constant motion even during steady fixation. In order to learn more about the fovea, it becomes important to study the effect of this motion on spatial vision tasks. This dissertation steps through many of these considerations, starting with a model of the foveal cone mosaic imaged with AOSLO. We then use this high resolution imaging to compare anatomical and functional markers of the center of the normal human fovea. 
Finally, we investigate the role of natural and manipulated fixational eye movements in foveal vision, specifically looking at a motion detection task, contrast sensitivity, and image fading.
NASA Astrophysics Data System (ADS)
1985-01-01
A new invention by scientists who have copied the structure of the human eye will help replace a human telescope-watching astronomer with a robot. It will be possible to provide technical vision not only for robot astronomers but also for their industrial fellow robots. So far, an artificial eye with dimensions close to those of a human eye discerns only black-and-white images, but a second model of the eye is intended to perceive colors as well. Polymers suited for the role of the coat of the eye, the lens, and the vitreous body were applied. The retina has been replaced with a bundle of very fine glass filaments through which light rays reach photomultipliers, which can be positioned outside the artificial eye. The main challenge is to prevent large losses in the light guide.
A Review on Human Activity Recognition Using Vision-Based Method.
Zhang, Shugang; Wei, Zhiqiang; Nie, Jie; Huang, Lei; Wang, Shuang; Li, Zhen
2017-01-01
Human activity recognition (HAR) aims to recognize activities from a series of observations on the actions of subjects and the environmental conditions. The vision-based HAR research is the basis of many applications including video surveillance, health care, and human-computer interaction (HCI). This review highlights the advances of state-of-the-art activity recognition approaches, especially for the activity representation and classification methods. For the representation methods, we sort out a chronological research trajectory from global representations to local representations, and recent depth-based representations. For the classification methods, we conform to the categorization of template-based methods, discriminative models, and generative models and review several prevalent methods. Next, representative and available datasets are introduced. Aiming to provide an overview of those methods and a convenient way of comparing them, we classify existing literatures with a detailed taxonomy including representation and classification methods, as well as the datasets they used. Finally, we investigate the directions for future research.
Exploring Human Cognition Using Large Image Databases.
Griffiths, Thomas L; Abbott, Joshua T; Hsu, Anne S
2016-07-01
Most cognitive psychology experiments evaluate models of human cognition using a relatively small, well-controlled set of stimuli. This approach stands in contrast to current work in neuroscience, perception, and computer vision, which have begun to focus on using large databases of natural images. We argue that natural images provide a powerful tool for characterizing the statistical environment in which people operate, for better evaluating psychological theories, and for bringing the insights of cognitive science closer to real applications. We discuss how some of the challenges of using natural images as stimuli in experiments can be addressed through increased sample sizes, using representations from computer vision, and developing new experimental methods. Finally, we illustrate these points by summarizing recent work using large image databases to explore questions about human cognition in four different domains: modeling subjective randomness, defining a quantitative measure of representativeness, identifying prior knowledge used in word learning, and determining the structure of natural categories. Copyright © 2016 Cognitive Science Society, Inc.
Human Systems Integration at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
McCandless, Jeffrey
2017-01-01
The Human Systems Integration Division focuses on the design and operations of complex aerospace systems through analysis, experimentation and modeling. With over a dozen labs and over 120 people, the division conducts research to improve safety, efficiency and mission success. Areas of investigation include applied vision research which will be discussed during this seminar.
Multiscale Enaction Model (MEM): the case of complexity and “context-sensitivity” in vision
Laurent, Éric
2014-01-01
I review the data on human visual perception that reveal the critical role played by non-visual contextual factors influencing visual activity. The global perspective that progressively emerges reveals that vision is sensitive to multiple couplings with other systems whose nature and levels of abstraction in science are highly variable. Contrary to some views where vision is immersed in modular hard-wired modules, rather independent from higher-level or other non-cognitive processes, converging data gathered in this article suggest that visual perception can be theorized in the larger context of biological, physical, and social systems with which it is coupled, and through which it is enacted. Therefore, any attempt to model complexity and multiscale couplings, or to develop a complex synthesis in the fields of mind, brain, and behavior, shall involve a systematic empirical study of both connectedness between systems or subsystems, and the embodied, multiscale and flexible teleology of subsystems. The conceptual model (Multiscale Enaction Model [MEM]) that is introduced in this paper finally relates empirical evidence gathered from psychology to biocomputational data concerning the human brain. Both psychological and biocomputational descriptions of MEM are proposed in order to help fill in the gap between scales of scientific analysis and to provide an account for both the autopoiesis-driven search for information, and emerging perception. PMID:25566115
Restoration of Vision in the pde6β-deficient Dog, a Large Animal Model of Rod-cone Dystrophy
Petit, Lolita; Lhériteau, Elsa; Weber, Michel; Le Meur, Guylène; Deschamps, Jack-Yves; Provost, Nathalie; Mendes-Madeira, Alexandra; Libeau, Lyse; Guihal, Caroline; Colle, Marie-Anne; Moullier, Philippe; Rolling, Fabienne
2012-01-01
Defects in the β subunit of rod cGMP phosphodiesterase 6 (PDE6β) are associated with autosomal recessive retinitis pigmentosa (RP), a childhood blinding disease with early retinal degeneration and vision loss. To date, there is no treatment for this pathology. The aim of this preclinical study was to test recombinant adeno-associated virus (AAV)-mediated gene addition therapy in the rod-cone dysplasia type 1 (rcd1) dog, a large animal model of naturally occurring PDE6β deficiency that strongly resembles the human pathology. A total of eight rcd1 dogs were injected subretinally with AAV2/5RK.cpde6β (n = 4) or AAV2/8RK.cpde6β (n = 4). In vivo and post-mortem morphological analysis showed a significant preservation of the retinal structure in transduced areas of both AAV2/5RK.cpde6β- and AAV2/8RK.cpde6β-treated retinas. Moreover, substantial rod-derived electroretinography (ERG) signals were recorded as soon as 1 month postinjection (35% of normal eyes) and remained stable for at least 18 months (the duration of the study) in treated eyes. Rod-responses were undetectable in untreated contralateral eyes. Most importantly, dim-light vision was restored in all treated rcd1 dogs. These results demonstrate for the first time that gene therapy effectively restores long-term retinal function and vision in a large animal model of autosomal recessive rod-cone dystrophy, and provide great promise for human treatment. PMID:22828504
Atoms of recognition in human and computer vision.
Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel
2016-03-08
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
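A search for minimal recognizable images proceeds by repeatedly reducing a recognizable patch and testing whether recognition survives; a patch is minimal when every further reduction destroys it. A sketch of the reduction step, with an assumed 0.8 crop fraction and a crude 2x decimation standing in for the paper's exact reduction parameters:

```python
import numpy as np

def descendants(patch, crop_frac=0.8):
    """Candidate reductions of an image patch: four corner crops plus one
    resolution reduction. A patch is a candidate minimal image when it is
    recognizable but none of its descendants are."""
    h, w = patch.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    crops = [patch[:ch, :cw], patch[:ch, w - cw:],
             patch[h - ch:, :cw], patch[h - ch:, w - cw:]]
    return crops + [patch[::2, ::2]]  # crude 2x decimation
```

The psychophysical finding above is that at this minimal level, each descendant of a recognizable patch can drop human recognition drastically, a sensitivity the tested models fail to reproduce.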
The Ecological Research Program (ERP) of the EPA Office of Research and Development has the vision of a comprehensive theory and practice for characterizing, quantifying, and valuing ecosystem services and their relationship to human well-being for environmental decision making. ...
NASA Technical Reports Server (NTRS)
Crouch, Roger
2004-01-01
Viewgraphs on NASA's transition to its vision for space exploration is presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.
2017-01-01
Population Spotting Using "Big Data": Validating the Human Performance Concept of Operations Analytic Vision (AFRL-SA-WP-SR-2017-0001)
The 3D model control of image processing
NASA Technical Reports Server (NTRS)
Nguyen, An H.; Stark, Lawrence
1989-01-01
Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.
Vision Systems with the Human in the Loop
NASA Astrophysics Data System (ADS)
Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard
2005-12-01
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus they will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.
Relating Standardized Visual Perception Measures to Simulator Visual System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Sweet, Barbara T.
2013-01-01
Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
Algorithms and architectures for robot vision
NASA Technical Reports Server (NTRS)
Schenker, Paul S.
1990-01-01
The scope of the current work is to develop practical sensing implementations for robots operating in complex, partially unstructured environments. A focus of this work is to develop object models and estimation techniques which are specific to the requirements of robot locomotion, approach and avoidance, and grasp and manipulation. Such problems have to date received limited attention in either computer or human vision; in essence, the question is not only how perception is modeled in general, but also what the functional purpose of its underlying representations is. As in the past, researchers are drawing on ideas from both the psychological and machine vision literature. Of particular interest is the development of 3-D shape and motion estimates for complex objects when given only partial and uncertain information and when such information is incrementally accrued over time. Current studies consider the use of surface motion, contour, and texture information, with the longer-range goal of developing a fused sensing strategy based on these sources and others.
Learning prosthetic vision: a virtual-reality study.
Chen, Spencer C; Hallum, Luke E; Lovell, Nigel H; Suaning, Gregg J
2005-09-01
Acceptance of prosthetic vision will depend heavily on the ability of recipients to extract useful information from such vision. Training strategies to accelerate learning and maximize visual comprehension would need to be designed in light of the factors affecting human learning under prosthetic vision. Some of these potential factors were examined in a visual acuity study using the Landolt C optotype under virtual-reality simulation of prosthetic vision. Fifteen normally sighted subjects were tested for 10-20 sessions. Potential learning factors were tested at p < 0.05 with regression models. Learning was most evident across sessions, though 17% of sessions did show significant within-session trends. Learning was highly concentrated toward a critical range of optotype sizes, and subjects were less capable of identifying the closed optotype (a Landolt C with no gap, forming a closed annulus). Training for implant recipients should target these critical sizes and the closed optotype to extend the limit of visual comprehension. Although there was no evidence that image processing affected overall learning, subjects showed varying personal preferences.
NASA Astrophysics Data System (ADS)
Kim, J.
2016-12-01
Considering high levels of uncertainty, epistemological conflicts over facts and values, and a sense of urgency, normal paradigm-driven science will be insufficient to mobilize people and nations toward sustainability. The conceptual framework to bridge societal system dynamics with those of the natural ecosystems in which humanity operates remains deficient. The key to understanding their coevolution is to understand 'self-organization.' An information-theoretic approach may shed light on a potential framework that makes it possible not only to bridge humans and nature but also to generate useful knowledge for understanding and sustaining the integrity of ecological-societal systems. How can information theory help us understand the interface between ecological systems and social systems? How can self-organizing processes be delineated and made to fulfil sustainability? How should the flow of information from data through models to decision-makers be evaluated? These are the core questions posed by sustainability science, in which visioneering (i.e., the engineering of vision) is an essential framework. Yet visioneering has neither a quantitative measure nor an information-theoretic framework to work with and teach. This presentation is an attempt to accommodate the framework of self-organizing hierarchical open systems and visioneering within a common information-theoretic framework. A case study is presented on the UN/FAO's communal vision of climate-smart agriculture (CSA), which pursues a trilemma of efficiency, mitigation, and resilience. Challenges of delineating and facilitating self-organizing systems are discussed using transdisciplinary tools such as complex systems thinking, dynamic process network analysis, and multi-agent systems modeling. Acknowledgments: This study was supported by the Korea Meteorological Administration Research and Development Program under Grant KMA-2012-0001-A (WISE project).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.
Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.
Convergence of Human Genetics and Animal Studies: Gene Therapy for X-Linked Retinoschisis
Bush, Ronald A.; Wei, Lisa L.; Sieving, Paul A.
2015-01-01
Retinoschisis is an X-linked recessive genetic disease that leads to vision loss in males. X-linked retinoschisis (XLRS) typically affects young males; however, progressive vision loss continues throughout life. Although discovered in 1898 by Haas in two brothers, the underlying biology leading to blindness has become apparent only in the last 15 years with the advancement of human genetic analyses, generation of XLRS animal models, and the development of ocular monitoring methods such as the electroretinogram and optical coherence tomography. It is now recognized that retinoschisis results from cyst formations within the retinal layers that interrupt normal visual neurosignaling and compromise structural integrity. Mutations in the human retinoschisin gene have been correlated with disease severity of the human XLRS phenotype. Introduction of a normal human retinoschisin cDNA into retinoschisin knockout mice restores retinal structure and improves neural function, providing proof-of-concept that gene replacement therapy is a plausible treatment for XLRS. PMID:26101206
Connectionist model-based stereo vision for telerobotics
NASA Technical Reports Server (NTRS)
Hoff, William; Mathis, Donald
1989-01-01
Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
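The parameter-network idea can be approximated as a Hough-style voting sketch: discretize the plane parameters, and let each unit's activity be the support it accumulates from 3-D points consistent with its hypothesis. The plane form z = h + m_x·x + m_y·y, the grid values, and the tolerance are assumptions here, and the paper's gradient descent in an energy function is replaced by exhaustive evaluation of the units.

```python
import itertools

def plane_votes(points, heights, slopes_x, slopes_y, tol=0.05):
    """Hough-style voting over a discretized plane-parameter space.
    Each unit (h, m_x, m_y) hypothesizes the plane z = h + m_x*x + m_y*y;
    its activity counts the 3-D points consistent with that hypothesis."""
    activity = {}
    for h, mx, my in itertools.product(heights, slopes_x, slopes_y):
        support = sum(1 for (x, y, z) in points
                      if abs(z - (h + mx * x + my * y)) < tol)
        activity[(h, mx, my)] = support
    return activity

# Points sampled (noiselessly) from the plane z = 1.0 + 0.5*x + 0.0*y.
points = [(x * 0.1, y * 0.1, 1.0 + 0.05 * x) for x in range(5) for y in range(5)]
grid = [0.0, 0.5, 1.0]
activity = plane_votes(points, heights=grid, slopes_x=grid, slopes_y=grid)
best = max(activity, key=activity.get)
print(best)   # the most confident plane hypothesis: (1.0, 0.5, 0.0)
```

The unit with maximal activity plays the role of the most believed hypothesis; in the network formulation this winner emerges from the dynamics rather than an explicit argmax.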
The adaptive value of primate color vision for predator detection.
Pessoa, Daniel Marques Almeida; Maia, Rafael; de Albuquerque Ajuz, Rafael Cavalcanti; De Moraes, Pedro Zurvaino Palmeira Melo Rosa; Spyrides, Maria Helena Constantino; Pessoa, Valdir Filgueiras
2014-08-01
The complex evolution of primate color vision has puzzled biologists for decades. Primates are the only eutherian mammals that evolved an enhanced capacity for discriminating colors in the green-red part of the spectrum (trichromatism). However, while Old World primates present three types of cone pigments and are routinely trichromatic, most New World primates exhibit a color vision polymorphism, characterized by the occurrence of trichromatic and dichromatic females and obligatory dichromatic males. Even though this has stimulated a prolific line of inquiry, the selective forces and relative benefits influencing color vision evolution in primates are still under debate, with current explanations focusing almost exclusively on the advantages in finding food and detecting socio-sexual signals. Here, we evaluate a previously untested possibility: the adaptive value of primate color vision for predator detection. By combining color vision modeling data on New World and Old World primates with behavioral information from human subjects, we demonstrate that primates exhibiting better color discrimination (trichromats) outperform those with poorer color vision (dichromats) at detecting carnivoran predators against the green foliage background. The distribution of color vision found in extant anthropoid primates agrees with our results and may be explained by the advantages of trichromats and dichromats in detecting predators and insects, respectively.
Machine vision method for online surface inspection of easy open can ends
NASA Astrophysics Data System (ADS)
Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel
2006-10-01
The easy-open can end manufacturing process in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair-coating quality. This surface inspection is a visual check made by human inspectors. Because of the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling); an automatic, online inspection system based on machine vision has therefore been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. The system checks the total production, classifies ends in agreement with an expert human inspector, gives operators interpretable output for diagnosing failure causes and reducing mean time to repair, and allows the minimum acceptable can end repair-coating quality to be adjusted.
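A fuzzy accept/reject decision of the kind described can be sketched minimally. The membership functions, the input features (coverage and defect area), and all thresholds below are illustrative assumptions, not the paper's actual model.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership function: 0 outside [a, d],
    1 on [b, c], linear on the ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def coating_quality(coverage_pct, defect_area_pct):
    """Fuzzy acceptability of a can end: combine 'coverage is good'
    and 'defect area is small' with a min (fuzzy AND) operator."""
    good_coverage = trapezoid(coverage_pct, 90.0, 97.0, 100.0, 100.1)
    small_defects = trapezoid(defect_area_pct, -0.1, 0.0, 1.0, 3.0)
    return min(good_coverage, small_defects)   # degree of acceptability

decision = "accept" if coating_quality(99.0, 0.5) >= 0.5 else "reject"
print(decision)   # accept
```

The interpretability the abstract mentions comes from this structure: an operator can read off which membership (coverage or defects) pulled the acceptability down for a rejected end.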
Color Vision and Hue Categorization in Young Human Infants
ERIC Educational Resources Information Center
Bornstein, Marc H.; And Others
1976-01-01
The main objective of the present investigations was to determine whether or not young human infants see the physical spectrum in a categorical fashion as human adults and animals who possess color vision regularly do. (Author)
Stereo chromatic contrast sensitivity model to blue-yellow gratings.
Yang, Jiachen; Lin, Yancong; Liu, Yun
2016-03-07
As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection threshold for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, a quick and valid index of human visual performance and of various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance, and no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and build a model applicable there, for example for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method for checking human visual characteristics related to stereo blindness. In this study, a CRT screen was rotated clockwise and anticlockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. From the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled on the basis of the experimental data. The results show that the proposed model predicts human chromatic contrast sensitivity characteristics in 3D space well.
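The two computations named in the abstract (sensitivity as the inverse of the pooled cone contrast threshold, and mapping horizontal spatial frequency to the frequency on an inclined plane) can be sketched. The cosine foreshortening used for the frequency mapping is an assumption for illustration; the paper derives its own relationship from the geometry of the rotated screen.

```python
import math

def pooled_cone_contrast(dl_l, dm_m, ds_s):
    """Pooled cone contrast: vector length of the three cone
    contrast components (Delta L/L, Delta M/M, Delta S/S)."""
    return math.sqrt(dl_l**2 + dm_m**2 + ds_s**2)

def contrast_sensitivity(threshold):
    """Sensitivity is the inverse of the pooled cone contrast threshold."""
    return 1.0 / threshold

def plane_frequency(horizontal_freq, incline_deg):
    """Spatial frequency measured on a plane inclined by `incline_deg`,
    assuming simple cosine foreshortening of the horizontal frequency."""
    return horizontal_freq * math.cos(math.radians(incline_deg))

threshold = pooled_cone_contrast(0.0, 0.0, 0.02)   # S-cone-isolating grating
print(round(contrast_sensitivity(threshold), 1))   # 50.0
print(round(plane_frequency(1.0, 60.0), 2))        # 0.5
```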
Optimizing the temporal dynamics of light to human perception.
Rieiro, Hector; Martinez-Conde, Susana; Danielson, Andrew P; Pardo-Vazquez, Jose L; Srivastava, Nishit; Macknik, Stephen L
2012-11-27
No previous research has tuned the temporal characteristics of light-emitting devices to enhance brightness perception in human vision, despite the potential for significant power savings. The role of stimulus duration on perceived contrast is unclear, due to contradiction between the models proposed by Bloch and by Broca and Sulzer over 100 years ago. We propose that the discrepancy is accounted for by the observer's "inherent expertise bias," a type of experimental bias in which the observer's life-long experience with interpreting the sensory world overcomes perceptual ambiguities and biases experimental outcomes. By controlling for this and all other known biases, we show that perceived contrast peaks at durations of 50-100 ms, and we conclude that the Broca-Sulzer effect best describes human temporal vision. We also show that the plateau in perceived brightness with stimulus duration, described by Bloch's law, is a previously uncharacterized type of temporal brightness constancy that, like classical constancy effects, serves to enhance object recognition across varied lighting conditions in natural vision, although this is a constancy effect that normalizes perception across temporal modulation conditions. A practical outcome of this study is that tuning light-emitting devices to match the temporal dynamics of the human visual system's temporal response function will result in significant power savings.
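Bloch's law, which the study recasts as a temporal brightness constancy, can be written down directly: below a critical duration, threshold intensity I and duration t trade off as I·t = k, so shorter flashes need proportionally more light. The constant k and the 100 ms critical duration below are illustrative placeholders, not the paper's fitted values (the paper locates the perceived-contrast peak at 50-100 ms).

```python
def bloch_threshold_intensity(duration_ms, k=100.0, critical_ms=100.0):
    """Bloch's law: below the critical duration, threshold intensity I
    satisfies I * t = k, so I falls as 1/t; beyond it, I is constant."""
    t = min(duration_ms, critical_ms)
    return k / t

print(bloch_threshold_intensity(50.0))    # 2.0: shorter flash needs more light
print(bloch_threshold_intensity(200.0))   # 1.0: past the critical duration
```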
RPGR-Associated Retinal Degeneration in Human X-Linked RP and a Murine Model
Huang, Wei Chieh; Wright, Alan F.; Roman, Alejandro J.; Cideciyan, Artur V.; Manson, Forbes D.; Gewaily, Dina Y.; Schwartz, Sharon B.; Sadigh, Sam; Limberis, Maria P.; Bell, Peter; Wilson, James M.; Swaroop, Anand; Jacobson, Samuel G.
2012-01-01
Purpose. We investigated the retinal disease due to mutations in the retinitis pigmentosa GTPase regulator (RPGR) gene in human patients and in an Rpgr conditional knockout (cko) mouse model. Methods. XLRP patients with RPGR-ORF15 mutations (n = 35, ages at first visit 5–72 years) had clinical examinations, and rod and cone perimetry. Rpgr-cko mice, in which the proximal promoter and first exon were deleted ubiquitously, were back-crossed onto a BALB/c background, and studied with optical coherence tomography and electroretinography (ERG). Retinal histopathology was performed on a subset. Results. Different patterns of rod and cone dysfunction were present in patients. Frequently, there were midperipheral losses with residual rod and cone function in central and peripheral retina. Longitudinal data indicated that central rod loss preceded peripheral rod losses. Central cone-only vision with no peripheral function was a late stage. Less commonly, patients had central rod and cone dysfunction, but preserved, albeit abnormal, midperipheral rod and cone vision. Rpgr-cko mice had progressive retinal degeneration detectable in the first months of life. ERGs indicated relatively equal rod and cone disease. At late stages, there was greater inferior versus superior retinal degeneration. Conclusions. RPGR mutations lead to progressive loss of rod and cone vision, but show different patterns of residual photoreceptor disease expression. Knowledge of the patterns should guide treatment strategies. Rpgr-cko mice had onset of degeneration at relatively young ages and progressive photoreceptor disease. The natural history in this model will permit preclinical proof-of-concept studies to be designed and such studies should advance progress toward human therapy. PMID:22807293
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-01-01
Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and computer vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair, and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Astrophysics Data System (ADS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-02-01
Building a Shared Definitional Model of Long Duration Human Spaceflight
NASA Technical Reports Server (NTRS)
Arias, Diana; Orr, Martin; Whitmire, Alexandra; Leveton, Lauren; Sandoval, Luis
2012-01-01
Objective: To establish the need for a shared definitional model of long duration human spaceflight that would provide a framework and vision to facilitate communication, research, and practice. In 1956, on the eve of human space travel, Hubertus Strughold first proposed a "simple classification of the present and future stages of manned flight" that identified key factors, risks, and developmental stages for the evolutionary journey ahead. As we look to new destinations, we need a current, shared working definitional model of long duration human spaceflight to help guide our path. Here we describe our preliminary findings and outline potential approaches for the future development of a definition and broader classification system.
Small Colour Vision Variations and Their Effect in Visual Colorimetry
Keywords: color vision, human performance, test equipment, correlation techniques, statistical processes, colors, analysis of variance, aging (materials), colorimetry, brightness, anomalies, plastics, United Kingdom.
Early vision and focal attention
NASA Astrophysics Data System (ADS)
Julesz, Bela
1991-07-01
At the thirty-year anniversary of the introduction of the technique of computer-generated random-dot stereograms and random-dot cinematograms into psychology, the impact of the technique on brain research and on the study of artificial intelligence is reviewed. The main finding, that stereoscopic depth perception (stereopsis), motion perception, and preattentive texture discrimination are basically bottom-up processes which occur without the help of the top-down processes of cognition and semantic memory, greatly simplifies the study of these processes of early vision and permits the linking of human perception with monkey neurophysiology. Particularly interesting are the unexpected findings that stereopsis (assumed to be local) is a global process, while texture discrimination (assumed to be a global process, governed by statistics) is local, based on some conspicuous local features (textons). It is shown that the top-down process of "shape (depth) from shading" does not affect stereopsis, and some of the models of machine vision are evaluated. The asymmetry effect of human texture discrimination is discussed, together with recent nonlinear spatial filter models and a novel extension of the texton theory that can cope with the asymmetry problem. This didactic review attempts to introduce the physicist to the field of psychobiology and its problems, including metascientific problems of brain research, problems of scientific creativity, the state of artificial intelligence research (including connectionist neural networks) aimed at modeling brain activity, and the fundamental role of focal attention in mental events.
Machine Vision Giving Eyes to Robots. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1990
1990-01-01
This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)
ERIC Educational Resources Information Center
Xie, Jinyu; Huang, Erjia
2010-01-01
Based on a literature review from English language journals related to the field of human resource development (HRD), the conceptual framework for this study was derived from the models developed by American Society for Training and Development (ASTD) for HRD practice. This study compared and analyzed the similarities and differences in HRD roles,…
Single unit approaches to human vision and memory.
Kreiman, Gabriel
2007-08-01
Research on the visual system focuses on using electrophysiology, pharmacology, and other invasive tools in animal models. Non-invasive tools such as scalp electroencephalography and imaging make it possible to examine humans but offer much lower spatial and/or temporal resolution. Under special clinical conditions, it is possible to monitor single-unit activity in humans when invasive procedures are required for particular pathological conditions, including epilepsy and Parkinson's disease. We review our knowledge of the visual system and visual memories in the human brain at the single-neuron level. The properties of the human brain seem to be broadly compatible with the knowledge derived from animal models. The possibility of examining high-resolution brain activity in conscious human subjects allows investigators to ask novel questions that are challenging to address in animal models.
NASA Astrophysics Data System (ADS)
Rama, María Angeles; Pérez, María Victoria; Bao, Carmen; Flores-Arias, María Teresa; Gómez-Reino, Carlos
2005-05-01
Gradient-index (GRIN) models of the human lens have received wide attention in optometry and vision sciences for considering the effect of inhomogeneity of the refractive index on the optical properties of the lens. This paper uses the continuous asymmetric bi-elliptical model to determine analytically cardinal elements, magnifications and refractive power of the lens by the axial and field rays in order to study the paraxial light propagation through the human lens from its GRIN nature.
Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxiao
Previous studies of ocular chromatic aberrations have concentrated on the chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina, and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. This insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how can vision be improved by correcting ocular chromatic aberration? (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by interocular differences in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF, and the entrance pupil of the eye. In the retinal image after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators.
The results show that: (1) A simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) The ocular CDM is measurable and varies among individuals; (3) All existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; (4) Ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.
Competency-based training model for human resource management and development in public sector
NASA Astrophysics Data System (ADS)
Prabawati, I.; Meirinawati; Oktariyanda, T. A.
2018-01-01
Human Resources (HR) is a very important factor in an organization, so human resources are required to have the ability, skill, or competence to carry out the vision and mission of the organization. Competence comprises a number of attributes attached to the individual, a combination of knowledge, skills, and behaviors that can be used as a means to improve performance. Given the demand that human resources have such knowledge, skills, and abilities, the development of human resources in public organizations is necessary. One form of human resource development is Competency-Based Training (CBT). CBT focuses on three issues, namely skills, competencies, and competency standards. There are five strategies in the implementation of CBT, namely: organizational scanning, strategic planning, competency profiling, competency gap analysis, and competency development. Finally, through CBT the employees within an organization can reduce or eliminate the gap between existing and potential performance, improving the knowledge, expertise, and skills that support achieving the vision and mission of the organization.
Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems
NASA Technical Reports Server (NTRS)
Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.
1992-01-01
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
FReD: the floral reflectance database--a web portal for analyses of flower colour.
Arnold, Sarah E J; Faruq, Samia; Savolainen, Vincent; McOwan, Peter W; Chittka, Lars
2010-12-10
Flower colour is of great importance in various fields relating to floral biology and pollinator behaviour. However, subjective human judgements of flower colour may be inaccurate and are irrelevant to the ecology and vision of the flower's pollinators. For precise, detailed information about the colours of flowers, a full reflectance spectrum for the flower of interest should be used rather than relying on such human assessments. The Floral Reflectance Database (FReD) has been developed to make an extensive collection of such data available to researchers. It is freely available at http://www.reflectance.co.uk. The database allows users to download spectral reflectance data for flower species collected from all over the world. These could, for example, be used in modelling interactions between pollinator vision and plant signals, or analyses of flower colours in various habitats. The database contains functions for calculating flower colour loci according to widely-used models of bee colour space, reflectance graphs of the spectra and an option to search for flowers with similar colours in bee colour space. The Floral Reflectance Database is a valuable new tool for researchers interested in the colours of flowers and their association with pollinator colour vision, containing raw spectral reflectance data for a large number of flower species.
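The colour-locus calculation the database offers can be sketched with Chittka's widely used colour-hexagon model of bee colour space. Everything below (Gaussian receptor curves, step-function flower spectrum, flat illuminant and background) is a synthetic stand-in for illustration, not FReD's actual data or code.

```python
import numpy as np

WL = np.arange(300, 701, 5).astype(float)  # sampled wavelengths (nm)

def gaussian_sensitivity(peak, width=40.0):
    """Toy receptor spectral sensitivity; real curves come from measurements."""
    return np.exp(-0.5 * ((WL - peak) / width) ** 2)

# Trichromatic bee receptors: UV, blue, green (approximate peak positions).
SENS = {"uv": gaussian_sensitivity(350),
        "blue": gaussian_sensitivity(440),
        "green": gaussian_sensitivity(540)}

def quantum_catch(reflectance, sensitivity):
    # Uniform wavelength grid with a flat illuminant; the grid spacing
    # cancels in the catch ratios below, so a plain sum suffices.
    return np.sum(reflectance * sensitivity)

def hexagon_locus(stim_refl, bg_refl):
    """Colour-hexagon (x, y) after von Kries adaptation to the background."""
    E = {}
    for name, s in SENS.items():
        q = quantum_catch(stim_refl, s) / quantum_catch(bg_refl, s)
        E[name] = q / (q + 1.0)  # receptor excitation, saturates at 1
    x = (np.sqrt(3) / 2.0) * (E["green"] - E["uv"])
    y = E["blue"] - 0.5 * (E["uv"] + E["green"])
    return x, y

background = np.full_like(WL, 0.1)            # green-foliage stand-in
flower = np.where(WL > 500, 0.7, 0.1)         # reflects long wavelengths

print(hexagon_locus(background, background))  # background plots at the origin
print(hexagon_locus(flower, background))      # long-wavelength flower: x > 0
```

A stimulus identical to the adapting background lands at the hexagon's achromatic origin, which is a convenient sanity check for any such implementation.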
Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.
Xiong, Chunshui; Huang, Lei; Liu, Changping
2014-01-01
Most existing vision-based methods for gaze tracking require a tedious calibration process in which subjects must fixate on one or more specific points in space. However, such cooperation is hard to obtain, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflections (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for the measurement of visual acuity in human infants.
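The classification stage described above can be sketched as follows. The 2-D features, the class names, and the one-Gaussian-per-class simplification are illustrative assumptions, not the authors' implementation (their GMM may use several components per class and higher-order PCCR features).

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_gaussian(X):
    """Fit mean and (regularized) covariance to labelled training features."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(x, mu, cov):
    # Gaussian log-density up to the shared constant structure.
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

# Synthetic 2-D PCCR-style vectors for two hypothetical gaze behaviours,
# standing in for real labelled data from healthy-eyed subjects.
fixate = rng.normal([0.0, 0.0], 0.1, size=(200, 2))
away = rng.normal([1.0, 0.8], 0.1, size=(200, 2))
models = {"fixate": fit_gaussian(fixate), "away": fit_gaussian(away)}

def classify(x):
    """Assign a new gaze sample to the maximum-likelihood class."""
    return max(models, key=lambda k: log_likelihood(x, *models[k]))

print(classify(np.array([0.05, -0.02])))  # near the "fixate" cluster
print(classify(np.array([0.95, 0.85])))   # near the "away" cluster
```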
Modelling Subjectivity in Visual Perception of Orientation for Image Retrieval.
ERIC Educational Resources Information Center
Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.
2003-01-01
Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…
Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision
NASA Astrophysics Data System (ADS)
Rojer, Alan S.; Schwartz, Eric L.
1991-02-01
Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems, as opposed to the general nature of biological vision architectures. In the present paper we provide a review of current knowledge of primate spatial vision design parameters, and we present recent experimental and modeling work from our laboratory which has characterized some of the spatial architectures of the primate visual system. In particular, we review experimental and modeling studies which indicate that: (1) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping, a refinement of our previous complex logarithmic model, whose simplest analytic approximation is the complex logarithm function, and which provides the best current summary of this feature of the primate visual system; (2) the columnar sub-structure of primate visual cortex can be well summarized by a model based on band-pass-filtered white noise. We also refer to ongoing work in our lab which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the 'proto-column' algorithm. This work provides a reference-point for current engineering approaches to novel architectures for
First-in-Human Trial of a Novel Suprachoroidal Retinal Prosthesis
Ayton, Lauren N.; Blamey, Peter J.; Guymer, Robyn H.; Luu, Chi D.; Nayagam, David A. X.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Yeoh, Jonathan; McCombe, Mark F.; Briggs, Robert J.; Opie, Nicholas L.; Villalobos, Joel; Dimitrov, Peter N.; Varsamidis, Mary; Petoe, Matthew A.; McCarthy, Chris D.; Walker, Janine G.; Barnes, Nick; Burkitt, Anthony N.; Williams, Chris E.; Shepherd, Robert K.; Allen, Penelope J.
2014-01-01
Retinal visual prostheses (“bionic eyes”) have the potential to restore vision to blind or profoundly vision-impaired patients. The medical bionic technology used to design, manufacture and implant such prostheses is still in its relative infancy, with various technologies and surgical approaches being evaluated. We hypothesised that a suprachoroidal implant location (between the sclera and choroid of the eye) would provide significant surgical and safety benefits for patients, allowing them to maintain preoperative residual vision as well as gaining prosthetic vision input from the device. This report details the first-in-human Phase 1 trial to investigate the use of retinal implants in the suprachoroidal space in three human subjects with end-stage retinitis pigmentosa. The success of the suprachoroidal surgical approach and its associated safety benefits, coupled with twelve-month post-operative efficacy data, holds promise for the field of vision restoration. Trial Registration Clinicaltrials.gov NCT01603576 PMID:25521292
3D Data Acquisition Platform for Human Activity Understanding
2016-03-02
In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross-validate ... multimodality data acquisition, and address fundamental research problems of representation and invariant description of 3D data, human motion modeling and ... The support for the acquisition of such research instrumentation has significantly facilitated our current and future research and education ...
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
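The pooling-then-decoding idea behind "compulsory averaging" can be sketched with a toy population code. The tuning width, stimulus orientations, and population-vector decoder below are illustrative choices, not the authors' fitted model.

```python
import numpy as np

# Preferred orientations tile the 0-180 degree orientation circle.
prefs = np.deg2rad(np.arange(0, 180, 1.0))

def tuning(theta, kappa=4.0):
    """von Mises tuning on the doubled angle (orientation has period 180)."""
    return np.exp(kappa * (np.cos(2 * (prefs - theta)) - 1))

def decode(responses):
    """Population-vector readout on the doubled-angle circle, in degrees."""
    z = np.sum(responses * np.exp(2j * prefs))
    return np.rad2deg(np.angle(z) / 2) % 180

target, flanker = np.deg2rad(10.0), np.deg2rad(50.0)

# Uncrowded: the target alone drives the population and is read out veridically.
print(decode(tuning(target)))                    # ~10 degrees

# Crowded: target and flanker responses are pooled before decoding, so the
# readout lands between the two orientations -- compulsory averaging.
print(decode(tuning(target) + tuning(flanker)))  # ~30 degrees
```

By symmetry the pooled readout is the midpoint of the two orientations; with unequal stimulus contrasts the same decoder produces weighted averaging.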
Gootwine, Elisha; Abu-Siam, Mazen; Obolensky, Alexey; Rosov, Alex; Honig, Hen; Nitzan, Tali; Shirak, Andrey; Ezra-Elia, Raaya; Yamin, Esther; Banin, Eyal; Averbukh, Edward; Hauswirth, William W; Ofri, Ron; Seroussi, Eyal
2017-03-01
Applying CNGA3 gene augmentation therapy to cure a novel causative mutation underlying achromatopsia (ACHM) in sheep. Impaired vision that spontaneously appeared in newborn lambs was characterized by behavioral, electroretinographic (ERG), and histologic techniques. Deep-sequencing reads of an affected lamb and an unaffected lamb were compared within conserved genomic regions orthologous to human genes involved in similar visual impairment. Observed nonsynonymous amino acid substitutions were classified by their deleteriousness score. The putative causative mutation was assessed by producing compound CNGA3 heterozygotes and applying gene augmentation therapy using the orthologous human cDNA. Behavioral assessment revealed day blindness, and subsequent ERG examination showed attenuated photopic responses. Histologic and immunohistochemical examination of affected sheep eyes did not reveal degeneration, and cone photoreceptors expressing CNGA3 were present. Bioinformatics and sequencing analyses suggested a c.1618G>A, p.Gly540Ser substitution in the GMP-binding domain of CNGA3 as the causative mutation. This was confirmed by genetic concordance test and by genetic complementation experiment: All five compound CNGA3 heterozygotes, carrying both p.Arg236* and p.Gly540Ser mutations in CNGA3, were day-blind. Furthermore, subretinal delivery of the intact human CNGA3 gene using an adeno-associated viral vector (AAV) restored photopic vision in two affected p.Gly540Ser homozygous rams. The c.1618G>A, p.Gly540Ser substitution in CNGA3 was identified as the causative mutation for a novel form of ACHM in Awassi sheep. Gene augmentation therapy restored vision in the affected sheep. This novel mutation provides a large-animal model that is valid for most human CNGA3 ACHM patients; the majority of them carry missense rather than premature-termination mutations.
A vision and strategy for the virtual physiological human: 2012 update
Hunter, Peter; Chapman, Tara; Coveney, Peter V.; de Bono, Bernard; Diaz, Vanessa; Fenner, John; Frangi, Alejandro F.; Harris, Peter; Hose, Rod; Kohl, Peter; Lawford, Pat; McCormack, Keith; Mendes, Miriam; Omholt, Stig; Quarteroni, Alfio; Shublaq, Nour; Skår, John; Stroetmann, Karl; Tegner, Jesper; Thomas, S. Randall; Tollis, Ioannis; Tsamardinos, Ioannis; van Beek, Johannes H. G. M.; Viceconti, Marco
2013-01-01
European funding under Framework 7 (FP7) for the virtual physiological human (VPH) project has been in place now for 5 years. The VPH Network of Excellence (NoE) has been set up to help develop common standards, open source software, freely accessible data and model repositories, and various training and dissemination activities for the project. It is also working to coordinate the many clinically targeted projects that have been funded under the FP7 calls. An initial vision for the VPH was defined by the FP6 STEP project in 2006. In 2010, we wrote an assessment of the accomplishments of the first two years of the VPH in which we considered the biomedical science, healthcare and information and communications technology challenges facing the project (Hunter et al. 2010 Phil. Trans. R. Soc. A 368, 2595–2614 (doi:10.1098/rsta.2010.0048)). We proposed that a not-for-profit professional umbrella organization, the VPH Institute, should be established as a means of sustaining the VPH vision beyond the time-frame of the NoE. Here, we update and extend this assessment and in particular address the following issues raised in response to Hunter et al.: (i) a vision for the VPH updated in the light of progress made so far, (ii) biomedical science and healthcare challenges that the VPH initiative can address while also providing innovation opportunities for the European industry, and (iii) external changes needed in regulatory policy and business models to realize the full potential that the VPH has to offer to industry, clinics and society generally. PMID:24427536
Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation
Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin
2013-01-01
With the wide application of vision-based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144
Advances in understanding the molecular basis of the first steps in color vision
Hofmann, Lukas; Palczewski, Krzysztof
2015-01-01
Serving as one of our primary environmental inputs, vision is the most sophisticated sensory system in humans. Here, we present recent findings derived from energetics, genetics and physiology that provide a more advanced understanding of color perception in mammals. The energetics of cis-trans isomerization of 11-cis-retinal accounts for color perception within a narrow region of the electromagnetic spectrum and explains how human eyes can absorb light in the near infrared (IR) range. Structural homology models of visual pigments reveal complex interactions of the protein moieties with the light-sensitive chromophore 11-cis-retinal and show that certain color-blinding mutations impair secondary structural elements of these G protein-coupled receptors (GPCRs). Finally, we identify unsolved critical aspects of color tuning that require future investigation. PMID:26187035
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
Neuro-ophthalmic manifestations of cerebrovascular accidents.
Ghannam, Alaa S Bou; Subramanian, Prem S
2017-11-01
Ocular functions can be affected in almost any type of cerebrovascular accident (CVA), creating a burden on the patient and family and limiting functionality. The present review summarizes the different ocular outcomes after stroke, divided into three categories: vision, ocular motility, and visual perception. We also discuss interventions that have been proposed to help restore vision and perception after CVA. Interventions that might help expand or compensate for visual field loss and visuospatial neglect include explorative saccade training, prisms, visual restoration therapy (VRT), and transcranial direct current stimulation (tDCS). VRT makes use of neuroplasticity, which has shown efficacy in animal models but remains controversial in human studies. CVAs can lead to decreased visual acuity, visual field loss, ocular motility abnormalities, and visuospatial perception deficits. Although ocular motility problems can be corrected with surgery, vision and perception deficits are more difficult to overcome. Interventions to restore or compensate for visual field deficits are controversial despite theoretical underpinnings, animal model evidence, and case reports of their efficacy.
Quantitative systems toxicology
Bloomingdale, Peter; Housand, Conrad; Apgar, Joshua F.; Millard, Bjorn L.; Mager, Donald E.; Burke, John M.; Shah, Dhaval K.
2017-01-01
The overarching goal of modern drug development is to optimize therapeutic benefits while minimizing adverse effects. However, inadequate efficacy and safety concerns remain the major causes of drug attrition in clinical development. For the past 80 years, toxicity testing has consisted of evaluating the adverse effects of drugs in animals to predict human health risks. The U.S. Environmental Protection Agency recognized the need to develop innovative toxicity testing strategies and asked the National Research Council to develop a long-range vision and strategy for toxicity testing in the 21st century. The vision aims to reduce the use of animals and drug development costs through the integration of computational modeling and in vitro experimental methods that evaluate the perturbation of toxicity-related pathways. Towards this vision, collaborative quantitative systems pharmacology and toxicology (QSP/QST) modeling endeavors have been initiated amongst numerous organizations worldwide. In this article, we discuss how quantitative structure-activity relationship (QSAR), network-based, and pharmacokinetic/pharmacodynamic modeling approaches can be integrated into the framework of QST models. Additionally, we review the application of QST models to predict cardiotoxicity and hepatotoxicity of drugs throughout their development. Cell- and organ-specific QST models are likely to become an essential component of modern toxicity testing, and provide a solid foundation towards determining individualized therapeutic windows to improve patient safety. PMID:29308440
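The pharmacokinetic building block mentioned above can be illustrated with a minimal one-compartment IV-bolus model; the dose, volume of distribution, and elimination rate below are arbitrary illustrative values, not parameters from any QST model.

```python
import math

dose_mg = 100.0
volume_l = 50.0   # volume of distribution
k_elim = 0.1      # first-order elimination rate constant (1/h)

def concentration(t_hours):
    """C(t) = (Dose/V) * exp(-k*t): plasma concentration after an IV bolus."""
    return (dose_mg / volume_l) * math.exp(-k_elim * t_hours)

half_life = math.log(2) / k_elim  # t_1/2 = ln(2)/k

print(round(concentration(0.0), 3))        # 2.0 mg/L initial concentration
print(round(concentration(half_life), 3))  # 1.0 mg/L after one half-life
```

In QSP/QST practice such compartments are coupled to toxicity-pathway dynamics, so the exposure term driving a pathway model is exactly this kind of C(t).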
Interdisciplinary multisensory fusion: design lessons from professional architects
NASA Astrophysics Data System (ADS)
Geiger, Ray W.; Snell, J. T.
1992-11-01
Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. Human efficacy for innovation in architectural design has always reflected more the projected perceptual vision of the designer vis-à-vis the hierarchical spirit of the design process. In pursuing better ways to build and/or design things, we have found surprising success in exploring certain more esoteric applications. One of those applications is the vision of an artistic approach in and around creative problem solving. Our evaluation in research into vision and visual systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those very qualitative and quantitative parameters of engineering design, particularly as it relates to multi-faceted and interdisciplinary design practice. Discussion will cover areas of cognitive ergonomics, natural modeling sources, and an open architectural process of means and goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process. One hypothesis is that the kinematic simulation of perceived connections between hard and soft sciences, centering on the life sciences and life in general, has become a very effective foundation for design theory and application.
Human Operator Interface with FLIR Displays.
1980-03-01
... model (Ratches et al., 1976) used to evaluate FLIR system performance. The research ... the minimum resolvable temperature (MRT) paradigm to test two modeled FLIR systems. Twelve male subjects with 20/20 uncorrected vision served as ... varying levels of size, contrast, noise, and MTF. The test results were compared with the NVL predictive model (Ratches et al., 1975) used to
A Return to the Human in Humanism: A Response to Hansen's Humanistic Vision
ERIC Educational Resources Information Center
Lemberger, Matthew E.
2012-01-01
In his extension of the humanistic vision, Hansen (2012) recommends that counseling practitioners and scholars adopt operations that are consistent with his definition of a multiple-perspective philosophy. Alternatively, the author of this article believes that Hansen has reduced the capacity of the human to interpret meaning through quantitative…
Spatial-frequency cutoff requirements for pattern recognition in central and peripheral vision
Kwon, MiYoung; Legge, Gordon E.
2011-01-01
It is well known that object recognition requires spatial frequencies exceeding some critical cutoff value. People with central scotomas who rely on peripheral vision have substantial difficulty with reading and face recognition. Deficiencies of pattern recognition in peripheral vision might result in higher cutoff requirements and may contribute to the functional problems of people with central-field loss. Here we asked about differences in spatial-cutoff requirements in central and peripheral vision for letter and face recognition. The stimuli were the 26 letters of the English alphabet and 26 celebrity faces. Each image was blurred using a low-pass filter in the spatial frequency domain. Critical cutoffs (defined as the minimum low-pass filter cutoff yielding 80% accuracy) were obtained by measuring recognition accuracy as a function of cutoff (in cycles per object). Our data showed that critical cutoffs increased from central to peripheral vision by 20% for letter recognition and by 50% for face recognition. We asked whether these differences could be accounted for by central/peripheral differences in the contrast sensitivity function (CSF). We addressed this question by implementing an ideal-observer model which incorporates empirical CSF measurements and tested the model on letter and face recognition. The success of the model indicates that central/peripheral differences in the cutoff requirements for letter and face recognition can be accounted for by the information content of the stimulus limited by the shape of the human CSF, combined with a source of internal noise and followed by an optimal decision rule. PMID:21854800
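The stimulus manipulation described above can be sketched as an ideal low-pass filter in the Fourier domain. The cutoff here is expressed in cycles per image, standing in for the paper's cycles per object; the random test image is a placeholder for the letter and face stimuli.

```python
import numpy as np

def low_pass(image, cutoff_cycles):
    """Zero all spatial frequencies above `cutoff_cycles` per image width."""
    n = image.shape[0]
    f = np.fft.fftfreq(n) * n                  # frequencies in cycles/image
    fx, fy = np.meshgrid(f, f)
    radius = np.sqrt(fx**2 + fy**2)            # radial frequency
    spectrum = np.fft.fft2(image)
    spectrum[radius > cutoff_cycles] = 0.0     # ideal (brick-wall) cutoff
    return np.real(np.fft.ifft2(spectrum))

rng = np.random.default_rng(1)
img = rng.random((64, 64))                     # placeholder stimulus
blurred = low_pass(img, cutoff_cycles=8)

# Sanity check: no energy survives above the cutoff.
spec = np.fft.fft2(blurred)
f = np.fft.fftfreq(64) * 64
fx, fy = np.meshgrid(f, f)
print(np.max(np.abs(spec[np.sqrt(fx**2 + fy**2) > 8])))  # ~0
```

Measuring recognition accuracy as a function of `cutoff_cycles` and finding the smallest cutoff reaching 80% correct would reproduce the critical-cutoff procedure.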
NASA Astrophysics Data System (ADS)
Labin, Amichai M.; Safuri, Shadi K.; Ribak, Erez N.; Perlman, Ido
2014-07-01
Vision starts with the absorption of light by the retinal photoreceptors—cones and rods. However, due to the ‘inverted’ structure of the retina, the incident light must propagate through reflecting and scattering cellular layers before reaching the photoreceptors. It has been recently suggested that Müller cells function as optical fibres in the retina, transferring light illuminating the retinal surface onto the cone photoreceptors. Here we show that Müller cells are wavelength-dependent wave-guides, concentrating the green-red part of the visible spectrum onto cones and allowing the blue-purple part to leak onto nearby rods. This phenomenon is observed in the isolated retina and explained by a computational model, for the guinea pig and the human parafoveal retina. Therefore, light propagation by Müller cells through the retina can be considered as an integral part of the first step in the visual process, increasing photon absorption by cones while minimally affecting rod-mediated vision.
Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.
2017-01-01
Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management. PMID:28338047
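The detection step described above can be sketched as thresholding plus connected-component size filtering. The temperature threshold, size limits, and synthetic scene below are invented for illustration, not the study's calibrated values.

```python
import numpy as np
from scipy import ndimage

def count_seals(thermal, temp_thresh=10.0, min_px=4, max_px=400):
    """Count warm blobs whose pixel area falls in a plausible seal-size range."""
    warm = thermal > temp_thresh                      # warm bodies vs cold ground
    labels, n = ndimage.label(warm)                   # connected components
    sizes = ndimage.sum(warm, labels, range(1, n + 1))
    return int(np.sum((sizes >= min_px) & (sizes <= max_px)))

# Synthetic "thermal image": 2 C background with three warm, seal-sized
# blobs and one single-pixel hot spot (noise) that the size filter rejects.
scene = np.full((100, 100), 2.0)
scene[10:14, 10:14] = 25.0
scene[40:45, 60:64] = 24.0
scene[80:83, 20:25] = 26.0
scene[5, 90] = 30.0  # noise speck, below min_px

print(count_seals(scene))  # 3
```

The real pipeline also used shape cues to split touching animals in aggregations; a size-only filter like this undercounts when blobs merge.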
The Ecosystem Services Research Program of the EPA Office of Research and Development envisions a comprehensive theory and practice for characterizing, quantifying and valuing ecosystem services and their relationship to human well-being. This vision of future environmental deci...
The Urban Mission: Linking Fresno State and the Community
ERIC Educational Resources Information Center
Culver-Dockins, Natalie; McCarthy, Mary Ann; Brogan, Amy; Karsevar, Kent; Tatsumura, Janell; Whyte, Jenny; Woods, R. Sandie
2011-01-01
The "four spheres" model of transformation, as viewed through the lens of the urban mission of California State University, Fresno, is examined through current projects in economic development, infrastructure development, human development, and the fourth sphere, which encompasses the broad vision. Local projects will be highlighted.
Old and new results about single-photon sensitivity in human vision
NASA Astrophysics Data System (ADS)
Nelson, Philip C.
2016-04-01
It is sometimes said that ‘our eyes can see single photons’. This article begins by finding a more precise version of that claim and reviewing evidence gathered for it up to around 1985 in two distinct realms, those of human psychophysics and single-cell physiology. Finding a single framework that accommodates both kinds of result is then a nontrivial challenge, and one that sets severe quantitative constraints on any model of dim-light visual processing. This article presents one such model and compares it to a recent experiment.
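The pre-1985 psychophysics the article reviews rests on the classic Hecht–Shlaer–Pirenne analysis: a flash is "seen" when at least K photons are absorbed, and the absorbed count is Poisson-distributed. A short sketch of that frequency-of-seeing curve (the specific K and mean values below are illustrative, not the article's fitted numbers):

```python
import math

def prob_seeing(mean_absorbed, k):
    """P(at least k photons absorbed) for a Poisson-distributed count.

    This is the frequency-of-seeing curve of the classic
    Hecht-Shlaer-Pirenne analysis; the threshold quantum requirement
    k is typically fitted in the range of roughly 5-8.
    """
    p_below = sum(math.exp(-mean_absorbed) * mean_absorbed**i / math.factorial(i)
                  for i in range(k))
    return 1.0 - p_below

# Probability of seeing rises steeply with mean absorbed photons.
for m in (2.0, 6.0, 12.0):
    print(round(prob_seeing(m, 6), 3))
```

The slope of this curve as a function of flash intensity is what constrains K: a larger quantum requirement predicts a steeper psychometric function, which is one of the quantitative constraints any model of dim-light processing must satisfy.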
Assessing commercial feasibility: a practical and ethical prerequisite for human clinical testing.
Kirk, Dwayne D; Robert, Jason Scott
2005-01-01
This article proposes that an assessment of commercial feasibility should be integrated as a prerequisite for human clinical testing to improve the quality and relevance of materials being investigated, as an ethical aspect for human subject protection, and as a means of improving accountability where clinical development is funded on promises of successful translational research. A commercial feasibility analysis is not currently required to justify human clinical testing, but is assumed to have been conducted by industry participants, and use of public funds for clinical trials should be defensible in the same manner. Plant-made vaccines (PMVs) are offered in this discussion as a model for evaluating the relevance of commercial feasibility before human clinical testing. PMVs have been proposed as a potential solution for global health, based on a vision of immunizing the world against many infectious diseases. Such a vision depends on translating current knowledge in plant science and immunology into a potent vaccine that can be readily manufactured and distributed to those in need. But new biologics such as PMVs may fail to be manufactured due to financial or logistical reasons--particularly for orphan diseases without sufficient revenue incentive for industry investment--regardless of the effectiveness which might be demonstrated in human clinical testing. Moreover, all potential instruments of global health depend on translational agents well beyond the lab in order to reach those in need. A model comprising five criteria for commercial feasibility is suggested for inclusion by regulators and ethics review boards as part of the review process prior to approval of human clinical testing. Use of this model may help to facilitate safe and appropriate translational research and bring more immediate benefits to those in need.
Box spaces in pictorial space: linear perspective versus templates
NASA Astrophysics Data System (ADS)
de Ridder, Huib; Pont, Sylvia C.
2012-03-01
In the past decades perceptual (or perceived) image quality has been one of the most important criteria for evaluating digitally processed image and video content. With the growing popularity of new media like stereoscopic displays there is a tendency to replace image quality with viewing experience as the ultimate criterion. Adopting such a high-level psychological criterion calls for a rethinking of the premises underlying human judgment. One premise is that perception is about accurately reconstructing the physical world in front of you ("inverse optics"). That is, human vision is striving for veridicality. The present study investigated one of its consequences, namely, that linear perspective will always yield the correct description of the perceived 3D geometry in 2D images. To this end, human observers adjusted the frontal view of a wireframe box on a television screen so as to look equally deep and wide (i.e. to look like a cube) or twice as deep as wide. In a number of stimulus configurations, the results showed huge deviations from veridicality suggesting that the inverse optics model fails. Instead, the results seem to be more in line with a model of "vision as optical interface".
Advances in understanding the molecular basis of the first steps in color vision.
Hofmann, Lukas; Palczewski, Krzysztof
2015-11-01
Serving as one of our primary environmental inputs, vision is the most sophisticated sensory system in humans. Here, we present recent findings derived from energetics, genetics and physiology that provide a more advanced understanding of color perception in mammals. Energetics of cis-trans isomerization of 11-cis-retinal accounts for color perception in the narrow region of the electromagnetic spectrum and how human eyes can absorb light in the near infrared (IR) range. Structural homology models of visual pigments reveal complex interactions of the protein moieties with the light sensitive chromophore 11-cis-retinal and that certain color blinding mutations impair secondary structural elements of these G protein-coupled receptors (GPCRs). Finally, we identify unsolved critical aspects of color tuning that require future investigation.
Blake, Randolph; Wilson, Hugh
2010-01-01
This essay reviews major developments, both empirical and theoretical, in the field of binocular vision during the last 25 years. We limit our survey primarily to work on human stereopsis, binocular rivalry and binocular contrast summation, with discussion where relevant of single-unit neurophysiology and human brain imaging. We identify several key controversies that have stimulated important work on these problems. In the case of stereopsis those controversies include position versus phase encoding of disparity, dependence of disparity limits on spatial scale, role of occlusion in binocular depth and surface perception, and motion in 3D. In the case of binocular rivalry, controversies include eye versus stimulus rivalry, role of “top-down” influences on rivalry dynamics, and the interaction of binocular rivalry and stereopsis. Concerning binocular contrast summation, the essay focuses on two representative models that highlight the evolving complexity in this field of study. PMID:20951722
FReD: The Floral Reflectance Database — A Web Portal for Analyses of Flower Colour
Savolainen, Vincent; McOwan, Peter W.; Chittka, Lars
2010-01-01
Background Flower colour is of great importance in various fields relating to floral biology and pollinator behaviour. However, subjective human judgements of flower colour may be inaccurate and are irrelevant to the ecology and vision of the flower's pollinators. For precise, detailed information about the colours of flowers, a full reflectance spectrum for the flower of interest should be used rather than relying on such human assessments. Methodology/Principal Findings The Floral Reflectance Database (FReD) has been developed to make an extensive collection of such data available to researchers. It is freely available at http://www.reflectance.co.uk. The database allows users to download spectral reflectance data for flower species collected from all over the world. These could, for example, be used in modelling interactions between pollinator vision and plant signals, or analyses of flower colours in various habitats. The database contains functions for calculating flower colour loci according to widely-used models of bee colour space, reflectance graphs of the spectra and an option to search for flowers with similar colours in bee colour space. Conclusions/Significance The Floral Reflectance Database is a valuable new tool for researchers interested in the colours of flowers and their association with pollinator colour vision, containing raw spectral reflectance data for a large number of flower species. PMID:21170326
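The "flower colour loci in bee colour space" that FReD computes follow widely-used models such as Chittka's colour hexagon, in which the three bee photoreceptor excitations map to a 2D locus. A minimal sketch of that mapping, assuming the standard hexagon coordinates (the function names are mine, and real use would first integrate reflectance spectra against receptor sensitivities and the illuminant):

```python
import math

def excitation(quantum_catch):
    """Hyperbolic transduction: E = q / (q + 1), with q the quantum
    catch relative to the adapting background."""
    return quantum_catch / (quantum_catch + 1.0)

def hexagon_locus(e_uv, e_blue, e_green):
    """Colour-hexagon coordinates from UV, blue and green receptor
    excitations in [0, 1] (Chittka-style bee colour space)."""
    x = (math.sqrt(3) / 2.0) * (e_green - e_uv)
    y = e_blue - 0.5 * (e_uv + e_green)
    return x, y

# Equal excitations of all three receptors land at the achromatic centre.
print(hexagon_locus(0.5, 0.5, 0.5))  # (0.0, 0.0)
```

Distances between loci in this space approximate perceptual colour differences for the bee, which is what makes the database's "search for flowers with similar colours in bee colour space" function possible.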
Maria Montessori's Cosmic Vision, Cosmic Plan, and Cosmic Education
ERIC Educational Resources Information Center
Grazzini, Camillo
2013-01-01
This classic position of the breadth of Cosmic Education begins with a way of seeing the human's interaction with the world, continues on to the grandeur in scale of time and space of that vision, then brings the interdependency of life where each growing human becomes a participating adult. Mr. Grazzini confronts the laws of human nature in…
Donnelly, William
2008-11-01
To present a commercially available software tool for creating eye models to assist the development of ophthalmic optics and instrumentation, simulate ailments or surgery-induced changes, explore vision research questions, and provide assistance to clinicians in planning treatment or analyzing clinical outcomes. A commercially available eye modeling system was developed, the Advanced Human Eye Model (AHEM). Two mainstream optical software engines, ZEMAX (ZEMAX Development Corp) and ASAP (Breault Research Organization), were used to construct a similar software eye model and compared. The method of using the AHEM is described and various eye modeling scenarios are created. These scenarios consist of retinal imaging of targets and sources; optimization capability; spectacles, contact lens, and intraocular lens insertion and correction; Zernike surface deformation on the cornea; cataract simulation and scattering; a gradient index lens; a binocular mode; a retinal implant; system import/export; and ray path exploration. Similarity of the two different optical software engines showed validity to the mechanism of the AHEM. Metrics and graphical data are generated from the various modeling scenarios particular to their input specifications. The AHEM is a user-friendly commercially available software tool from Breault Research Organization, which can assist the design of ophthalmic optics and instrumentation, simulate ailments or refractive surgery-induced changes, answer vision research questions, or assist clinicians in planning treatment or analyzing clinical outcomes.
Chang, Bo
2016-01-01
Leber's congenital amaurosis (LCA) is an inherited retinal degenerative disease characterized by severe loss of vision in the first year of life. In addition to early vision loss, a variety of other eye-related abnormalities, including roving eye movements, deep-set eyes, and sensitivity to bright light, also occur with this disease. Many animal models of LCA are available, and their study has led to a better understanding of the pathology of the disease and to the development of therapeutic strategies aimed at curing or slowing down LCA. Mouse models, with their well-developed genetics and similarity to human physiology and anatomy, serve as powerful tools with which to investigate the etiology of human LCA. Such mice provide reproducible experimental systems for elucidating pathways of normal development and function, and for designing strategies and testing compounds for translational research and gene-based therapies aimed at delaying the disease's progression. In this chapter, I describe tools used in the discovery and evaluation of mouse models of LCA, including a Phoenix Image-Guided Optical Coherence Tomography (OCT) system and a Diagnosys Espion Visual Electrophysiology System. Three mouse models are described: the rd3 mouse model for LCA12 and LCA1, the rd12 mouse model for LCA2, and the rd16 mouse model for LCA10.
An 'Active Vision' Computational Model of Visual Search for Human-Computer Interaction
2009-01-01
semantically related (e.g. cashew, peanut, almond) or randomly grouped (e.g. elm, eraser, potato). Groups were either labeled or not. In some...colored region were further semantically related (e.g. nuts with candy, and clothing with cosmetics). Layouts always contained 28 eight groups with
The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1...
The Pepsi Challenge: Building a Leader-Driven Organization.
ERIC Educational Resources Information Center
Tichy, Noel M.; DeRose, Christopher
1996-01-01
PepsiCo's change-leadership model starts with a teachable point of view, showing trainees how to think in different terms, develop a point of view, test it, crystallize the vision, and implement it. The human resources department plays an important role in articulating the point of view. (SK)
Stupid Tutoring Systems, Intelligent Humans
ERIC Educational Resources Information Center
Baker, Ryan S.
2016-01-01
The initial vision for intelligent tutoring systems involved powerful, multi-faceted systems that would leverage rich models of students and pedagogies to create complex learning interactions. But the intelligent tutoring systems used at scale today are much simpler. In this article, I present hypotheses on the factors underlying this development,…
Spatial Resolution, Grayscale, and Error Diffusion Trade-offs: Impact on Display System Design
NASA Technical Reports Server (NTRS)
Gille, Jennifer L. (Principal Investigator)
1996-01-01
We examine technology trade-offs related to grayscale resolution, spatial resolution, and error diffusion for tessellated display systems. We present new empirical results from our psychophysical study of these trade-offs and compare them to the predictions of a model of human vision.
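The grayscale/spatial-resolution trade-off studied here hinges on error diffusion, which trades grey levels for spatial detail that the human visual system then integrates. A minimal Floyd–Steinberg sketch (a standard algorithm, not necessarily the exact variant used in the study):

```python
import numpy as np

def error_diffuse(img, levels=2):
    """Floyd-Steinberg error diffusion: quantise each pixel to the nearest
    of `levels` grey levels and push the residual onto unvisited neighbours.

    img: 2D float array in [0, 1]. Returns the halftoned image.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = round(old / step) * step          # nearest quantiser level
            out[y, x] = new
            err = old - new
            # classic 7/16, 3/16, 5/16, 1/16 neighbour weights
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

# A flat mid-grey field dithers to ~50% black/white pixels, preserving the
# mean intensity that a human observer integrates spatially.
flat = np.full((16, 16), 0.5)
halftone = error_diffuse(flat)
print(round(halftone.mean(), 2))
```

The psychophysical question in the abstract is exactly how far this trade can be pushed: the dithered image has only `levels` grey values per pixel, yet the spatially averaged percept can approximate many more, up to the limits set by display resolution and the eye's contrast sensitivity.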
An amodal shared resource model of language-mediated visual attention
Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk
2013-01-01
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967
Baker, Daniel H; Meese, Tim S
2016-07-27
Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
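The model structure described here, contrast energy (squared, weighted, summed signals plus internal noise) computed under several density-selective templates and combined by a MAX operation, can be sketched in a few lines. This is an illustrative toy, not the authors' fitted model: the noise level, template bank, and the assumption that late noise grows with the number of pooled mechanisms are all my simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def template_response(contrasts, template, noise_sd=0.1):
    """Contrast energy under one template: square, weight, sum; late noise
    grows with the number of mechanisms the template pools (an assumption)."""
    energy = np.sum(template * contrasts**2)
    return energy + rng.normal(0.0, noise_sd * np.sqrt(template.sum()))

def detect(contrasts, templates, noise_sd=0.1):
    """MAX across a bank of templates, each selective for a texture density."""
    return max(template_response(contrasts, t, noise_sd) for t in templates)

full = np.ones(16)                        # template matched to dense textures
sparse = np.zeros(16); sparse[::2] = 1.0  # template matched to 50%-density grids
stim = np.zeros(16); stim[::2] = 0.2      # sparse target, element contrast 0.2

resp_target = np.mean([detect(stim, [full, sparse]) for _ in range(2000)])
resp_blank = np.mean([detect(np.zeros(16), [full, sparse]) for _ in range(2000)])
print(resp_target > resp_blank)  # True
```

The point of the density-matched template is that it pools fewer mechanisms and therefore less noise than blanket integration over the whole area, which is how the model lets observers outperform a pure contrast integrator on sparse textures; the MAX over noisy mechanisms also steepens the psychometric function, as the abstract notes.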
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
In his 1960 paper Man-Computer Symbiosis, Licklider predicted that human brains and computing machines will be coupled in a tight partnership that will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. Today we are on the threshold of resurrecting the vision of symbiosis. While Licklider’s original vision suggested a co-equal relationship, here we discuss an updated vision, neo-symbiosis, in which the human holds a superordinate position in an intelligent human-computer collaborative environment. This paper was originally published as a journal article and is being published as a chapter in an upcoming book series, Advances in Novel Approaches in Cognitive Informatics and Natural Intelligence.
Genetics Home Reference: color vision deficiency
Sources for This Page: Deeb SS. Molecular genetics of color-vision deficiencies. Vis Neurosci. 2004 May-Jun;21(3):191-6. Review. Deeb SS. The molecular basis of variation in human color vision. Clin ...
Compact and wide-field-of-view head-mounted display
NASA Astrophysics Data System (ADS)
Uchiyama, Shoichi; Kamakura, Hiroshi; Karasawa, Joji; Sakaguchi, Masafumi; Furihata, Takeshi; Itoh, Yoshitaka
1997-05-01
A compact and wide field of view HMD having 1.32-in full color VGA poly-Si TFT LCDs and simple eyepieces much like LEEP optics has been developed. The total field of view is 80 deg with a 40 deg overlap in its central area. Each optical unit, which includes an LCD and eyepiece, is 46 mm in diameter and 42 mm in length. The total number of pixels is equivalent to (864 × 3) × 480. This HMD realizes its wide field of view and compact size by having a narrower binocular (overlap) area than that of commercialized HMDs. For this reason, it is expected that the frequency of monocular vision will be greater than in commercialized HMDs and in natural human vision. Therefore, we investigated the convergence state of the eyes while the monocular areas of this HMD were observed, by employing an EOG, and considered the suitability of this HMD to human vision. As a result, it was found that the convergence state during monocular vision was nearly equal to that during binocular vision. That is, this HMD may be well suited to human vision in terms of convergence.
Human olfaction: a constant state of change-blindness
Sela, Lee
2010-01-01
Paradoxically, although humans have a superb sense of smell, they don’t trust their nose. Furthermore, although human odorant detection thresholds are very low, only unusually high odorant concentrations spontaneously shift our attention to olfaction. Here we suggest that this lack of olfactory awareness reflects the nature of olfactory attention that is shaped by the spatial and temporal envelopes of olfaction. Regarding the spatial envelope, selective attention is allocated in space. Humans direct an attentional spotlight within spatial coordinates in both vision and audition. Human olfactory spatial abilities are minimal. Thus, with no olfactory space, there is no arena for olfactory selective attention. Regarding the temporal envelope, whereas vision and audition consist of nearly continuous input, olfactory input is discrete, made of sniffs widely separated in time. If similar temporal breaks are artificially introduced to vision and audition, they induce “change blindness”, a loss of attentional capture that results in a lack of awareness to change. Whereas “change blindness” is an aberration of vision and audition, the long inter-sniff-interval renders “change anosmia” the norm in human olfaction. Therefore, attentional capture in olfaction is minimal, as is human olfactory awareness. All this, however, does not diminish the role of olfaction through sub-attentive mechanisms allowing subliminal smells a profound influence on human behavior and perception. PMID:20603708
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
Spatial imaging in color and HDR: prometheus unchained
NASA Astrophysics Data System (ADS)
McCann, John J.
2013-03-01
The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950s, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.
Predicting Visual Disability in Glaucoma With Combinations of Vision Measures.
Lin, Stephanie; Mihailovic, Aleksandra; West, Sheila K; Johnson, Chris A; Friedman, David S; Kong, Xiangrong; Ramulu, Pradeep Y
2018-04-01
We characterized vision in glaucoma using seven visual measures, with the goals of determining the dimensionality of vision, and how many and which visual measures best model activity limitation. We analyzed cross-sectional data from 150 older adults with glaucoma, collecting seven visual measures: integrated visual field (VF) sensitivity, visual acuity, contrast sensitivity (CS), area under the log CS function, color vision, stereoacuity, and visual acuity with noise. Principal component analysis was used to examine the dimensionality of vision. Multivariable regression models using one, two, or three vision tests (and nonvisual predictors) were compared to determine which was best associated with Rasch-analyzed Glaucoma Quality of Life-15 (GQL-15) person measure scores. The participants had a mean age of 70.2 and integrated VF sensitivity of 26.6 dB, suggesting mild-to-moderate glaucoma. All seven vision measures loaded similarly onto the first principal component (eigenvectors, 0.220-0.442), which explained 56.9% of the variance in vision scores. In models for GQL scores, the maximum adjusted R² values obtained were 0.263, 0.296, and 0.301 when using one, two, and three vision tests in the models, respectively, though several models in each category had similar adjusted R² values. All three of the best-performing models contained CS. Vision in glaucoma is a multidimensional construct that can be described by several variably-correlated vision measures. Measuring more than two vision tests does not substantially improve models for activity limitation. A sufficient description of disability in glaucoma can be obtained using one to two vision tests, especially VF and CS.
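The dimensionality analysis in the abstract, seven correlated vision tests loading onto one dominant principal component, can be illustrated with synthetic data. The data below are fabricated for illustration only (one shared "vision" factor plus test-specific noise); the point is just the mechanics of reading variance explained off the correlation matrix's eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scores on 7 vision tests for 150 subjects: a single shared
# factor plus independent test-specific noise, mimicking the one dominant
# principal component the study reports.
n, k = 150, 7
factor = rng.normal(size=(n, 1))
scores = factor @ np.ones((1, k)) + 0.8 * rng.normal(size=(n, k))

# PCA via the eigendecomposition of the correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
explained = eigvals[0] / eigvals.sum()
print(explained > 0.5)  # True: first component carries most of the variance
```

When one component dominates like this, adding further tests to a regression buys little, which is the study's practical conclusion that one to two well-chosen tests suffice to model activity limitation.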
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori (MAP) estimation problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning to the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4.
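The pixel-level MAP formulation mentioned above amounts to comparing, per pixel, likelihood-times-prior under a background model and a foreground (player) model. A minimal sketch under the assumption of Gaussian intensity likelihoods, which is my simplification; the paper's actual likelihoods, priors, and the object-level refinement are not reproduced here:

```python
import numpy as np

def map_segment(frame, bg_mean, bg_sd, fg_mean, fg_sd, prior_fg=0.3):
    """Per-pixel MAP labelling: label a pixel foreground iff
    P(pixel | fg) * P(fg) > P(pixel | bg) * P(bg),
    with Gaussian likelihoods (compared in log space).
    """
    def log_gauss(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)
    log_fg = log_gauss(frame, fg_mean, fg_sd) + np.log(prior_fg)
    log_bg = log_gauss(frame, bg_mean, bg_sd) + np.log(1.0 - prior_fg)
    return log_fg > log_bg   # boolean foreground mask

# Dark background (~0.2 intensity), brighter player pixels (~0.8).
frame = np.array([[0.21, 0.19, 0.78],
                  [0.22, 0.81, 0.80]])
mask = map_segment(frame, bg_mean=0.2, bg_sd=0.05, fg_mean=0.8, fg_sd=0.05)
print(mask.astype(int))  # foreground where the frame is bright
```

In the paper's architecture, this per-pixel decision is then cleaned up at the object level using the spatial and temporal extent of the object (the active region), which is what makes the result robust enough for interaction.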
Art, Illusion and the Visual System.
ERIC Educational Resources Information Center
Livingstone, Margaret S.
1988-01-01
Describes the three part system of human vision. Explores the anatomical arrangement of the vision system from the eyes to the brain. Traces the path of various visual signals to their interpretations by the brain. Discusses human visual perception and its implications in art and design. (CW)
NASA Technical Reports Server (NTRS)
Hart, Sandra G.
1988-01-01
The state-of-the-art helicopter and its pilot are examined using the tools of human-factors analysis. The significant role of human error in helicopter accidents is discussed; the history of human-factors research on helicopters is briefly traced; the typical flight tasks are described; and the noise, vibration, and temperature conditions typical of modern military helicopters are characterized. Also considered are helicopter controls, cockpit instruments and displays, and the impact of cockpit design on pilot workload. Particular attention is given to possible advanced-technology improvements, such as control stabilization and augmentation, FBW and fly-by-light systems, multifunction displays, night-vision goggles, pilot night-vision systems, night-vision displays with superimposed symbols, target acquisition and designation systems, and aural displays. Diagrams, drawings, and photographs are provided.
The role of vision processing in prosthetic vision.
Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette
2012-01-01
Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with these limitations, prosthetic vision may still support functional performance sufficient for tasks that are central to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that the information critical to performing such tasks is delivered within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
Owls see in stereo much like humans do.
van der Willigen, Robert F
2011-06-10
While 3D experiences through binocular disparity sensitivity have acquired special status in the understanding of human stereo vision, much remains to be learned about how binocularity is put to use in animals. The owl provides an exceptional model to study stereo vision as it displays one of the highest degrees of binocular specialization throughout the animal kingdom. In a series of six behavioral experiments, equivalent to hallmark human psychophysical studies, I compiled an extensive body of stereo performance data from two trained owls. Computer-generated, binocular random-dot patterns were used to ensure pure stereo performance measurements. In all cases, I found that owls perform much like humans do, viz.: (1) disparity alone can evoke figure-ground segmentation; (2) selective use of "relative" rather than "absolute" disparity; (3) hyperacute sensitivity; (4) disparity processing allows for the avoidance of monocular feature detection prior to object recognition; (5) large binocular disparities are not tolerated; (6) disparity guides the perceptual organization of 2D shape. The robustness and very nature of these binocular disparity-based perceptual phenomena bear out that owls, like humans, exploit the third dimension to facilitate early figure-ground segmentation of tangible objects.
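The "computer-generated, binocular random-dot patterns" used to isolate pure stereo performance can be sketched directly: a target region is defined only by a horizontal shift between the two eyes' images, so neither image alone contains any monocular cue. The sizes and disparity below are arbitrary illustration values, not the study's stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_dot_pair(size=64, box=(16, 48), disparity=2):
    """Left/right random-dot images whose central square is shifted
    horizontally by `disparity` pixels in the right eye's view only.

    With no monocular cue, the square is visible solely through
    binocular disparity.
    """
    left = rng.integers(0, 2, size=(size, size))
    right = left.copy()
    r0, r1 = box
    # shift the central patch; refill the uncovered strip with fresh dots
    right[r0:r1, r0 + disparity:r1 + disparity] = left[r0:r1, r0:r1]
    right[r0:r1, r0:r0 + disparity] = rng.integers(0, 2,
                                                   size=(r1 - r0, disparity))
    return left, right

left, right = random_dot_pair()
# Each image alone is featureless noise; the pair differs only inside the box.
print((left != right).any(), (left[:16] != right[:16]).any())
```

Because the figure exists only in the binocular comparison, above-chance discrimination by the owls demonstrates disparity-based figure–ground segmentation, hallmark (1) in the list above.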
van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.
2010-01-01
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
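A minimal sketch of the population-coding idea, not the authors' full model: orientation signals from a target and a flanker are pooled before readout, so the decoded orientation lies between the two, illustrating "compulsory averaging". Tuning width, population size, and stimulus orientations are illustrative assumptions.

```python
import numpy as np

prefs = np.linspace(0, np.pi, 64, endpoint=False)  # preferred orientations

def response(theta, kappa=4.0):
    """Population response of orientation-tuned units (von Mises
    tuning on the doubled angle, since orientation is pi-periodic)."""
    return np.exp(kappa * np.cos(2 * (prefs - theta)))

def decode(pop):
    """Population-vector readout on the doubled angle."""
    z = np.sum(pop * np.exp(2j * prefs))
    return (np.angle(z) / 2) % np.pi

# Spatial integration: target and flanker signals are pooled before
# readout, so the decoded orientation is a compulsory average.
target, flanker = np.deg2rad(10), np.deg2rad(30)
pooled = response(target) + response(flanker)
print(np.rad2deg(decode(pooled)))  # ~20 degrees, between 10 and 30
```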
Feedback and feedforward adaptation to visuomotor delay during reaching and slicing movements.
Botzer, Lior; Karniel, Amir
2013-07-01
It has been suggested that the brain and in particular the cerebellum and motor cortex adapt to represent the environment during reaching movements under various visuomotor perturbations. It is well known that significant delay is present in neural conductance and processing; however, the possible representation of delay and adaptation to delayed visual feedback has been largely overlooked. Here we investigated the control of reaching movements in human subjects during an imposed visuomotor delay in a virtual reality environment. In the first experiment, when visual feedback was unexpectedly delayed, the hand movement overshot the end-point target, indicating a vision-based feedback control. Over the ensuing trials, movements gradually adapted and became accurate. When the delay was removed unexpectedly, movements systematically undershot the target, demonstrating that adaptation occurred within the vision-based feedback control mechanism. In a second experiment designed to broaden our understanding of the underlying mechanisms, we revealed similar after-effects for rhythmic reversal (out-and-back) movements. We present a computational model accounting for these results based on two adapted forward models, each tuned for a specific modality delay (proprioception or vision), and a third feedforward controller. The computational model, along with the experimental results, refutes delay representation in a pure forward vision-based predictor and suggests that adaptation occurred in the forward vision-based predictor, and concurrently in the state-based feedforward controller. Understanding how the brain compensates for conductance and processing delays is essential for understanding certain impairments concerning these neural delays as well as for the development of brain-machine interfaces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
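The overshoot under an unexpected visual delay can be illustrated with a toy simulation (mine, not the authors' computational model): a proportional vision-based feedback controller acting on a delayed position signal keeps moving the hand past the target, whereas with undelayed feedback the approach is monotone.

```python
def reach(delay, steps=200, gain=0.12, target=1.0):
    """Discrete-time reach with proportional feedback driven by a
    delayed position signal. With an unexpected delay the hand keeps
    moving past the target, qualitatively reproducing the overshoot."""
    x, history = 0.0, [0.0]
    for _ in range(steps):
        seen = history[max(0, len(history) - 1 - delay)]  # delayed vision
        x += gain * (target - seen)
        history.append(x)
    return history

no_delay_peak = max(reach(delay=0))
delay_peak = max(reach(delay=10))
print(no_delay_peak, delay_peak)  # delayed feedback overshoots the target
```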
A vision for modernizing environmental risk assessment
In 2007, the US National Research Council (NRC) published a Vision and Strategy for [human health] Toxicity Testing in the 21st century. Central to the vision was increased reliance on high throughput in vitro testing and predictive approaches based on mechanistic understanding o...
Newell, Ben R
2005-01-01
The appeal of simple algorithms that take account of both the constraints of human cognitive capacity and the structure of environments has been an enduring theme in cognitive science. A novel version of such a boundedly rational perspective views the mind as containing an 'adaptive toolbox' of specialized cognitive heuristics suited to different problems. Although intuitively appealing, when this version was proposed, empirical evidence for the use of such heuristics was scant. I argue that in the light of empirical studies carried out since then, it is time this 'vision of rationality' was revised. An alternative view based on integrative models rather than collections of heuristics is proposed.
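As a concrete example of the contrast drawn above, here is a sketch of "Take The Best", a canonical heuristic from the adaptive-toolbox program, set against a weighted-additive integration rule of the kind the proposed alternative favors. The cue profiles and weights are hypothetical.

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Decide between options A and B using the single most valid
    cue that discriminates; ignore all remaining cues."""
    for i in validity_order:          # cues ranked by validity
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "tie"

def weighted_additive(cues_a, cues_b, weights):
    """Integrative alternative: weigh and sum all cues."""
    score_a = sum(w * c for w, c in zip(weights, cues_a))
    score_b = sum(w * c for w, c in zip(weights, cues_b))
    return "A" if score_a > score_b else "B" if score_b > score_a else "tie"

# Hypothetical binary cue profiles: the top cue favors A, but the
# remaining cues jointly favor B, so the two strategies disagree.
a, b = (1, 0, 0, 0), (0, 1, 1, 1)
print(take_the_best(a, b, validity_order=[0, 1, 2, 3]))       # prints A
print(weighted_additive(a, b, weights=[0.4, 0.3, 0.2, 0.1]))  # prints B
```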
Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement
Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.
2017-01-01
Purpose: To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design: Expert perspective. Methods: An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results: Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions: Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975
Parallel inputs to memory in bee colour vision.
Horridge, Adrian
2016-03-01
In the 19th century, it was found that attraction of bees to light was controlled by light intensity irrespective of colour, and a few critical entomologists inferred that the vision of bees foraging on flowers was unlike human colour vision. Therefore, quite justly, Professor Carl von Hess concluded in his book on the Comparative Physiology of Vision (1912) that bees do not distinguish colours in the way that humans enjoy. Immediately, Karl von Frisch, an assistant in the Zoology Department of the same University of Munich, set to work to show that indeed bees have colour vision like humans, thereby initiating a new research tradition, and setting off a decade of controversy that ended only at the death of Hess in 1923. Until 1939, several researchers continued the tradition of trying to untangle the mechanism of bee vision by repeatedly testing trained bees, but made little progress, partly because von Frisch and his legacy dominated the scene. The theory of trichromatic colour vision developed further after three types of receptors, sensitive to green, blue, and ultraviolet (UV), were demonstrated in the bee in 1964. Then, until the end of the century, all data were interpreted in terms of trichromatic colour space. Anomalies were nothing new, but eventually, after 1996, they led to the discovery that bees have a previously unknown type of colour vision based on a monochromatic measure and distribution of blue and measures of modulation in green and blue receptor pathways. Meanwhile, in the 20th century, the search for a suitable rationalization, and explorations of sterile culs-de-sac, had filled the literature of bee colour vision, but were based on the wrong theory.
Online Graph Completion: Multivariate Signal Recovery in Computer Vision.
Kim, Won Hwa; Jalal, Mona; Hwang, Seongjae; Johnson, Sterling C; Singh, Vikas
2017-07-01
The adoption of "human-in-the-loop" paradigms in computer vision and machine learning is leading to various applications where the actual data acquisition (e.g., human supervision) and the underlying inference algorithms are closely intertwined. While classical work in active learning provides effective solutions when the learning module involves classification and regression tasks, many practical issues such as partially observed measurements, financial constraints and even additional distributional or structural aspects of the data typically fall outside the scope of this treatment. For instance, with sequential acquisition of partial measurements of data that manifest as a matrix (or tensor), novel strategies for completion (or collaborative filtering) of the remaining entries have only been studied recently. Motivated by vision problems where we seek to annotate a large dataset of images via a crowdsourced platform or alternatively, complement results from a state-of-the-art object detector using human feedback, we study the "completion" problem defined on graphs, where requests for additional measurements must be made sequentially. We design the optimization model in the Fourier domain of the graph and describe how ideas based on adaptive submodularity provide algorithms that work well in practice. On a large set of images collected from Imgur, we see promising results on images that are otherwise difficult to categorize. We also show applications to an experimental design problem in neuroimaging.
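The sequential-acquisition idea can be illustrated with the greedy rule that submodular (and adaptive-submodular) strategies build on: repeatedly request the measurement with the largest marginal gain. The coverage-style objective and toy graph below are illustrative, not the paper's model.

```python
def greedy_select(candidates, gain, budget):
    """Sequentially pick measurements with the largest marginal gain
    under a monotone submodular objective -- the classic greedy rule
    that adaptive-submodular acquisition strategies build on."""
    chosen = set()
    for _ in range(budget):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen | {c}) - gain(chosen),
                   default=None)
        if best is None:
            break
        chosen.add(best)
    return chosen

# Toy submodular objective: neighborhood coverage on a small graph.
graph = {0: {0, 1, 2}, 1: {1, 3}, 2: {2, 3, 4}, 3: {4, 5}, 4: {0, 5}}
covered = lambda S: len(set().union(*(graph[v] for v in S)))
picked = greedy_select(list(graph), covered, budget=2)
print(picked, covered(picked))
```

With a budget of two requests, the greedy rule covers five of the six elements here, close to the best achievable, which is the behavior the (1 - 1/e) guarantee formalizes.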
GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa
2004-01-01
The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.
Emergence of a rehabilitation medicine model for low vision service delivery, policy, and funding.
Stelmack, Joan
2005-05-01
A rehabilitation medicine model for low vision rehabilitation is emerging. There have been many challenges to reaching consensus on the roles of each discipline (optometry, ophthalmology, occupational therapy, and vision rehabilitation professionals) in the service delivery model and finding a place in the reimbursement system for all the providers. The history of low vision, legislation associated with Centers for Medicare and Medicaid Services coverage for vision rehabilitation, and research on the effectiveness of low vision service delivery are reviewed. Vision rehabilitation is now covered by Medicare under Physical Medicine and Rehabilitation codes by some Medicare carriers, yet reimbursement is not available for low vision devices or refraction. Also, the role of vision rehabilitation professionals (rehabilitation teachers, orientation and mobility specialists, and low vision therapists) in the model needs to be determined. In a recent systematic review of the scientific literature on the effectiveness of low vision services contracted by the Agency for Health Care Quality Research, no clinical trials were found. The literature consists primarily of longitudinal case studies, which provide weak support for third-party funding for vision rehabilitative services. Providers need to reach consensus on medical necessity, treatment plans, and protocols. Research on low vision outcomes is needed to develop an evidence base to guide clinical practice, policy, and funding decisions.
Study of Light Scattering in the Human Eye
NASA Astrophysics Data System (ADS)
Perez, I. Kelly; Bruce, N. C.; Valdos, L. R. Berriel
2008-04-01
In this paper we present a numerical model of the human eye to be used in studies of the scattering of light in different components of the eye's optical system. Different parts of the eye can produce scattering for different reasons: age, illness, or injury. For example, cataracts can appear in the human lens, and injuries or fungi can affect the cornea. The aim of the study is to relate the backscattered light, which is what doctors measure or detect, to the forward-scattered light, which is what affects the patient's vision. We present the model to be used, the raytrace procedure, and some preliminary results for the image on the retina without scattering.
Weberized Mumford-Shah Model with Bose-Einstein Photon Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Jianhong, E-mail: jhshen@math.umn.edu; Jung, Yoon-Mo
Human vision works equally well in a large dynamic range of light intensities, from only a few photons to typical midday sunlight. Contributing to such remarkable flexibility is a famous law in perceptual (both visual and aural) psychology and psychophysics known as Weber's Law. The current paper develops a new segmentation model based on the integration of Weber's Law and the celebrated Mumford-Shah segmentation model (Comm. Pure Appl. Math., vol. 42, pp. 577-685, 1989). Explained in detail are issues concerning why the classical Mumford-Shah model lacks light adaptivity, and why its 'weberized' version can more faithfully reflect human vision's superior segmentation capability in a variety of illuminance conditions from dawn to dusk. It is also argued that the popular Gaussian noise model is physically inappropriate for the weberization procedure. As a result, the intrinsic thermal noise of photon ensembles is introduced based on Bose and Einstein's distributions in quantum statistics, which turns out to be compatible with weberization both analytically and computationally. The current paper focuses on both the theory and computation of the weberized Mumford-Shah model with Bose-Einstein noise. In particular, Ambrosio-Tortorelli's Γ-convergence approximation theory is adapted (Boll. Un. Mat. Ital. B, vol. 6, pp. 105-123, 1992), and stable numerical algorithms are developed for the associated pair of nonlinear Euler-Lagrange PDEs.
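For reference, a schematic of the energies involved, written in my own notation rather than the paper's: the classical Mumford-Shah functional, and a "weberized" variant in which image gradients are measured relative to local intensity in the spirit of Weber's law; F denotes a noise-adapted fidelity term (Bose-Einstein in the paper).

```latex
% Classical Mumford-Shah energy for an observed image u_0 on domain \Omega,
% with edge set \Gamma (Comm. Pure Appl. Math. 42:577-685, 1989):
E[u,\Gamma] = \alpha \int_{\Omega\setminus\Gamma} |\nabla u|^2\,dx
  + \beta \int_{\Omega} (u-u_0)^2\,dx + \gamma\,\mathcal{H}^1(\Gamma).

% Weberized variant (schematic): smoothness is measured on the relative
% scale \nabla u / u = \nabla \log u, reflecting Weber's law
% \Delta I / I \approx \text{const}, and the quadratic (Gaussian) fidelity
% is replaced by a term F(u,u_0) matched to Bose-Einstein photon noise:
E_W[u,\Gamma] = \alpha \int_{\Omega\setminus\Gamma}
  \frac{|\nabla u|^2}{u^2}\,dx
  + \beta \int_{\Omega} F(u,u_0)\,dx + \gamma\,\mathcal{H}^1(\Gamma).
```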
The Ecological Research Program (ERP) of the EPA Office of Research and Development has the vision of a comprehensive theory and practice for characterizing, quantifying, and valuing ecosystem services, and their relationship to human well-being for environmental decision making....
Gene Therapy for Color Blindness.
Hassall, Mark M; Barnard, Alun R; MacLaren, Robert E
2017-12-01
Achromatopsia is a rare congenital cause of vision loss due to isolated cone photoreceptor dysfunction. The most common underlying genetic mutations are autosomal recessive changes in CNGA3, CNGB3, GNAT2, PDE6H, PDE6C, or ATF6. Animal models of Cnga3, Cngb3, and Gnat2 have been rescued using AAV gene therapy, showing partial restoration of cone electrophysiology and integration of this new photopic vision in reflexive and behavioral visual tests. Three gene therapy phase I/II trials are currently being conducted in human patients in the USA, the UK, and Germany. This review details the AAV gene therapy treatments of achromatopsia to date. We also present novel data showing rescue of a Cnga3-/- mouse model using an rAAV.CBA.CNGA3 vector. We conclude by synthesizing the implications of this animal work for ongoing human trials, particularly the challenge of restoring integrated cone retinofugal pathways in an adult visual system. The evidence to date suggests that gene therapy for achromatopsia will need to be applied early in childhood to be effective.
ERIC Educational Resources Information Center
Carnie, Fiona, Ed.
Human Scale Education's 1998 conference addressed the creation of schools and learning experiences to foster in young people the attitudes and skills to shape a fairer and more sustainable world. "Values and Vision in Business and Education" (Anita Roddick) argues that educational curricula must contain the language and action of social…
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2012-01-01
To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.
A new generation of intelligent trainable tools for analyzing large scientific image databases
NASA Technical Reports Server (NTRS)
Fayyad, Usama M.; Smyth, Padhraic; Atkinson, David J.
1994-01-01
The focus of this paper is on the detection of natural, as opposed to human-made, objects. The distinction is important because, in the context of image analysis, natural objects tend to possess much greater variability in appearance than human-made objects. Hence, we shall focus primarily on the use of algorithms that 'learn by example' as the basis for image exploration. The 'learn by example' approach is potentially more generally applicable compared to model-based vision methods since domain scientists find it relatively easier to provide examples of what they are searching for versus describing a model.
Bionic Vision-Based Intelligent Power Line Inspection System
Ma, Yunpeng; He, Feijia; Xu, Jinxin
2017-01-01
Detecting the threats of external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of power lines, and the binocular visual model is used to calculate the 3D coordinate information of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately locate the position of the obstacles around power lines automatically, that the designed power line inspection system is effective in complex backgrounds, and that no detections were missed under the different conditions tested. PMID:28203269
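For a rectified camera pair, the binocular computation of 3D coordinates described above reduces to standard triangulation from disparity. The focal length and baseline below are illustrative values, not the parameters of the paper's rig.

```python
def triangulate(xl, xr, y, focal=700.0, baseline=0.12):
    """Recover 3D coordinates from a rectified stereo pair.
    (xl, xr): horizontal pixel coordinates of the same point in the
    left/right images (relative to the principal point); y: its
    vertical coordinate; focal in pixels, baseline in meters."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = focal * baseline / disparity      # depth
    X = xl * Z / focal                    # lateral offset
    Y = y * Z / focal                     # vertical offset
    return X, Y, Z

X, Y, Z = triangulate(xl=350.0, xr=336.0, y=70.0)
print(round(Z, 2))  # 6.0 m for a 14-pixel disparity
```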
Adaptive design lessons from professional architects
NASA Astrophysics Data System (ADS)
Geiger, Ray W.; Snell, J. T.
1993-09-01
Psychocybernetic systems engineering design conceptualization mimics the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. In pursuing better ways to design cellular automata and qualification classifiers in a design process, we have found surprising success in exploring certain more esoteric approaches, e.g., the vision of interdisciplinary artistic discovery in and around creative problem solving. Our research into vision and hybrid sensory systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and future-oriented design practice. Discussion covers case-based techniques of cognitive ergonomics, natural modeling sources, and an open architectural process of means/goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Mapping the Perceptual Grain of the Human Retina
Tuten, William S.; Roorda, Austin; Sincich, Lawrence C.
2014-01-01
In humans, experimental access to single sensory receptors is difficult to achieve, yet it is crucial for learning how the signals arising from each receptor are transformed into perception. By combining adaptive optics microstimulation with high-speed eye tracking, we show that retinal function can be probed at the level of the individual cone photoreceptor in living eyes. Classical psychometric functions were obtained from cone-sized microstimuli targeted to single photoreceptors. Revealed psychophysically, the cone mosaic also manifests a variable sensitivity to light across its surface that accords with a simple model of cone light capture. Because this microscopic grain of vision could be detected on the perceptual level, it suggests that photoreceptors can act individually to shape perception, if the normally suboptimal relay of light by the eye's optics is corrected. Thus the precise arrangement of cones and the exact placement of stimuli onto those cones create the initial retinal limits on signals mediating spatial vision. PMID:24741057
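The classical psychometric functions mentioned above are often summarized with a Weibull form; the sketch below uses illustrative guess and lapse rates (the paper does not specify this parameterization).

```python
import math

def weibull(intensity, threshold, slope=2.0, guess=0.5, lapse=0.02):
    """Weibull psychometric function: probability of detecting a
    flash of the given intensity. 'guess' is the chance rate of the
    task and 'lapse' the attentional error rate (illustrative)."""
    p = 1.0 - math.exp(-(intensity / threshold) ** slope)
    return guess + (1.0 - guess - lapse) * p

# Probability of seeing rises sigmoidally with stimulus intensity.
for i in (0.5, 1.0, 2.0):
    print(round(weibull(i, threshold=1.0), 3))
```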
The color-vision approach to emotional space: cortical evoked potential data.
Boucsein, W; Schaefer, F; Sokolov, E N; Schröder, C; Furedy, J J
2001-01-01
A framework for accounting for emotional phenomena proposed by Sokolov and Boucsein (2000) employs conceptual dimensions that parallel those of hue, brightness, and saturation in color vision. The approach, which employs the concepts of emotional quality, intensity, and saturation, has been supported by psychophysical emotional scaling data gathered from a few trained observers. We report cortical evoked potential data obtained during the change between different emotions expressed in schematic faces. Twenty-five subjects (13 male, 12 female) were presented with a positive, a negative, and a neutral computer-generated face with random interstimulus intervals in a within-subjects design, together with four meaningful and four meaningless control stimuli made up from the same elements. Frontal, central, parietal, and temporal ERPs were recorded from each hemisphere. Statistically significant outcomes in the P300 and N200 range support the potential fruitfulness of the proposed color-vision-model-based approach to human emotional space.
Can Humans Fly? Action Understanding with Multiple Classes of Actors
2015-06-08
Bozzani, Fiammetta Maria; Griffiths, Ulla Kou; Blanchet, Karl; Schmidt, Elena
2014-02-28
VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight were cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress.
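As a back-of-envelope consistency check of the reported workforce densities, assuming the rates are per 250,000 population as stated:

```python
# With 18 ophthalmologists at 0.34 per 250,000 people, the implied
# national population is about 13 million (Zambia, 2011); 8 cataract
# surgeons at that population then give roughly 0.15 per 250,000,
# matching the second figure reported in the abstract.
population = 18 / 0.34 * 250_000
print(round(population / 1e6, 1))            # ~13.2 million
print(round(8 / (population / 250_000), 2))  # ~0.15
```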
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
Gootwine, Elisha; Abu-Siam, Mazen; Obolensky, Alexey; Rosov, Alex; Honig, Hen; Nitzan, Tali; Shirak, Andrey; Ezra-Elia, Raaya; Yamin, Esther; Banin, Eyal; Averbukh, Edward; Hauswirth, William W.; Ofri, Ron; Seroussi, Eyal
2017-01-01
Purpose: Applying CNGA3 gene augmentation therapy to cure a novel causative mutation underlying achromatopsia (ACHM) in sheep. Methods: Impaired vision that spontaneously appeared in newborn lambs was characterized by behavioral, electroretinographic (ERG), and histologic techniques. Deep-sequencing reads of an affected lamb and an unaffected lamb were compared within conserved genomic regions orthologous to human genes involved in similar visual impairment. Observed nonsynonymous amino acid substitutions were classified by their deleteriousness score. The putative causative mutation was assessed by producing compound CNGA3 heterozygotes and applying gene augmentation therapy using the orthologous human cDNA. Results: Behavioral assessment revealed day blindness, and subsequent ERG examination showed attenuated photopic responses. Histologic and immunohistochemical examination of affected sheep eyes did not reveal degeneration, and cone photoreceptors expressing CNGA3 were present. Bioinformatics and sequencing analyses suggested a c.1618G>A, p.Gly540Ser substitution in the GMP-binding domain of CNGA3 as the causative mutation. This was confirmed by a genetic concordance test and by a genetic complementation experiment: all five compound CNGA3 heterozygotes, carrying both p.Arg236* and p.Gly540Ser mutations in CNGA3, were day-blind. Furthermore, subretinal delivery of the intact human CNGA3 gene using an adeno-associated viral vector (AAV) restored photopic vision in two affected p.Gly540Ser homozygous rams. Conclusions: The c.1618G>A, p.Gly540Ser substitution in CNGA3 was identified as the causative mutation for a novel form of ACHM in Awassi sheep. Gene augmentation therapy restored vision in the affected sheep. This novel mutation provides a large-animal model that is valid for most human CNGA3 ACHM patients; the majority of them carry missense rather than premature-termination mutations. PMID:28282490
Spatial vision in older adults: perceptual changes and neural bases.
McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N
2018-05-17
The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.
Constructing an Educational Mars Simulation
NASA Technical Reports Server (NTRS)
Henke, Stephen A.
2004-01-01
On January 14, 2004, President George Bush announced his plan to catapult the space program into a new era of space exploration and discovery. His vision encompasses a robotics program to explore our solar system, a return to the Moon, the human exploration of Mars, and the promotion of international prosperity through our endeavors. We at NASA now have the task of turning this vision into reality within a very real timeframe. I have been chosen to begin phase 1 of making this vision a reality. I will be working on creating an educational Mars simulation of human exploration of Mars to stimulate interest and involvement in the project among investors and the community. GRC's Computer Services Division (CSD), in collaboration with the Office of Education Programs, will be designing models, constructing terrain, and programming this simulation to create a realistic portrayal of human exploration on Mars. With recent and past technological breakthroughs in computing, my primary goal can be accomplished with the aid of only 3-4 software packages. Lightwave 3D is the modeling package we have selected for the creation of our digital objects, including a Mars pressurized rover, rover cockpit, landscape/terrain, and habitat. Once the models are completed, they need texturing, so Photoshop and Macromedia Fireworks are handy for bringing these objects to life. Before importing all of this data directly into a simulation environment, it is necessary to first render a stunning animation of the desired final product. This animation will represent what we hope to capture in the simulation, and it will include all of the accessories like ray tracing, fog effects, shadows, anti-aliasing, particle effects, volumetric lighting, and lens flares. Adobe Premiere will more than likely be used for video editing and adding ambient noises and music. Lastly, V-Tree is the real-time 3D graphics engine that will facilitate our realistic simulation.
Additional information is included in the original extended abstract.
Bio-inspired approach for intelligent unattended ground sensors
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre
2015-05-01
Improving surveillance capacity over wide zones requires a set of smart, battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast relevant-event detector. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify a huge amount of heterogeneous data in real time thanks to its natively parallel hardware structure. This UGS prototype validates our system approach in laboratory tests. The peripheral analysis module demonstrates a low false-alarm rate, whereas the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By locally processing the data and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded artificial intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
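The two-stage analysis described above can be sketched in a few lines; the function names, the frame-differencing detector, and the thresholds below are illustrative assumptions, not the authors' FPGA design:

```python
import numpy as np

def peripheral_motion(prev, curr, thresh=25):
    """Low-level peripheral stage: frame differencing flags moving pixels."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh  # boolean motion mask

def foveal_crop(frame, mask, size=8):
    """High-level foveal stage: crop a window around the centroid of the
    detected motion, for detailed feature extraction downstream."""
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    y0, x0 = max(cy - size, 0), max(cx - size, 0)
    return frame[y0:y0 + 2 * size, x0:x0 + 2 * size]
```

In the prototype this split matters for power: the cheap peripheral stage runs continuously, while the expensive foveal analysis only wakes on a detected event.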
Aural-Nondetectability Model Predictions for Night-Vision Goggles across Ambient Lighting Conditions
2015-12-01
ARL-TR-7564 ● DEC 2015 ● US Army Research Laboratory. Aural-Nondetectability Model Predictions for Night-Vision Goggles across Ambient Lighting Conditions. Reporting period: May 2015–30 Sep 2015.
Generation and use of human 3D-CAD models
NASA Astrophysics Data System (ADS)
Grotepass, Juergen; Speyer, Hartmut; Kaiser, Ralf
2002-05-01
Individualized products are one of the ten mega-trends of the 21st century, with human modeling as the key issue for tomorrow's design and product development. The use of human modeling software for computer-based ergonomic simulations within the production process increases quality while reducing costs by 30-50 percent and shortening production time. This presentation focuses on the use of human 3D-CAD models for both the ergonomic design of working environments and made-to-measure garment production. Today, the entire production chain can be designed, and individualized models generated and analyzed, in 3D computer environments. Anthropometric design for ergonomics is matched to human needs, thus preserving health. Ergonomic simulation covers topics such as human vision, reachability, kinematics, force, and comfort analysis, with international design capabilities. In Germany, more than 17 billion marks flow to other industries because clothes do not fit. Individual clothing tailored to the customer's preference means added value, pleasure, and a perfect fit. Body scanning technology is the key to generating and using human 3D-CAD models for both the ergonomic design of working environments and made-to-measure garment production.
Oudejans, S C C; Schippers, G M; Schramade, M H; Koeter, M W J; van den Brink, W
2011-04-01
To investigate internal consistency and factor structure of a questionnaire measuring learning capacity based on Senge's theory of the five disciplines of a learning organisation: Personal Mastery, Mental Models, Shared Vision, Team Learning, and Systems Thinking. Cross-sectional study. Substance-abuse treatment centres (SATCs) in The Netherlands. A total of 293 SATC employees from outpatient and inpatient treatment departments, financial and human resources departments. Psychometric properties of the Questionnaire for Learning Organizations (QLO), including factor structure, internal consistency, and interscale correlations. A five-factor model representing the five disciplines of Senge showed good fit. The scales for Personal Mastery, Shared Vision and Team Learning had good internal consistency, but the scales for Systems Thinking and Mental Models had low internal consistency. The proposed five-factor structure was confirmed in the QLO, which makes it a promising instrument to assess learning capacity in teams. The Systems Thinking and the Mental Models scales have to be revised. Future research should be aimed at testing criterion and discriminatory validity.
Large-scale functional models of visual cortex for remote sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E
Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, which requires ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, creating massive opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically inspired learning models to problems of remote sensing imagery.
Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu
2018-06-22
The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output piezoelectric voltage, and the outputting signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and electricity power. The reliability is demonstrated over 200 light on-off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics human retina may have potential application in vision substitution.
NASA Astrophysics Data System (ADS)
Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil
2018-04-01
Robotic agriculture requires smart and practical techniques to substitute machine intelligence for human intelligence. Strawberry is one of the important Mediterranean products, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves that requires neither a neural network nor time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of infection are approximated much as a human brain would, a fuzzy decision maker classifies the leaves from images captured on-site, with properties similar to human vision. Optimizing the fuzzy parameters for a typical strawberry production area on a summer midday in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for segmentation, using a typical human instant-classification approximation as the benchmark, a higher accuracy than a human eye identifier. The fuzzy-based classifier provides an approximate result for deciding whether a leaf is healthy or not.
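A fuzzy decision maker of the kind described can be sketched with triangular membership functions over a color feature; the hue bands, function names, and class labels below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak of 1 at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def classify_leaf(mean_hue):
    """Fuzzy decision over a leaf's mean hue (degrees): healthy green tissue
    vs. yellowed (e.g. iron-deficient) tissue. Band edges are illustrative."""
    healthy = tri(mean_hue, 70, 110, 150)    # green band
    deficient = tri(mean_hue, 30, 55, 80)    # yellow band
    return "healthy" if healthy >= deficient else "infected"
```

The appeal over a trained network, as the abstract argues, is that such a classifier needs only a handful of interpretable parameters tuned for the local lighting conditions.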
Quality grading of Atlantic salmon (Salmo salar) by computer vision.
Misimi, E; Erikson, U; Skavhaug, A
2008-06-01
In this study, we present a promising method for computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and feature extraction was then performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between classification by human inspectors and by computer vision. The computer vision-based method correctly classified 90% of the salmon in the data set, compared with classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
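The abstract's pipeline, a linear-discriminant threshold classifier evaluated with leave-one-out cross-validation, can be sketched as follows on synthetic two-class feature vectors; this is a minimal NumPy sketch under stated assumptions, not the authors' implementation:

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher discriminant: w = Sw^-1 (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    return np.linalg.solve(Sw + 1e-9 * np.eye(X.shape[1]),
                           X1.mean(0) - X0.mean(0))

def loo_accuracy(X, y):
    """Leave-one-out CV: refit w on all-but-one sample, threshold the
    held-out sample's projected score midway between the class means."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w = fisher_direction(X[mask], y[mask])
        scores = X[mask] @ w
        thr = 0.5 * (scores[y[mask] == 0].mean() + scores[y[mask] == 1].mean())
        hits += int((X[i] @ w > thr) == bool(y[i]))
    return hits / len(y)
```

With geometrical features in place of the synthetic vectors, the same two functions reproduce the structure of the reported evaluation.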
Model of visual contrast gain control and pattern masking
NASA Technical Reports Server (NTRS)
Watson, A. B.; Solomon, J. A.
1997-01-01
We have implemented a model of contrast gain control in human vision that incorporates a number of key features, including a contrast sensitivity function, multiple oriented bandpass channels, accelerating nonlinearities, and a divisive inhibitory gain-control pool. The parameters of this model have been optimized through a fit to the recent data that describe masking of a Gabor function by cosine and Gabor masks [J. M. Foley, "Human luminance pattern mechanisms: masking experiments require a new model," J. Opt. Soc. Am. A 11, 1710 (1994)]. The model achieves a good fit to the data. We also demonstrate how the concept of recruitment may accommodate a variant of this model in which excitatory and inhibitory paths have a common accelerating nonlinearity, but which includes multiple channels tuned to different levels of contrast.
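A divisive gain-control response of the general form used in such models (excitation raised to an exponent, divided by a semi-saturation constant plus an inhibitory pool) can be sketched as follows; the exponents and the constant z are illustrative, not the fitted parameter values:

```python
import numpy as np

def channel_response(c_target, c_pool, p=2.4, q=2.0, z=0.1):
    """Divisive gain control: excitation c^p divided by a semi-saturation
    term z^q plus the summed inhibitory pool of contrasts raised to q.
    All parameter values here are illustrative placeholders."""
    excitation = c_target ** p
    inhibition = z ** q + np.sum(np.asarray(c_pool) ** q)
    return excitation / inhibition
```

Adding a mask contrast to the pool increases the denominator and lowers the response, which is how this family of models produces pattern masking.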
The Road to Certainty and Back.
Westheimer, Gerald
2016-10-14
The author relates his intellectual journey from eye-testing clinician to experimental vision scientist. Starting with the quest for underpinning in physics and physiology of vague clinical propositions and of psychology's acceptance of thresholds as "fuzzy-edged," and a long career pursuing a reductionist agenda in empirical vision science, his journey led to the realization that the full understanding of human vision cannot proceed without factoring in an observer's awareness, with its attendant uncertainty and open-endedness. He finds support in the loss of completeness, finality, and certainty revealed in fundamental twentieth-century formulations of mathematics and physics. Just as biology prospered with the introduction of the emergent, nonreductionist concepts of evolution, vision science has to become comfortable accepting data and receiving guidance from human observers' conscious visual experience.
Mathematical modelling of animate and intentional motion.
Rittscher, Jens; Blake, Andrew; Hoogs, Anthony; Stein, Gees
2003-01-01
Our aim is to enable a machine to observe and interpret the behaviour of others. Mathematical models are employed to describe certain biological motions. The main challenge is to design models that are both tractable and meaningful. In the first part we will describe how computer vision techniques, in particular visual tracking, can be applied to recognize a small vocabulary of human actions in a constrained scenario. Mainly the problems of viewpoint and scale invariance need to be overcome to formalize a general framework. Hence the second part of the article is devoted to the question whether a particular human action should be captured in a single complex model or whether it is more promising to make extensive use of semantic knowledge and a collection of low-level models that encode certain motion primitives. Scene context plays a crucial role if we intend to give a higher-level interpretation rather than a low-level physical description of the observed motion. A semantic knowledge base is used to establish the scene context. This approach consists of three main components: visual analysis, the mapping from vision to language and the search of the semantic database. A small number of robust visual detectors is used to generate a higher-level description of the scene. The approach together with a number of results is presented in the third part of this article. PMID:12689374
Development of a volumetric projection technique for the digital evaluation of field of view.
Marshall, Russell; Summerskill, Stephen; Cook, Sharon
2013-01-01
Current regulations for field of view requirements in road vehicles are defined by 2D areas projected on the ground plane. This paper discusses the development of a new software-based volumetric field of view projection tool and its implementation within an existing digital human modelling system. In addition, the exploitation of this new tool is highlighted through its use in a UK Department for Transport funded research project exploring the current concerns with driver vision. Focusing specifically on rearwards visibility in small and medium passenger vehicles, the volumetric approach is shown to provide a number of distinct advantages. The ability to explore multiple projections of both direct vision (through windows) and indirect vision (through mirrors) provides a greater understanding of the field of view environment afforded to the driver whilst still maintaining compatibility with the 2D projections of the regulatory standards. Field of view requirements for drivers of road vehicles are defined by simplified 2D areas projected onto the ground plane. However, driver vision is a complex 3D problem. This paper presents the development of a new software-based 3D volumetric projection technique and its implementation in the evaluation of driver vision in small- and medium-sized passenger vehicles.
Effective visibility analysis method in virtual geographic environment
NASA Astrophysics Data System (ADS)
Li, Yi; Zhu, Qing; Gong, Jianhua
2008-10-01
Visibility analysis in virtual geographic environments has broad applications in many aspects of social life, but practical use demands better efficiency and accuracy, as well as consideration of the restrictions of human vision. This paper first introduces a highly efficient 3D data modeling method that generates and organizes 3D data models using R-tree and LOD techniques. It then presents a new visibility algorithm that achieves real-time viewshed calculation, accounting for occlusion by the DEM and 3D building models as well as the restrictions the human eye places on viewshed generation. Finally, an experiment demonstrates that the visibility analysis is fast and accurate enough to meet the demands of digital city applications.
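The core primitive of any viewshed calculation, a line-of-sight test between two DEM cells, can be sketched as follows; this simplified version walks sample points along the sight line and ignores Earth curvature, LOD, and the R-tree indexing the paper describes:

```python
import numpy as np

def visible(dem, viewer, target, eye_height=1.6):
    """True if `target` cell is visible from `viewer` cell on a DEM grid.
    Samples the sight line cell by cell; terrain above the line blocks it."""
    (r0, c0), (r1, c1) = viewer, target
    z0 = dem[r0, c0] + eye_height      # eye elevation at the viewer
    z1 = dem[r1, c1]                   # ground elevation at the target
    n = max(abs(r1 - r0), abs(c1 - c0))
    for k in range(1, n):
        t = k / n
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        sight_z = z0 + t * (z1 - z0)   # elevation of the sight line here
        if dem[r, c] > sight_z:        # intervening terrain blocks the view
            return False
    return True
```

A full viewshed repeats this test (or an incremental equivalent) for every cell in range, which is why spatial indexing and LOD matter for real-time performance.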
Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation
Khaligh-Razavi, Seyed-Mahdi; Kriegeskorte, Nikolaus
2014-01-01
Inferior temporal (IT) cortex in human and nonhuman primates serves visual object recognition. Computational object-vision models, although continually improving, do not yet reach human performance. It is unclear to what extent the internal representations of computational models can explain the IT representation. Here we investigate a wide range of computational model representations (37 in total), testing their categorization performance and their ability to account for the IT representational geometry. The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities. Representational geometries were significantly correlated between IT and many of the models. However, the categorical clustering observed in IT was largely unexplained by the unsupervised models. The deep convolutional network, which was trained by supervision with over a million category-labeled images, reached the highest categorization performance and also best explained IT, although it did not fully explain the IT data. Combining the features of this model with appropriate weights and adding linear combinations that maximize the margin between animate and inanimate objects and between faces and other objects yielded a representation that fully explained our IT data. 
Overall, our results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT. PMID:25375136
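The representational-similarity analysis described above rests on two simple operations: building an RDM from a system's response patterns and correlating the RDMs of two systems. A minimal sketch (using Pearson correlation rather than the rank correlation typically used in such studies):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns (rows = stimuli, columns = features/units)."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs, i.e. compare how two
    systems (a model and IT, say) arrange the same stimuli."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
```

Because only the geometry of pairwise dissimilarities is compared, this lets fMRI patterns, cell recordings, and model features be placed in a common space.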
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
A Computational Model of Active Vision for Visual Search in Human-Computer Interaction
2010-08-01
processors that interact with the production rules to produce behavior, and (c) parameters that constrain the behavior of the model (e.g., the...velocity of a saccadic eye movement). While the parameters can be task-specific, the majority of the parameters are usually fixed across a wide variety...previously estimated durations. Hooge and Erkelens (1996) review these four explanations of fixation duration control. A variety of research
Low Earth Orbit Rendezvous Strategy for Lunar Missions
NASA Technical Reports Server (NTRS)
Cates, Grant R.; Cirillo, William M.; Stromgren, Chel
2006-01-01
On January 14, 2004 President George W. Bush announced a new Vision for Space Exploration calling for NASA to return humans to the moon. In 2005 NASA decided to use a Low Earth Orbit (LEO) rendezvous strategy for the lunar missions. A Discrete Event Simulation (DES) based model of this strategy was constructed. Results of the model were then used for subsequent analysis to explore the ramifications of the LEO rendezvous strategy.
Modeling Intraocular Bacterial Infections
Astley, Roger A.; Coburn, Phillip S.; Parkunan, Salai Madhumathi; Callegan, Michelle C.
2016-01-01
Bacterial endophthalmitis is an infection and inflammation of the posterior segment of the eye which can result in significant loss of visual acuity. Even with prompt antibiotic, anti-inflammatory and surgical intervention, vision and even the eye itself may be lost. For the past century, experimental animal models have been used to examine various aspects of the pathogenesis and pathophysiology of bacterial endophthalmitis, to further the development of anti-inflammatory treatment strategies, and to evaluate the pharmacokinetics and efficacies of antibiotics. Experimental models allow independent control of many parameters of infection and facilitate systematic examination of infection outcomes. While no single animal model perfectly reproduces the human pathology of bacterial endophthalmitis, investigators have successfully used these models to understand the infectious process and the host response, and have provided new information regarding therapeutic options for the treatment of bacterial endophthalmitis. This review highlights experimental animal models of endophthalmitis and correlates this information with the clinical setting. The goal is to identify knowledge gaps that may be addressed in future experimental and clinical studies focused on improvements in the therapeutic preservation of vision during and after this disease. PMID:27154427
Directly Comparing Computer and Human Performance in Language Understanding and Visual Reasoning.
ERIC Educational Resources Information Center
Baker, Eva L.; And Others
Evaluation models are being developed for assessing artificial intelligence (AI) systems in terms of similar performance by groups of people. Natural language understanding and vision systems are the areas of concentration. In simplest terms, the goal is to norm a given natural language system's performance on a sample of people. The specific…
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron
1992-01-01
Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10^18 floating-point operations per second (FLOPS) and memory capacities greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will be needed to make the necessary speed and capacity available.
The Education Mall: "A 21st Century Learning Concept."
ERIC Educational Resources Information Center
Taylor, Lyndon E.; Maas, Michael L.
Real change in education has been hampered by at least three forces: education's lack of a vision of where society is moving and how education should play a part in this movement; the human makeup of educational institutions; and the need for the development of a new model for financing public postsecondary education. Regardless of these…
Cortical Thought Theory: A Working Model of the Human Gestalt Mechanism.
1985-07-01
time is proportional to K^N, where K is some number and N is the number of pertinent pieces of information in the database (or size of the input)...Functions In Man. Basic Books: New York, 1966. 77. Maffei, Lamberto, and Adriana Fiorentini. "The Visual Cortex As A Spatial Frequency Analyser." Vision
Jacobson, Samuel G.; Cideciyan, Artur V.; Peshenko, Igor V.; Sumaroka, Alexander; Olshevskaya, Elena V.; Cao, Lihui; Schwartz, Sharon B.; Roman, Alejandro J.; Olivares, Melani B.; Sadigh, Sam; Yau, King-Wai; Heon, Elise; Stone, Edwin M.; Dizhoor, Alexander M.
2013-01-01
The GUCY2D gene encodes retinal membrane guanylyl cyclase (RetGC1), a key component of the phototransduction machinery in photoreceptors. Mutations in GUCY2D cause Leber congenital amaurosis type 1 (LCA1), an autosomal recessive human retinal blinding disease. The effects of RetGC1 deficiency on human rod and cone photoreceptor structure and function are currently unknown. To move LCA1 closer to clinical trials, we characterized a cohort of patients (ages 6 months—37 years) with GUCY2D mutations. In vivo analyses of retinal architecture indicated intact rod photoreceptors in all patients but abnormalities in foveal cones. By functional phenotype, there were patients with and those without detectable cone vision. Rod vision could be retained and did not correlate with the extent of cone vision or age. In patients without cone vision, rod vision functioned unsaturated under bright ambient illumination. In vitro analyses of the mutant alleles showed that in addition to the major truncation of the essential catalytic domain in RetGC1, some missense mutations in LCA1 patients result in a severe loss of function by inactivating its catalytic activity and/or ability to interact with the activator proteins, GCAPs. The differences in rod sensitivities among patients were not explained by the biochemical properties of the mutants. However, the RetGC1 mutant alleles with remaining biochemical activity in vitro were associated with retained cone vision in vivo. We postulate a relationship between the level of RetGC1 activity and the degree of cone vision abnormality, and argue for cone function being the efficacy outcome in clinical trials of gene augmentation therapy in LCA1. PMID:23035049
2014-01-01
Background VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. Methods All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. Results During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Conclusion Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress. PMID:24575919
A blind human expert echolocator shows size constancy for objects perceived by echoes.
Milne, Jennifer L; Anello, Mimma; Goodale, Melvyn A; Thaler, Lore
2015-01-01
Some blind humans make clicking noises with their mouth and use the reflected echoes to perceive objects and surfaces. This technique can operate as a crude substitute for vision, allowing human echolocators to perceive silent, distal objects. Here, we tested if echolocation would, like vision, show size constancy. To investigate this, we asked a blind expert echolocator (EE) to echolocate objects of different physical sizes presented at different distances. The EE consistently identified the true physical size of the objects independent of distance. In contrast, blind and blindfolded sighted controls did not show size constancy, even when encouraged to use mouth clicks, claps, or other signals. These findings suggest that size constancy is not a purely visual phenomenon, but that it can operate via an auditory-based substitute for vision, such as human echolocation.
NASA Technical Reports Server (NTRS)
Taylor, J. H.
1973-01-01
Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.
Software Simulates Sight: Flat Panel Mura Detection
NASA Technical Reports Server (NTRS)
2008-01-01
In the increasingly sophisticated world of high-definition flat-screen monitors and television screens, image clarity and the elimination of distortion are paramount concerns. As the devices that reproduce images become more and more sophisticated, so do the technologies that verify their accuracy. By simulating the manner in which a human eye perceives and interprets a visual stimulus, NASA scientists have found ways to automatically and accurately test new monitors and displays. The Spatial Standard Observer (SSO) software metric, developed by Dr. Andrew B. Watson at Ames Research Center, measures visibility and defects in screens, displays, and interfaces. In the design of such a software tool, a central challenge is determining which aspects of visual function to include; while accuracy and generality are important, relative simplicity of the software module is also a key virtue. Based on data collected in ModelFest, a large cooperative multi-lab project hosted by the Optical Society of America, the SSO simulates a simplified model of human spatial vision, operating on a pair of images that are viewed at a specific viewing distance and whose pixels have a known relation to luminance. The SSO measures the visibility of foveal spatial patterns, or the discriminability of two patterns, by incorporating only a few essential components of vision: local contrast transformation, a contrast sensitivity function, local masking, and local pooling. By this construction, the SSO provides output in units of "just noticeable differences" (JND), a unit of measure based on the assumed smallest difference in sensory input detectable by a human being. Herein lies the truly remarkable ability of the SSO: while conventional methods merely manipulate images, the SSO models human perception. This set of equations defines a mathematical way of working with an image that accurately reflects the way in which the human eye and mind behold a stimulus.
The SSO is intended for a wide variety of applications, such as evaluating vision from unmanned aerial vehicles, measuring visibility of damage to aircraft and to the space shuttles, predicting outcomes of corrective laser eye surgery, inspecting displays during the manufacturing process, estimating the quality of compressed digital video, evaluating legibility of text, and predicting discriminability of icons or symbols in a graphical user interface.
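The few essential components listed above (local contrast transformation, a contrast sensitivity function, local masking, and local pooling) can be illustrated with a toy metric. This is a rough sketch only; the filter shape and every constant below are placeholders, not the calibrated SSO values.

```python
import numpy as np

def jnd_metric(ref, test, beta=2.0, masking_k=0.02):
    """Toy visibility score loosely following the SSO's four stages.
    All constants are illustrative, not the calibrated SSO values."""
    # 1. Local contrast transformation: luminance -> contrast about the mean
    def contrast(img):
        mean = img.mean()
        return (img - mean) / mean
    c_ref, c_test = contrast(ref), contrast(test)
    # 2. Contrast sensitivity: band-pass weighting in the frequency domain
    h, w = ref.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fx, fy) * max(h, w)       # cycles per image (illustrative)
    csf = f * np.exp(-f / 8.0)             # crude band-pass CSF stand-in
    def filt(c):
        return np.real(np.fft.ifft2(np.fft.fft2(c) * csf))
    d = filt(c_test) - filt(c_ref)
    # 3. Local masking: attenuate differences where the reference is busy
    mask = 1.0 + masking_k * np.abs(filt(c_ref))
    d = d / mask
    # 4. Local pooling: Minkowski sum over space -> one JND-like number
    return (np.abs(d) ** beta).mean() ** (1.0 / beta)
```

Identical images score zero; any visible perturbation of the test image yields a positive score.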
ERIC Educational Resources Information Center
Wilkinson, John
2013-01-01
Humans have always had the vision to one day live on other planets. This vision existed even before the first person was put into orbit. Since the early space missions of putting humans into orbit around Earth, many advances have been made in space technology. We have now sent many space probes deep into the Solar system to explore the planets and…
Using "Competing Visions of Human Rights" in an International IB World School
ERIC Educational Resources Information Center
Tolley, William J.
2013-01-01
William Tolley, a teaching fellow with the Choices Program, is the Learning and Innovation Coach and head of history at the International School of Curitiba, Brazil (IB). He writes in this article that he has found that the "Competing Visions of Human Rights" teaching unit, developed by Brown University's Choices Program, provides a…
The Common Vision: Parenting and Educating for Wholeness.
ERIC Educational Resources Information Center
Marshak, David
2003-01-01
Presents a spiritually based view of needs and potentials of children and youth, from birth through age 21, based on works of Rudolf Steiner, Sri Aurobindo Ghose, and Hazrat Inayat Khan. Focuses on their common vision of the true nature of human beings, the course of human growth, and the desired functions of child rearing and education.…
'What' and 'where' in the human brain.
Ungerleider, L G; Haxby, J V
1994-04-01
Multiple visual areas in the cortex of nonhuman primates are organized into two hierarchically organized and functionally specialized processing pathways, a 'ventral stream' for object vision and a 'dorsal stream' for spatial vision. Recent findings from positron emission tomography activation studies have localized these pathways within the human brain, yielding insights into cortical hierarchies, specialization of function, and attentional mechanisms.
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
The purpose of this paper is to re-address the vision of human-computer symbiosis as originally expressed by J.C.R. Licklider nearly a half-century ago. We describe this vision, place it in some historical context relating to the evolution of human factors research, and we observe that the field is now in the process of re-invigorating Licklider's vision. We briefly assess the state of the technology within the context of contemporary theory and practice, and we describe what we regard as this emerging field of neo-symbiosis. We offer some initial thoughts on requirements to define functionality of neo-symbiotic systems and discuss research challenges associated with their development and evaluation.
On Optimizing H.264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics
NASA Astrophysics Data System (ADS)
Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi
2010-12-01
The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, drawing on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS), such as contrast sensitivity, multichannel theory, and masking effects. Experimental results show that the improved algorithm simultaneously enhances overall subjective visual quality and improves rate control precision.
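The quadratic R-D model at the heart of JVT-G012-style rate control relates texture bits R, mean absolute difference (MAD), and quantization step Q as R = X1·MAD/Q + X2·MAD/Q². A minimal sketch of solving it for Q given a bit budget; the parameter handling below is a placeholder, not the standard's update rules.

```python
import math

def qstep_from_target(target_bits, mad, x1, x2, header_bits=0.0):
    """Solve the quadratic R-D model  R = X1*MAD/Q + X2*MAD/Q^2
    for the quantization step Q given a texture-bit budget.
    Parameter names follow common descriptions of JVT-G012; the
    values passed in are placeholders, not the standard's updates."""
    r = max(target_bits - header_bits, 1e-6)   # texture bits
    if x2 == 0.0:
        return x1 * mad / r                    # model degenerates to linear
    # Multiply through by Q^2:  r*Q^2 - x1*mad*Q - x2*mad = 0 -> positive root
    a, b, c = r, -x1 * mad, -x2 * mad
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```

Plugging the returned Q back into the model reproduces the requested texture-bit budget.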
The Ontology of Vision. The Invisible, Consciousness of Living Matter
Fiorio, Giorgia
2016-01-01
If I close my eyes, the absence of light activates the peripheral cells devoted to the perception of darkness. The awareness of “seeing oneself seeing” is in its essence a thought, one that is internal to the vision and previous to any object of sight. To this amphibious faculty, the “diaphanous color of darkness,” Aristotle assigns the principle of knowledge. “Vision is a whole perceptual system, not a channel of sense.” Functions of vision are interwoven with the texture of human interaction within a terrestrial environment that is in turn contained into the cosmic order. A transitive host within the resonance of an inner-outer environment, the human being is the contact-term between two orders of scale, both bigger and smaller than the individual unity. In the perceptual integrative system of human vision, the convergence-divergence of the corporeal presence and the diffraction of its own appearance is the margin. The sensation of being no longer coincides with the breath of life; it does not seem “real” without the trace of some visible evidence and its simultaneous “sharing”. Without a shadow, without an imprint, the numeric copia of the physical presence inhabits the transient memory of our electronic prostheses. A rudimentary “visuality” replaces tangible experience, dissipating its meaning and the awareness of being alive. Transversal to the civilizations of the ancient world, through different orders of function and status, the anthropomorphic “figuration” of archaic sculpture addresses the margin between Being and Non-Being. Statuary human archetypes are not meant to be visible, but to exist as vehicles of transcendence to outlive the definition of human space-time. The awareness of individual finiteness seals the compulsion to “give body” to an invisible apparition shaping the figuration of an ontogenetic expression of human consciousness.
Subject and object, the term “humanum” fathoms the relationship between matter and its living dimension, “this de facto vision and the ‘there is’ which it contains.” The project reconsiders the dialectic between the terms vision–presence in the contemporary perception of archaic human statuary according to the transcendent meaning of its immaterial legacy. PMID:27014106
NASA Astrophysics Data System (ADS)
Kleinmann, Johanna; Wueller, Dietmar
2007-01-01
Since the signal-to-noise measuring method standardized in the normative part of ISO 15739:2002(E) [1] does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated which may be appropriate to quantify noise perception in a physiological manner: the model of visual noise measurement proposed by Hung et al. [2] (as described in the informative annex of ISO 15739:2002 [1]), which tries to simulate the process of human vision by using the opponent space and contrast sensitivity functions, and uses the CIE L*u*v* 1976 colour space for the determination of a so-called visual noise value; and the S-CIELab model with the CIEDE2000 colour difference proposed by Fairchild et al. [3], which simulates human vision in approximately the same way as Hung et al. [2] but then performs an image comparison based on CIEDE2000. With a psychophysical experiment based on the just noticeable difference (JND), threshold images could be defined with which the two approaches were tested. The assumption is that if a method is valid, the different threshold images should receive the same 'noise value'. The visual noise measurement model results in similar visual noise values for all the threshold images; the method is reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can be used on images with spatial content as well. The S-CIELab model also results in similar colour difference values for the set of threshold images, but with a limitation: for images that contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
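The opponent-space idea behind such visual noise measures can be illustrated with a toy computation: transform RGB into one achromatic and two chromatic channels, then combine per-channel noise statistics. The transform and the channel weights below are illustrative stand-ins, not the annex's calibrated pipeline.

```python
import numpy as np

def visual_noise(rgb, weights=(1.0, 0.7, 0.3)):
    """Toy visual-noise value in the spirit of opponent-space methods:
    opponent transform, then weighted per-channel noise statistics.
    The transform and weights are illustrative assumptions only."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ach = (r + g + b) / 3.0          # achromatic channel
    rg = r - g                       # red-green opponent channel
    by = b - (r + g) / 2.0           # blue-yellow opponent channel
    w_a, w_rg, w_by = weights
    return w_a * ach.std() + w_rg * rg.std() + w_by * by.std()
```

A uniform patch scores zero, and the score grows monotonically with noise amplitude, which is the behavior a JND-anchored noise value needs.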
Task-focused modeling in automated agriculture
NASA Astrophysics Data System (ADS)
Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack
1993-01-01
Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.
Avian vision and the evolution of egg color mimicry in the common cuckoo.
Stoddard, Mary Caswell; Stevens, Martin
2011-07-01
Coevolutionary arms races are a potent force in evolution, and brood parasite-host dynamics provide classical examples. Different host-races of the common cuckoo, Cuculus canorus, lay eggs in the nests of other species, leaving all parental care to hosts. Cuckoo eggs often (but not always) appear to match remarkably the color and pattern of host eggs, thus reducing detection by hosts. However, most studies of egg mimicry focus on human assessments or reflectance spectra, which fail to account for avian vision. Here, we use discrimination and tetrachromatic color space modeling of bird vision to quantify egg background and spot color mimicry in the common cuckoo and 11 of its principal hosts, and we relate this to egg rejection by different hosts. Egg background color and luminance are strongly mimicked by most cuckoo host-races, and mimicry is better when hosts show strong rejection. We introduce a novel measure of color mimicry, "color overlap," and show that cuckoo and host background colors increasingly overlap in avian color space as hosts exhibit stronger rejection. Finally, cuckoos with better background color mimicry also have better pattern mimicry. Our findings reveal new information about egg mimicry that would be impossible to derive by the human eye. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
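Tetrachromatic color space modeling of this kind places each stimulus, described by its four relative cone quantum catches, inside a regular tetrahedron; distances between points then index discriminability. A minimal sketch under a conventional vertex placement (an assumption for illustration, not the authors' exact formulation):

```python
import numpy as np

# Vertices of a regular tetrahedron (unit circumradius), one per cone type
# (u, s, m, l).  This placement is a convention chosen for illustration.
_VERTS = np.array([
    [ 1.0,  1.0,  1.0],
    [ 1.0, -1.0, -1.0],
    [-1.0,  1.0, -1.0],
    [-1.0, -1.0,  1.0],
]) / np.sqrt(3.0)

def tetra_point(catches):
    """Map relative cone quantum catches (u, s, m, l) to a point in
    tetrahedral color space via barycentric interpolation."""
    q = np.asarray(catches, dtype=float)
    q = q / q.sum()                  # normalize to relative catches
    return q @ _VERTS

def color_distance(c1, c2):
    """Euclidean distance between two stimuli in the tetrahedron."""
    return float(np.linalg.norm(tetra_point(c1) - tetra_point(c2)))
```

Equal catches land on the achromatic point at the centroid; overlap measures can then be built from the spread of egg-color point clouds in this space.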
NASA Technical Reports Server (NTRS)
1972-01-01
A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.
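The real-versus-virtual comparison step described above (differencing the two camera views with local Gaussians to get an error mask) can be illustrated with a small sketch. The blur implementation, sigma, and threshold here are illustrative assumptions, not the system's actual code.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur built from 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out

def error_mask(real, virtual, sigma=2.0, thresh=0.1):
    """Toy real-vs-virtual comparison: blur both views, difference them,
    and threshold into a binary mask marking regions to fixate next."""
    diff = np.abs(gaussian_blur(real, sigma) - gaussian_blur(virtual, sigma))
    return diff > thresh
```

Regions where the virtual world already matches the camera input fall below threshold, so only genuinely unexplained areas attract the next fixation.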
Human microbiome science: vision for the future, Bethesda, MD, July 24 to 26, 2013
2014-01-01
A conference entitled ‘Human microbiome science: Vision for the future’ was organized in Bethesda, MD from July 24 to 26, 2013. The event brought together experts in the field of human microbiome research and aimed at providing a comprehensive overview of the state of microbiome research, but more importantly to identify and discuss gaps, challenges and opportunities in this nascent field. This report summarizes the presentations but also describes what is needed for human microbiome research to move forward and deliver medical translational applications.
The lawful imprecision of human surface tilt estimation in natural scenes.
Kim, Seha; Burge, Johannes
2018-01-31
Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.
NASA Human Health and Performance Strategy
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.
2012-01-01
In May 2007, what was then the Space Life Sciences Directorate, issued the 2007 Space Life Sciences Strategy for Human Space Exploration. In January 2012, leadership and key directorate personnel were once again brought together to assess the current and expected future environment against its 2007 Strategy and the Agency and Johnson Space Center goals and strategies. The result was a refined vision and mission, and revised goals, objectives, and strategies. One of the first changes implemented was to rename the directorate from Space Life Sciences to Human Health and Performance to better reflect our vision and mission. The most significant change in the directorate from 2007 to the present is the integration of the Human Research Program and Crew Health and Safety activities. Subsequently, the Human Health and Performance Directorate underwent a reorganization to achieve enhanced integration of research and development with operations to better support human spaceflight and International Space Station utilization. These changes also enable a more effective and efficient approach to human system risk mitigation. Since 2007, we have also made significant advances in external collaboration and implementation of new business models within the directorate and the Agency, and through two newly established virtual centers, the NASA Human Health and Performance Center and the Center of Excellence for Collaborative Innovation. Our 2012 Strategy builds upon these successes to address the Agency's increased emphasis on societal relevance and being a leader in research and development and innovative business and communications practices. The 2012 Human Health and Performance Vision is to lead the world in human health and performance innovations for life in space and on Earth. Our mission is to enable optimization of human health and performance throughout all phases of spaceflight. All HHPD functions are ultimately aimed at achieving this mission.
Our activities enable mission success, optimizing human health and productivity in space before, during, and after the actual spaceflight experience of our crews, and include support for ground-based functions. Many of our spaceflight innovations also provide solutions for terrestrial challenges, thereby enhancing life on Earth. Our strategic goals are aimed at leading human exploration and ISS utilization, leading human health and performance internationally, excelling in management and advancement of innovations in health and human system integration, and expanding relevance to life on Earth and creating enduring support and enthusiasm for space exploration.
A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033
NASA Technical Reports Server (NTRS)
Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.
2012-01-01
The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for: 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas listed into a true long term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033 with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.
A conceptual model for vision rehabilitation
Roberts, Pamela S.; Rizzo, John-Ross; Hreha, Kimberly; Wertheimer, Jeffrey; Kaldenberg, Jennifer; Hironaka, Dawn; Riggs, Richard; Colenbrander, August
2017-01-01
Vision impairments are highly prevalent after acquired brain injury (ABI). Conceptual models that focus on constructing intellectual frameworks greatly facilitate comprehension and implementation of practice guidelines in an interprofessional setting. The purpose of this article is to provide a review of the vision literature in ABI, describe a conceptual model for vision rehabilitation, explain its potential clinical inferences, and discuss its translation into rehabilitation across multiple practice settings and disciplines. PMID:27997671
Visions for Space Exploration: ILS Issues and Approaches
NASA Technical Reports Server (NTRS)
Watson, Kevin
2005-01-01
This viewgraph presentation reviews some of the logistic issues that the Vision for Space Exploration will entail. There is a review of the vision and the timeline for the return to the moon that will lead to the first human exploration of Mars. The lessons learned from the International Space Station (ISS) and other such missions are also reviewed.
Modeling the convergence accommodation of stereo vision for binocular endoscopy.
Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin
2018-02-01
The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.
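The geometric core of convergence accommodation, rotating both optical axes so they intersect at the fixation target, can be sketched as below. The camera frame and parameterization are assumptions for illustration, not the paper's kinematic model.

```python
import math

def vergence_angles(baseline, target):
    """Yaw angles for each camera of a stereo pair so both optical axes
    intersect at a fixation target.  Cameras sit at (+-baseline/2, 0, 0)
    looking along +z; target = (x, y, z) in the same frame.  This is a
    geometric sketch of the idea only, not the paper's kinematic model."""
    x, y, z = target
    left = math.atan2(x + baseline / 2.0, z)    # positive = rotate toward +x
    right = math.atan2(x - baseline / 2.0, z)
    return left, right
```

For a midline target the two angles are equal and opposite, and they shrink toward zero (parallel axes) as the fixation target recedes, which is exactly the accommodation behavior the algorithm exploits.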
A wonderful laboratory and a great researcher
NASA Astrophysics Data System (ADS)
Sheikh, N. M.
2004-05-01
It was great to be associated with Prof. Dr. Karl Rawer. He devoted his life to making use of the wonderful laboratory of Nature, the Ionosphere. Through acquisition of the experimental data from AEROS satellites and embedding it with data from ground stations, it was possible to achieve a better empirical model, the International Reference Ionosphere. Prof. Dr. Karl Rawer has been as dynamic as the Ionosphere. His vision about the ionospheric data is exceptional and has helped the scientific and engineering community to make use of his vision in advancing the dimensions of empirical modelling. As a human being, Prof. Dr. Karl Rawer has all the traits of an angel from Heaven. In short, he developed a large team of researchers forming a blooming tree from the parent node. The Ionosphere still plays an important role in over-the-horizon HF radar and GPS satellite data reduction.
Active imaging system performance model for target acquisition
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.
2007-04-01
The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.
Extracting heading and temporal range from optic flow: Human performance issues
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Perrone, John A.; Stone, Leland; Banks, Martin S.; Crowell, James A.
1993-01-01
Pilots are able to extract information about their vehicle motion and environmental structure from dynamic transformations in the out-the-window scene. In this presentation, we focus on the information in the optic flow which specifies vehicle heading and distance to objects in the environment, scaled to a temporal metric. In particular, we are concerned with modeling how the human operators extract the necessary information, and what factors impact their ability to utilize the critical information. In general, the psychophysical data suggest that the human visual system is fairly robust to degradations in the visual display, e.g., reduced contrast and resolution or restricted field of view. However, extraneous motion flow, i.e., introduced by sensor rotation, greatly compromises human performance. The implications of these models and data for enhanced/synthetic vision systems are discussed.
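For the purely translational case, heading is specified by the focus of expansion (FOE) of the flow field, which can be recovered by least squares from the constraint that each flow vector is collinear with the line from the FOE to its image point. A textbook sketch of that estimator, not the specific model discussed here:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares FOE estimate for purely translational optic flow:
    each flow vector should be collinear with the line from the FOE to
    its image point.  A textbook sketch, not the presenters' model."""
    p = np.asarray(points, dtype=float)   # (N, 2) image positions
    d = np.asarray(flows, dtype=float)    # (N, 2) flow vectors
    # Collinearity: d_y*(p_x - e_x) - d_x*(p_y - e_y) = 0 for FOE e
    A = np.column_stack([d[:, 1], -d[:, 0]])
    b = d[:, 1] * p[:, 0] - d[:, 0] * p[:, 1]
    e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return e                              # (e_x, e_y) in image coordinates
```

With noiseless radial flow the estimate is exact; sensor rotation adds a flow component that violates the collinearity constraint, which is one way to see why rotation degrades heading judgments.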
NASA Astrophysics Data System (ADS)
Lauinger, N.
2007-09-01
A better understanding of the color constancy mechanism in human color vision [7] can be reached through analyses of photometric data of all illuminants and patches (Mondrians or other visible objects) involved in visual experiments. In Part I [3] and in [4, 5 and 6] the integration in the human eye of the geometrical-optical imaging hardware and the diffractive-optical hardware has been described and illustrated (Fig.1). This combined hardware represents the main topic of the NAMIROS research project (nano- and micro- 3D gratings for optical sensors) [8] promoted and coordinated by Corrsys 3D Sensors AG. The hardware relevant to (photopic) human color vision can be described as a diffractive or interference-optical correlator transforming incident light into diffractive-optical RGB data and relating local RGB onto global RGB data in the near-field behind the 'inverted' human retina. The relative differences at local/global RGB interference-optical contrasts are available to photoreceptors (cones and rods) only after this optical pre-processing.
Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-01-01
This book presents the papers given at a conference on artificial intelligence, expert systems and knowledge bases. Topics considered at the conference included automating expert system development, modeling expert systems, causal maps, data covariances, robot vision, image processing, multiprocessors, parallel processing, VLSI structures, man-machine systems, human factors engineering, cognitive decision analysis, natural language, computerized control systems, and cybernetics.
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
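The size-segregated layering described in this abstract can be sketched in a few lines. The toy decomposition below uses a square lattice and two oriented difference kernels rather than the hexagonal seven-kernel HOP transform, so it illustrates only the pyramid idea (oriented high-pass layers plus a low-pass blob layer), not the actual model:

```python
import numpy as np

def pyramid_decompose(img, levels=3):
    """Toy size-segregated pyramid: at each level, compute oriented
    high-pass responses, then average-and-downsample to form the
    low-pass band fed to the next (coarser) level. This is a
    square-lattice simplification of the hexagonal HOP scheme."""
    layers = []
    low = img.astype(float)
    for _ in range(levels):
        eh = low[:, 1:] - low[:, :-1]   # horizontal-edge response
        ev = low[1:, :] - low[:-1, :]   # vertical-edge response
        layers.append((eh, ev))
        # 2x2 block average, then downsample: the low-pass "blob" band
        h, w = low.shape
        low = low[: h // 2 * 2, : w // 2 * 2]
        low = low.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return layers, low

img = np.arange(64, dtype=float).reshape(8, 8)
layers, residual = pyramid_decompose(img, levels=2)
print(len(layers), residual.shape)  # 2 (2, 2)
```

Note how each successive layer holds fewer coefficients, matching the abstract's point that fewer features are devoted to large scales.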
Long-term safety of human retinal progenitor cell transplantation in retinitis pigmentosa patients.
Liu, Yong; Chen, Shao Jun; Li, Shi Ying; Qu, Ling Hui; Meng, Xiao Hong; Wang, Yi; Xu, Hai Wei; Liang, Zhi Qing; Yin, Zheng Qin
2017-09-29
Retinitis pigmentosa is a common genetic disease that causes retinal degeneration and blindness, and no curative treatment is currently available. Vision preservation has been observed in retinitis pigmentosa animal models after retinal stem cell transplantation. However, long-term safety studies and visual assessment have not been thoroughly tested in retinitis pigmentosa patients. In our pre-clinical study, purified human fetal-derived retinal progenitor cells (RPCs) were transplanted into the diseased retina of Royal College of Surgeons (RCS) rats, a model of retinal degeneration. Based on these results, we conducted a phase I clinical trial to establish the safety and tolerability of RPC transplantation in eight patients with advanced retinitis pigmentosa. Patients were followed for 24 months. After RPC transplantation in RCS rats, we observed moderate recovery of vision and maintenance of the outer nuclear layer thickness. Most importantly, we did not find tumor formation or immune rejection. In the retinitis pigmentosa patients given RPC injections, we likewise observed no immunological rejection or tumorigenesis, even though immunosuppressive agents were not administered. We observed a significant improvement in visual acuity (P < 0.05) in five patients and an increase in retinal sensitivity of pupillary responses in three of the eight patients between 2 and 6 months after the transplant, but this improvement was no longer present at 12 months. Our study confirmed for the first time the long-term safety and feasibility of vision repair by stem cell therapy in patients blinded by retinitis pigmentosa. WHO Trial Registration: ChiCTR-TNRC-08000193. Retrospectively registered on 5 December 2008.
Individual vision and peak distribution in collective actions
NASA Astrophysics Data System (ADS)
Lu, Peng
2017-06-01
People decide whether to take part in collective actions as participants or to stand aside as free riders, and they do so with heterogeneous visions. Besides utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e., the revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The simulation outcomes indicate that vision heterogeneity reduces the values of peaks, while the relative variance of peaks remains stable. Under normal distributions of vision heterogeneity and other factors, the peaks of participants are normally distributed as well. It is therefore possible to predict the distribution traits of peaks from the distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with both mean and standard deviation parameters, which provides confidence intervals and robust predictions of peaks. Finally, we validate the peak model via the Yuyuan Incident, a real case in China (2014); the model works well in explaining the dynamics and predicting the peak of the real case.
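The claim that normally distributed heterogeneity yields a roughly normal peak distribution can be checked with a toy Monte Carlo. The threshold decision rule below is a simplified stand-in for the revised peak model, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def run_once(n=1000, vision_sd=1.0):
    """One simulated collective action: each potential participant joins
    if utility - cost + vision effect > 0 (a simplified decision rule,
    not the paper's exact model). Returns the peak participant count."""
    utility = rng.normal(1.0, 1.0, n)
    cost = rng.normal(0.5, 1.0, n)
    vision = rng.normal(0.0, vision_sd, n)
    return int(np.sum(utility - cost + vision > 0))

# Repeat many actions: the peaks themselves form a distribution whose
# mean and standard deviation give confidence intervals for prediction.
peaks = np.array([run_once() for _ in range(500)])
print(peaks.mean(), peaks.std())
```

Raising `vision_sd` widens the decision noise and lowers the expected peak, mirroring the abstract's finding that vision heterogeneity reduces peak values.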
Visual brain plasticity induced by central and peripheral visual field loss.
Sanda, Nicolae; Cerliani, Leonardo; Authié, Colas N; Sabbah, Norman; Sahel, José-Alain; Habas, Christophe; Safran, Avinoam B; Thiebaut de Schotten, Michel
2018-06-23
Disorders that specifically affect central and peripheral vision constitute invaluable models to study how the human brain adapts to visual deafferentation. We explored cortical changes after the loss of central or peripheral vision. Cortical thickness (CoTks) and resting-state cortical entropy (rs-CoEn), a surrogate for neural and synaptic complexity, were extracted in 12 subjects with Stargardt macular dystrophy, 12 with retinitis pigmentosa (tunnel vision stage), and 14 normally sighted controls. When compared to controls, both groups with visual loss exhibited decreased CoTks in dorsal area V3d. Peripheral visual field loss also showed a specific CoTks decrease in early visual cortex and ventral area V4, while central visual field loss showed one in dorsal area V3A. Only central visual field loss exhibited increased rs-CoEn, in areas LO-2 and FG1. The current results reveal biomarkers of brain plasticity within the dorsal and ventral visual streams following central and peripheral visual field defects.
Probabilistic Modeling and Visualization of the Flexibility in Morphable Models
NASA Astrophysics Data System (ADS)
Lüthi, M.; Albrecht, T.; Vetter, T.
Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little of the information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
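The reconstruction-from-partial-information step can be sketched with plain PCA algebra. Everything below (data sizes, the synthetic "shapes", the least-squares latent estimate) is an illustrative simplification of the Probabilistic-PCA machinery described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a morphable model: 200 samples of 6-D "shapes"
# generated from a 2-D latent space (sizes are illustrative).
n, d, k = 200, 6, 2
latent = rng.normal(size=(n, k))
W_true = rng.normal(size=(d, k))
X = latent @ W_true.T + 0.05 * rng.normal(size=(n, d))

# Build the statistical shape model: mean + k principal components.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:k].T * (S[:k] / np.sqrt(n))   # d x k component matrix

# Fix part of one shape (its first 3 coordinates), infer the latent
# code by least squares on the observed rows, and predict the rest.
obs = np.array([0, 1, 2])
x_obs = X[0, obs]
z, *_ = np.linalg.lstsq(W[obs], x_obs - mu[obs], rcond=None)
x_full = mu + W @ z                    # reconstruction of the full shape

print(np.abs(x_full[obs] - x_obs).max())  # observed part is matched closely
```

In the full probabilistic treatment, the posterior covariance of the latent code (not computed here) is what quantifies the "remaining flexibility" of the unobserved part.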
Binocular combination in abnormal binocular vision
Ding, Jian; Klein, Stanley A.; Levi, Dennis M.
2013-01-01
We investigated suprathreshold binocular combination in humans with abnormal binocular visual experience early in life. In the first experiment we presented the two eyes with equal but opposite phase shifted sine waves and measured the perceived phase of the cyclopean sine wave. Normal observers have balanced vision between the two eyes when the two eyes' images have equal contrast (i.e., both eyes contribute equally to the perceived image and perceived phase = 0°). However, in observers with strabismus and/or amblyopia, balanced vision requires a higher contrast image in the nondominant eye (NDE) than the dominant eye (DE). This asymmetry between the two eyes is larger than predicted from the contrast sensitivities or monocular perceived contrast of the two eyes and is dependent on contrast and spatial frequency: more asymmetric with higher contrast and/or spatial frequency. Our results also revealed a surprising NDE-to-DE enhancement in some of our abnormal observers. This enhancement is not evident in normal vision because it is normally masked by interocular suppression. However, in these abnormal observers the NDE-to-DE suppression was weak or absent. In the second experiment, we used the identical stimuli to measure the perceived contrast of a cyclopean grating by matching the binocular combined contrast to a standard contrast presented to the DE. These measures provide strong constraints for model fitting. We found asymmetric interocular interactions in binocular contrast perception, which was dependent on both contrast and spatial frequency in the same way as in phase perception. By introducing asymmetric parameters to the modified Ding-Sperling model including interocular contrast gain enhancement, we succeeded in accounting for both binocular combined phase and contrast simultaneously. 
Adding binocular contrast gain control to the modified Ding-Sperling model enabled us to predict the results of dichoptic and binocular contrast discrimination experiments and provides new insights into the mechanisms of abnormal binocular vision. PMID:23397039
A vision-based fall detection algorithm of human in indoor environment
NASA Astrophysics Data System (ADS)
Liu, Hao; Guo, Yongcai
2017-02-01
Elderly care is becoming more and more prominent in China, as the population is aging fast and the aging population is large. Falls, one of the biggest challenges in elderly guardianship systems, have a serious impact on both the physical and mental health of the aged. Based on feature descriptors such as the aspect ratio of the human silhouette, the velocity of the mass center, the moving distance of the head, and the angle of the final posture, a novel vision-based fall detection method is proposed in this paper. A fast median method of background modeling with three frames is also suggested. Compared with the conventional bounding-box and ellipse methods, the novel fall detection technique is applicable not only to recognizing falls that end in lying down but also to detecting falls that end in kneeling down or sitting down. In addition, numerous experimental results showed that the method achieves good recognition accuracy without adding time cost.
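Two ingredients of such a method, the three-frame median background model and silhouette features such as bounding-box aspect ratio, can be sketched as follows. The shapes and values are illustrative, not the paper's:

```python
import numpy as np

def median_background(f0, f1, f2):
    """Fast three-frame median background model: the per-pixel median of
    three consecutive frames suppresses a moving object that covers any
    given pixel in at most one of the frames."""
    return np.median(np.stack([f0, f1, f2]), axis=0)

def fall_features(mask):
    """Two of the descriptors named in the abstract, computed from a
    binary silhouette: bounding-box aspect ratio (height/width) and the
    mass-center row (whose velocity would be tracked across frames)."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return h / w, ys.mean()

# A standing silhouette is tall and narrow; after a fall it is wide and flat.
upright = np.zeros((40, 40), dtype=bool); upright[5:35, 18:22] = True
lying = np.zeros((40, 40), dtype=bool); lying[30:34, 5:35] = True
print(fall_features(upright)[0], fall_features(lying)[0])  # 7.5 vs ~0.13
```

A detector would threshold the aspect ratio together with the mass-center velocity, which is what lets it separate falls ending in kneeling or sitting from ordinary posture changes.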
Zebrafish—on the move towards ophthalmological research
Chhetri, J; Jacobson, G; Gueven, N
2014-01-01
Millions of people are affected by visual impairment and blindness globally, and the prevalence of vision loss is likely to increase as we are living longer. However, many ocular diseases remain poorly controlled due to lack of proper understanding of the pathogenesis and the corresponding lack of effective therapies. Consequently, there is a major need for animal models that closely mirror the human eye pathology and at the same time allow higher-throughput drug screening approaches. In this context, zebrafish as an animal model organism not only address these needs but can in many respects reflect the human situation better than the current rodent models. Over the past decade, zebrafish have become an established model to study a variety of human diseases and are more recently becoming a valuable tool for the study of human ophthalmological disorders. Many human ocular diseases such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration have already been modelled in zebrafish. In addition, zebrafish have become an attractive model for pre-clinical drug toxicity testing and are now increasingly used by scientists worldwide for the discovery of novel treatment approaches. This review presents the advantages and uses of zebrafish for ophthalmological research. PMID:24503724
Moving vehicles segmentation based on Gaussian motion model
NASA Astrophysics Data System (ADS)
Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.
2005-07-01
Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyze the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. First, we propose an adaptive background update method in which the background is updated according to changes in illumination conditions and can thus adapt sensitively to them. Second, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of moving pixels are modeled as a Gaussian and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model detects moving vehicles correctly and is immune to interference from moving objects such as waving trees and camera vibration.
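The Gaussian motion model with an on-line update can be sketched as below. A fixed learning rate replaces the on-line EM step, and the outlier test is a simple Mahalanobis distance; both are simplifying assumptions made here for illustration:

```python
import numpy as np

class OnlineGaussian:
    """Running Gaussian over 2-D motion vectors, updated with a fixed
    learning rate. Vectors far from the model (large Mahalanobis
    distance) would be flagged as non-vehicle motion such as waving
    trees, per the scheme the abstract describes."""

    def __init__(self, lr=0.05):
        self.lr, self.mean, self.var = lr, np.zeros(2), np.ones(2)

    def update(self, v):
        self.mean = (1 - self.lr) * self.mean + self.lr * v
        self.var = (1 - self.lr) * self.var + self.lr * (v - self.mean) ** 2

    def mahalanobis(self, v):
        return float(np.sqrt(np.sum((v - self.mean) ** 2 / self.var)))

g = OnlineGaussian()
for _ in range(200):
    g.update(np.array([5.0, 0.0]))        # dominant vehicle motion
# Vehicle-like motion scores far lower than an off-model vector.
print(g.mahalanobis(np.array([5.0, 0.0])) < g.mahalanobis(np.array([0.0, 3.0])))
```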
Vision, Leadership, and Change: The Case of Ramah Summer Camps
ERIC Educational Resources Information Center
Reimer, Joseph
2010-01-01
In his retrospective essay, Seymour Fox (1997) identified "vision" as the essential element that shaped the Ramah camp system. I will take a critical look at Fox's main claims: (1) A particular model of vision was essential to the development of Camp Ramah; and (2) That model of vision should guide contemporary Jewish educators in creating Jewish…
The Elementary Operations of Human Vision Are Not Reducible to Template-Matching
Neri, Peter
2015-01-01
It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the question of applicability may be addressed by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. 
They also suggest that compliance with linear transduction may be the targeted outcome of carefully crafted nonlinear circuits, rather than default behaviour exhibited by basic components. PMID:26556758
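The linear-nonlinear (LN) cascade tested in this study can be written in two lines: a template projection followed by a static nonlinearity. The kernel and rectifying nonlinearity below are generic illustrations, not the fitted models from the study:

```python
import numpy as np

def ln_response(stimulus, kernel, threshold=0.0):
    """Linear-nonlinear cascade: project the stimulus onto a template
    (linear stage), then apply a static nonlinearity (here half-wave
    rectification). This is the template-matching model the abstract
    argues cannot, on its own, account for human vision."""
    drive = float(np.dot(stimulus, kernel))   # linear filtering stage
    return max(drive - threshold, 0.0)        # static nonlinearity

kernel = np.array([1.0, -1.0, 1.0, -1.0])     # toy oriented template
matched = np.array([1.0, -1.0, 1.0, -1.0])    # stimulus matching the template
orthogonal = np.array([1.0, 1.0, 1.0, 1.0])   # stimulus orthogonal to it
print(ln_response(matched, kernel), ln_response(orthogonal, kernel))  # 4.0 0.0
```

The paper's point is that a small gain-control circuit wrapped around this cascade (e.g. dividing `drive` by pooled stimulus energy) takes the model outside the LN family, yet is required to fit the full data set.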
IEEE 1982. Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-01-01
The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.
To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind.
Grifantini, Kristina
2017-01-01
Humans have been using technology to improve their vision for many decades. Eyeglasses, contact lenses, and, more recently, laser-based surgeries are commonly employed to remedy vision problems, both minor and major. But options are far fewer for those who have not seen since birth or who have reached stages of blindness in later life.
Vision, healing brush, and fiber bundles
NASA Astrophysics Data System (ADS)
Georgiev, Todor
2005-03-01
The Healing Brush is a tool introduced for the first time in Adobe Photoshop (2002) that removes defects in images by seamless cloning (gradient domain fusion). The Healing Brush algorithms are built on a new mathematical approach that uses Fibre Bundles and Connections to model the representation of images in the visual system. Our mathematical results are derived from first principles of human vision, related to adaptation transforms of von Kries type and Retinex theory. In this paper we present the new result of Healing in arbitrary color space. In addition to supporting image repair and seamless cloning, our approach also produces the exact solution to the problem of high dynamic range compression [17] and can be applied to other image processing algorithms.
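The gradient-domain fusion idea behind seamless cloning can be illustrated in one dimension: copy the source *gradients* and solve for values that match the destination at both boundaries, so absolute offsets in the source disappear. The linear-ramp correction below is a rough approximation of the Poisson solve, for illustration only:

```python
import numpy as np

def heal_1d(dest, src, lo, hi):
    """1-D sketch of gradient-domain cloning: paste the gradients of
    src into dest[lo:hi], then add a linear ramp so both boundary
    values agree with dest and no seam remains."""
    m = hi - lo
    # integrate source gradients starting from the left boundary value
    vals = dest[lo - 1] + (src[lo:hi] - src[lo - 1])
    # spread the right-boundary mismatch linearly across the region
    mismatch = dest[hi] - (src[hi] - src[hi - 1]) - vals[-1]
    vals = vals + mismatch * np.arange(1, m + 1) / m
    out = dest.astype(float).copy()
    out[lo:hi] = vals
    return out

dest = np.zeros(10)
src = 5.0 + np.linspace(0, 1, 10)   # bright source with the same gradients
out = heal_1d(dest, src, 3, 7)
print(np.round(out, 2))             # the 5.0 offset is gone; only gradients survive
```

This is why a cloned patch takes on the surrounding brightness and color: only the source's gradient structure is kept, while its absolute levels are re-solved from the destination boundary.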
Frontiers in Human Information Processing Conference
2008-02-25
Frontiers in Human Information Processing - Vision, Attention, Memory, and Applications: A Tribute to George Sperling, a Festschrift. We are grateful...with focus on the formal, computational, and mathematical approaches that unify the areas of vision, attention, and memory. The conference also...Information Processing Conference Final Report. AFOSR Grant # FA9550-07-1-0346 provided partial support for the Conference.
Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai
2016-04-01
We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required by image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, the shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed; a polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two parts, i.e., the initial vision model and the residual compensation model. We derive the initial vision model from the direct mapping relationship between object and image points. The residual compensation model is derived from the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates. However, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision.
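The polynomial-fitting step for distortion correction can be sketched with a synthetic calibration curve. The cubic distortion law below is an assumption made for illustration, not the measured distortion of the stereo light microscope:

```python
import numpy as np

# From calibration we know both where grid points appear (distorted)
# and where they should be (ideal), so a fitted polynomial maps the
# observed positions back to the ideal ones.
observed = np.linspace(0.1, 4.6, 25)        # apparent radial positions (mm)
ideal = observed - 0.005 * observed**3      # synthetic cubic distortion law

coeffs = np.polyfit(observed, ideal, deg=3)  # fit the correction polynomial
corrected = np.polyval(coeffs, observed)     # apply it to undistort

print(np.max(np.abs(corrected - ideal)))     # residual near machine precision
```

In practice the residual compensation model described in the abstract plays an analogous role: whatever the initial model fails to capture is fitted and subtracted as a second correction stage.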
Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Nixon-Shapiro, Elizabeth
2016-11-01
The role of the physiological processes involved in human vision escapes clarification in the current literature. Many questions about vision remain unanswered, including: 1) whether there is more to lateral inhibition than previously proposed; 2) the role of the discs in rods and cones; 3) how inverted images on the retina are converted to erect images for visual perception; 4) what portion of the image formed on the retina is actually processed in the brain; 5) the reason we have an after-image with antagonistic colors; and 6) how we remember space. This theoretical article attempts to clarify some of the physiological processes involved in human vision. The global integration of visual information is conceptual; therefore, we include illustrations to present our theory. Universally, the eyeball is 2.4 cm and works together with membrane potential, correspondingly representing the retinal layers, photoreceptors, and cortex. Images formed within the photoreceptors must first be converted into chemical signals on the photoreceptors' individual discs, and the signals at each disc are transduced from light photons into electrical signals. We contend that the discs code the electrical signals into accurate distances, as shown in our figures. The pre-existing oscillations among the various cortices, including the striate and parietal cortex, and the retina work in unison to create an infrastructure of visual space that functionally "places" the objects within this "neural" space. The horizontal layers integrate all discs accurately to create a retina that is pre-coded for distance. Our theory suggests that image inversion never takes place on the retina; rather, images fall onto the retina compressed and coiled, and are then amplified through lateral inhibition, with intensification and amplification on the OFF-center cones. The intensified and amplified images are decompressed and expanded in the brain, becoming the images we perceive as external vision. 
This is a theoretical article presenting a novel hypothesis about the physiological processes in vision, and it expounds upon the visual aspect of two of our previously published articles, "A unified 3D default space consciousness model combining neurological and physiological processes that underlie conscious experience" and "Functional representation of vision within the mind: A visual consciousness model based in 3D default space." Currently, neuroscience teaches that visual images are initially inverted on the retina, processed in the brain, and then conscious perception of vision happens in the visual cortex. Here, we propose that inversion of visual images never takes place because images enter the retina as coiled and compressed graded potentials that are intensified and amplified in OFF-center photoreceptors. Once they reach the brain, they are decompressed and expanded to the original size of the image, which is perceived by the brain as the external image. We adduce that pre-existing oscillations (alpha, beta, and gamma) among the various cortices in the brain (including the striate and parietal cortex) and the retina work together in unison to create an infrastructure of visual space that functionally "places" the objects within a "neural" space. These fast oscillations "bring" the faculties of the cortical activity to the retina, creating the infrastructure of the space within the eye where visual information can be immediately recognized by the brain. By this we mean that the visual (striate) cortex synchronizes the information with the photoreceptors in the retina, and the brain instantaneously receives the already processed visual image, thereby relieving the eye of the need to send the information to the brain to be interpreted before it can rise to consciousness. The visual system is a heavily studied area of neuroscience, yet very little is known about how vision occurs. 
We believe that our novel hypothesis provides new insights into how vision becomes part of consciousness, helps to reconcile various previously proposed models, and further elucidates current questions in vision based on our unified 3D default space model. Illustrations are provided to aid in explaining our theory.
Garcia-Lopez, Pablo
2012-01-01
Neuroculture, conceived as the reciprocal interaction between neuroscience and different areas of human knowledge is influencing our lives under the prism of the latest neuroscientific discoveries. Simultaneously, neuroculture can create new models of thinking that can significantly impact neuroscientists' daily practice. Especially interesting is the interaction that takes place between neuroscience and the arts. This interaction takes place at different, infinite levels and contexts. I contextualize my work inside this neurocultural framework. Through my artwork, I try to give a more natural vision of the human brain, which could help to develop a more humanistic culture. PMID:22363275
Recognizing 3D Objects from 2D Images Using a Structural Knowledge Base of Generic Views
1988-08-31
Drone-Augmented Human Vision: Exocentric Control for Drones Exploring Hidden Areas.
Erat, Okan; Isop, Werner Alexander; Kalkofen, Denis; Schmalstieg, Dieter
2018-04-01
Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., of exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision to provide a natural view into an otherwise occluded environment. The user's view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. This user interface is designed to reduce the cognitive load of the drone's flight control. The user can concentrate on the exploration of the inaccessible space, while flight control is largely delegated to the drone's autopilot system. We assess our system with a first experiment showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.
Safe traffic : Vision Zero on the move
DOT National Transportation Integrated Search
2006-03-01
Vision Zero is composed of several basic elements, each of which affects safety in road traffic. These concern ethics, human capability and tolerance, responsibility, scientific facts and a realisation that the different components in the ...
Mantini, Dante; Hasson, Uri; Betti, Viviana; Perrucci, Mauro G.; Romani, Gian Luca; Corbetta, Maurizio; Orban, Guy A.; Vanduffel, Wim
2012-01-01
Evolution-driven functional changes in the primate brain are typically assessed by aligning monkey and human activation maps using cortical surface expansion models. These models use putative homologous areas as registration landmarks, assuming they are functionally correspondent. In cases where functional changes have occurred in an area, this assumption makes it impossible to reveal whether other areas may have assumed the lost functions. Here we describe a method to examine functional correspondences across species. Without making spatial assumptions, we assess similarities in sensory-driven functional magnetic resonance imaging responses between monkey (Macaca mulatta) and human brain areas by means of temporal correlation. Using natural vision data, we reveal regions for which functional processing has shifted to topologically divergent locations during evolution. We conclude that substantial evolution-driven functional reorganizations have occurred, not always consistent with cortical expansion processes. This novel framework for evaluating changes in functional architecture is crucial to building more accurate evolutionary models. PMID:22306809
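The temporal-correlation comparison can be sketched with synthetic time courses: correlate one monkey area's response against candidate human areas recorded under the same movie and take the best match as the functional correspondent. All names and signals below are placeholders, not real fMRI data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared "movie-driven" signal plus area-specific noise.
t = np.linspace(0, 60, 240)
stimulus = np.sin(0.5 * t) + 0.3 * np.sin(1.7 * t)

monkey_area = stimulus + 0.2 * rng.normal(size=t.size)
human_areas = {
    "human_A": stimulus + 0.2 * rng.normal(size=t.size),  # true correspondent
    "human_B": rng.normal(size=t.size),                    # unrelated area
}

# Temporal correlation of the monkey response with each human candidate;
# the best-matching area is taken as functionally correspondent.
corr = {name: float(np.corrcoef(monkey_area, ts)[0, 1])
        for name, ts in human_areas.items()}
best = max(corr, key=corr.get)
print(best)  # human_A
```

Because no spatial registration is involved, this kind of match can land on a topologically distant area, which is exactly the reorganization signature the study looks for.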
Sarnoff JND Vision Model for Flat-Panel Design
NASA Technical Reports Server (NTRS)
Brill, Michael H.; Lubin, Jeffrey
1998-01-01
This document describes an adaptation of the basic Sarnoff JND Vision Model created in response to the NASA/ARPA need for a general-purpose model to predict the perceived image quality attained by flat-panel displays. The JND model predicts the perceptual ratings that humans will assign to a degraded color-image sequence relative to its nondegraded counterpart. Substantial flexibility is incorporated into this version of the model so that it may be used to model displays at the sub-pixel and sub-frame level. To model a display (e.g., an LCD), the input-image data can be sampled at many times the pixel resolution and at many times the digital frame rate. The first stage of the model downsamples each sequence in time and in space to physiologically reasonable rates, but with minimal interpolative artifacts and aliasing. The luma and chroma parts of the model generate (through a multi-resolution pyramid representation) a map of differences between test and reference, called the JND map, from which a summary rating predictor is derived. The latest model extensions have done well in calibration against psychophysical data and against image-rating data given a CRT-based front end. The software was delivered to NASA Ames and is being integrated with LCD display models at that facility.
Representation of visual gravitational motion in the human vestibular cortex.
Indovina, Iole; Maffei, Vincenzo; Bosco, Gianfranco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco
2005-04-15
How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to accelerations, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with natural gravity. The acceleration of visual targets was manipulated while brain activity was measured using functional magnetic resonance imaging. In agreement with the internal model hypothesis, we found that the vestibular network was selectively engaged when acceleration was consistent with natural gravity. These findings demonstrate that predictive mechanisms of physical laws of motion are represented in the human brain.
Advances in color science: from retina to behavior
Chatterjee, Soumya; Field, Greg D.; Horwitz, Gregory D.; Johnson, Elizabeth N.; Koida, Kowa; Mancuso, Katherine
2010-01-01
Color has become a premier model system for understanding how information is processed by neural circuits, and for investigating the relationships among genes, neural circuits and perception. Both the physical stimulus for color and the perceptual output experienced as color are quite well characterized, but the neural mechanisms that underlie the transformation from stimulus to perception are incompletely understood. The past several years have seen important scientific and technical advances that are changing our understanding of these mechanisms. Here, and in the accompanying minisymposium, we review the latest findings and hypotheses regarding color computations in the retina, primary visual cortex and higher-order visual areas, focusing on non-human primates, a model of human color vision. PMID:21068298
NASA Astrophysics Data System (ADS)
As'ari, M. A.; Sheikh, U. U.
2012-04-01
The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia to perform activities of daily living (ADLs) promises to reduce care costs, especially the costs of training and hiring human caregivers. The main problem, however, is that the sensing agents used in such systems vary widely and depend on the intent (the type of ADL) and the environment where the activity is performed. In this paper we present an overview of the potential of computer-vision-based sensing agents in assistive systems and of how they can be generalized to be invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision-based human action recognition methods and the design of such systems, owing to the cognitive and physical impairments of people with dementia.
A Report on Army Science Planning and Strategy 2016
2017-06-01
Army Research Laboratory (ARL) hosted a series of meetings in fall 2016 to develop a strategic vision for Army Science. Meeting topics were vetted... reduce maturation time. • Support internal Army research efforts to enhance Army investments in multiscale modeling to accelerate the rate of... requirement are research needs including cross-modal approaches to enabling real-time human comprehension under constraints of bandwidth, information
Proactive learning for artificial cognitive systems
NASA Astrophysics Data System (ADS)
Lee, Soo-Young
2010-04-01
The Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference, and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning (PL) and Self-Identity (SI). The PL model provides bilateral interactions between the robot and an unknown environment (people, other robots, cyberspace). Situation awareness in an unknown environment requires receiving audiovisual signals and accumulating knowledge. If the knowledge is not enough, the PL model should improve it by itself through the internet and other sources. Human-oriented decision making also requires the robot to have self-identity and emotion. Finally, the developed models and system will be mounted on a robot for the human-robot co-existing society. The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans with several levels of cognitive ability.
Animal models of age related macular degeneration
Pennesi, Mark E.; Neuringer, Martha; Courtney, Robert J.
2013-01-01
Age related macular degeneration (AMD) is the leading cause of vision loss of those over the age of 65 in the industrialized world. The prevalence of AMD and the need to develop effective treatments have led to the development of multiple animal models. AMD is a complex and heterogeneous disease that involves the interaction of both genetic and environmental factors with the unique anatomy of the human macula. Models in mice, rats, rabbits, pigs and non-human primates have recreated many of the histological features of AMD and provided much insight into the underlying pathological mechanisms of this disease. In spite of the large number of models developed, no one model yet recapitulates all of the features of human AMD. However, these models have helped reveal the roles of chronic oxidative damage, inflammation and immune dysregulation, and lipid metabolism in the development of AMD. Models for induced choroidal neovascularization have served as the backbone for testing new therapies. This article will review the diversity of animal models that exist for AMD as well as their strengths and limitations. PMID:22705444
NASA Technical Reports Server (NTRS)
Prinzel, L.J.; Kramer, L.J.
2009-01-01
A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.
Establishing a shared vision in your organization. Winning strategies to empower your team members.
Rinke, W J
1989-01-01
Today's health-care climate demands that you manage your human resources more effectively. Meeting the dual challenges of providing more with less requires that you tap the vast hidden resources that reside in every one of your team members. Harnessing these untapped energies requires that all of your employees clearly understand the purpose, direction, and the desired future state of your laboratory. Once this image is widely shared, your team members will know their roles in the organization and the contributions they can make to attaining the organization's vision. This shared vision empowers people and enhances their self-esteem as they recognize they are accomplishing a worthy goal. You can create and install a shared vision in your laboratory by adhering to a five-step process. The result will be a unity of purpose that will release the untapped human resources in your organization so that you can do more with less.
B. F. Skinner's Utopian Vision: Behind and Beyond Walden Two
Altus, Deborah E; Morris, Edward K
2009-01-01
This paper addresses B. F. Skinner's utopian vision for enhancing social justice and human well-being in his 1948 novel, Walden Two. In the first part, we situate the book in its historical, intellectual, and social context of the utopian genre, address critiques of the book's premises and practices, and discuss the fate of intentional communities patterned on the book. The central point here is that Skinner's utopian vision was not any of Walden Two's practices, except one: the use of empirical methods to search for and discover practices that worked. In the second part, we describe practices in Skinner's book that advance social justice and human well-being under the themes of health, wealth, and wisdom, and then show how the subsequent literature in applied behavior analysis supports Skinner's prescience. Applied behavior analysis is a measure of the success of Skinner's utopian vision: to experiment. PMID:22478531
Intelligent Vision On The SM9O Mini-Computer Basis And Applications
NASA Astrophysics Data System (ADS)
Hawryszkiw, J.
1985-02-01
A distinction has to be made between image processing and vision. Image processing finds its roots in the strong tradition of linear signal processing and promotes geometrical transform techniques such as filtering, compression, and restoration. Its purpose is to transform an image so that a human observer can easily extract the information significant to him: for example, edges after a gradient operator, or a specific direction after a directional filtering operation. Image processing consists, in fact, of a set of local or global space-time transforms; the interpretation of the final image is done by the human observer. The purpose of vision, by contrast, is to extract the semantic content of the image. The machine can then understand that content and run a process of decision, which turns into an action. Thus, intelligent vision depends on: image processing, pattern recognition, and artificial intelligence.
Contact Lenses for Color Blindness.
Badawy, Abdel-Rahman; Hassan, Muhammad Umair; Elsherif, Mohamed; Ahmed, Zubair; Yetisen, Ali K; Butt, Haider
2018-06-01
Color vision deficiency (color blindness) is an inherited genetic ocular disorder. While no cure currently exists, several methods can be used to increase the color perception of those affected. One such method is the use of color-filtering glasses based on Bragg filters. While these glasses are effective, they are high cost, bulky, and incompatible with other vision correction eyeglasses. In this work, a rhodamine derivative is incorporated in commercial contact lenses to filter out specific wavelength bands (≈545-575 nm) to correct color vision deficiency. The biocompatibility assessment of the dyed contact lenses in human corneal fibroblasts and human corneal epithelial cells shows no toxicity, and cell viability remains at 99% after 72 h. This study demonstrates the potential of the dyed contact lenses in wavelength filtering and color vision deficiency management. © 2018 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Recognizing Materials using Perceptually Inspired Features
Sharan, Lavanya; Liu, Ce; Rosenholtz, Ruth; Adelson, Edward H.
2013-01-01
Our world consists not only of objects and scenes but also of materials of various kinds. Being able to recognize the materials that surround us (e.g., plastic, glass, concrete) is important for humans as well as for computer vision systems. Unfortunately, materials have received little attention in the visual recognition literature, and very few computer vision systems have been designed specifically to recognize materials. In this paper, we present a system for recognizing material categories from single images. We propose a set of low and mid-level image features that are based on studies of human material recognition, and we combine these features using an SVM classifier. Our system outperforms a state-of-the-art system [Varma and Zisserman, 2009] on a challenging database of real-world material categories [Sharan et al., 2009]. When the performance of our system is compared directly to that of human observers, humans outperform our system quite easily. However, when we account for the local nature of our image features and the surface properties they measure (e.g., color, texture, local shape), our system rivals human performance. We suggest that future progress in material recognition will come from: (1) a deeper understanding of the role of non-local surface properties (e.g., extended highlights, object identity); and (2) efforts to model such non-local surface properties in images. PMID:23914070
How virtual reality works: illusions of vision in "real" and virtual environments
NASA Astrophysics Data System (ADS)
Stark, Lawrence W.
1995-04-01
Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting through an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality as it expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.
Real-time stylistic prediction for whole-body human motions.
Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun
2012-01-01
The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.
Aircraft cockpit vision: Math model
NASA Technical Reports Server (NTRS)
Bashir, J.; Singh, R. P.
1975-01-01
A mathematical model was developed to describe the field of vision of a pilot seated in an aircraft. Given the position and orientation of the aircraft, along with the geometrical configuration of its windows, and the location of an object, the model determines whether the object would be within the pilot's external vision envelope provided by the aircraft's windows. The computer program using this model was implemented and is described.
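The kind of geometric test the abstract describes can be sketched under simplifying assumptions: eye at the origin, body axes aligned with world axes, and a single rectangular window pane. These assumptions (and the names `Window` and `visible_through`) are hypothetical; the NASA model additionally handles arbitrary aircraft position and orientation and multiple windows.

```python
from dataclasses import dataclass

@dataclass
class Window:
    # Rectangular pane in the body frame: a plane at x = dist ahead of
    # the pilot's eye, spanning [y0, y1] laterally and [z0, z1] vertically.
    dist: float
    y0: float
    y1: float
    z0: float
    z1: float

def visible_through(window, obj):
    """True if the sight line from the eye (origin) to obj passes
    through the window rectangle."""
    x, y, z = obj
    if x <= 0:
        return False                 # object behind the pilot
    t = window.dist / x              # where the sight line meets the pane
    if t > 1:
        return False                 # object is closer than the pane
    yi, zi = y * t, z * t            # intersection point on the pane
    return window.y0 <= yi <= window.y1 and window.z0 <= zi <= window.z1

win = Window(dist=1.0, y0=-0.5, y1=0.5, z0=-0.3, z1=0.3)
print(visible_through(win, (10.0, 2.0, 1.0)))   # True: inside the envelope
print(visible_through(win, (10.0, 8.0, 1.0)))   # False: off to the side
```

In the full model, rotating the object's world coordinates into the body frame (using the aircraft's attitude) before this test, and repeating the test for each window, would yield the complete external vision envelope.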
An Automated Classification Technique for Detecting Defects in Battery Cells
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2006-01-01
Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine whether the battery cell is acceptable for a particular use or device. Human visual inspection is a time-consuming task compared to an inspection process conducted by a machine vision system, and it is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view, as well as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
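The abstract's adaptive two-dimensional FFT classifier is not reproduced here, but the underlying idea, that a periodic surface defect shows up as energy at a nonzero spatial frequency, can be sketched in one dimension with a naive DFT (a hypothetical simplification of the paper's method; `dft_magnitudes` is an illustrative helper, not part of the described system):

```python
import cmath

def dft_magnitudes(signal):
    """Naive discrete Fourier transform; returns |X[k]| for each bin."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

smooth = [1.0, 1.0, 1.0, 1.0]      # defect-free profile: energy in bin 0 only
striped = [1.0, -1.0, 1.0, -1.0]   # periodic defect: energy at a nonzero bin
print(dft_magnitudes(smooth))       # dominant bin: 0 (the DC component)
print(dft_magnitudes(striped))      # dominant bin: 2 (the stripe frequency)
```

A classifier in this spirit would extract such spectral features from two-dimensional image patches and threshold or learn on them, flagging patches whose off-DC energy exceeds that of a known-good reference.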
Human vision is determined based on information theory.
Delgado-Bonal, Alfonso; Martín-Torres, Javier
2016-11-03
It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
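For comparison with the abstract's 555 nm (photopic) and 508 nm (scotopic) peaks, the intensity-only baseline is easy to compute: Wien's displacement law gives the wavelength at which blackbody spectral radiance peaks. The sketch below assumes T ≈ 5778 K for the Sun; the paper's point is that entropy and atmospheric composition, not this intensity peak alone, determine the final values.

```python
# Wien's displacement law: lambda_max = b / T, with b the Wien constant.
WIEN_B = 2.897771955e-3   # m*K (CODATA value of the Wien constant)
T_SUN = 5778.0            # K, effective solar surface temperature

def wien_peak_nm(temperature_k):
    """Wavelength (nm) of maximum blackbody spectral radiance."""
    return WIEN_B / temperature_k * 1e9

peak = wien_peak_nm(T_SUN)
print(round(peak, 1))  # ~501.5 nm: near, but not equal to, the 508 nm
                       # scotopic peak inferred in the abstract
```

That the intensity-only prediction misses both measured peaks is consistent with the abstract's claim that the eye is tuned to the optimal wavelength for obtaining information rather than to maximum intensity.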
NASA Astrophysics Data System (ADS)
Thakore, B.; Frierson, T.; Coderre, K.; Lukaszczyk, A.; Karl, A.
2009-04-01
This paper outlines the response of students and young space professionals on the occasion of the 50th anniversary of the first artificial satellite and the 40th anniversary of the Outer Space Treaty. The contribution has been coordinated by the Space Generation Advisory Council (SGAC) in support of the United Nations Programme on Space Applications. It follows consultation of the SGAC community through a series of meetings, online discussions and online surveys. The first two online surveys collected over 750 different visions from approximately 276 youth in over 28 countries, and the work builds on previous SGAC policy contributions. A summary of these results was presented as the top 10 visions of today's youth as an invited input to world space leaders gathered at the Symposium on "The future of space exploration: Solutions to earthly problems" held in Boston, USA from April 12-14 2007 and at the United Nations Committee on the Peaceful Uses of Outer Space in May 2007. These key visions suggested extending humanity's reach beyond this planet, both physically and intellectually, and were themed into three main categories: • Improvement of Human Survival Probability - sustained exploration to become a multi-planet species, humans to Mars, new treaty structures to ensure a secure space environment, etc. • Improvement of Human Quality of Life and the Environment - new political systems or astrocracy, benefits of tele-medicine, tele-education, and commercialization of space, new energy and resources: space solar power, etc. • Improvement of Human Knowledge and Understanding - complete survey of extinct and extant life forms, use of space data for advanced environmental monitoring, etc.
This paper will summarize the outcomes from a further online survey and present key recommendations given by international youth advocates on further steps that space agencies and organizations could take to make the top 10 visions a reality. In turn, the online discussions used to engage the youth audience would be recorded and would help to reflect the confidence of the younger generation in these visions. The categories listed above would also be investigated further from the technological, policy, and ethical aspects. Recent activities under development to further disseminate the connections between the use of space technology and the solution of global challenges are also discussed.
Stephens, Martin L.; Barrow, Craig; Andersen, Melvin E.; Boekelheide, Kim; Carmichael, Paul L.; Holsapple, Michael P.; Lafranconi, Mark
2012-01-01
The U.S. National Research Council (NRC) report on “Toxicity Testing in the 21st century” calls for a fundamental shift in the way that chemicals are tested for human health effects and evaluated in risk assessments. The new approach would move toward in vitro methods, typically using human cells in a high-throughput context. The in vitro methods would be designed to detect significant perturbations to “toxicity pathways,” i.e., key biological pathways that, when sufficiently perturbed, lead to adverse health outcomes. To explore progress on the report’s implementation, the Human Toxicology Project Consortium hosted a workshop on 9–10 November 2010 in Washington, DC. The Consortium is a coalition of several corporations, a research institute, and a non-governmental organization dedicated to accelerating the implementation of 21st-century Toxicology as aligned with the NRC vision. The goal of the workshop was to identify practical and scientific ways to accelerate implementation of the NRC vision. The workshop format consisted of plenary presentations, breakout group discussions, and concluding commentaries. The program faculty was drawn from industry, academia, government, and public interest organizations. Most presentations summarized ongoing efforts to modernize toxicology testing and approaches, each with some overlap with the NRC vision. In light of these efforts, the workshop identified recommendations for accelerating implementation of the NRC vision, including greater strategic coordination and planning across projects (facilitated by a steering group), the development of projects that test the proof of concept for implementation of the NRC vision, and greater outreach and communication across stakeholder communities. PMID:21948868
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
3 CFR 8635 - Proclamation 8635 of March 4, 2011. Save Your Vision Week, 2011
Code of Federal Regulations, 2012 CFR
2012-01-01
... Americans to take action to safeguard their eyesight. Vision is important to our everyday activities, and we... of Health and Human Services’ science-based agenda to prevent disease and promote health, our country...
21st Century Lunar Exploration: Advanced Radiation Exposure Assessment
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Clowdsley, Martha; Wilson, John; Nealy, John; Luetke, Nathan
2006-01-01
On January 14, 2004, President George W. Bush outlined a new vision for NASA that has humans venturing back to the moon by 2020. With this ambitious goal, new tools and models have been developed to help define and predict the amount of space radiation astronauts will be exposed to during transit and habitation on the moon. A representative scenario is used that includes a trajectory from LEO to a Lunar Base, and simplified CAD models for the transit and habitat structures. For this study galactic cosmic rays, solar proton events, and trapped electron and proton environments are simulated using new dynamic environment models to generate energetic electron, and light and heavy ion fluences. Detailed calculations are presented to assess the human exposure for transit segments and surface stays.
CAD-model-based vision for space applications
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.
1988-01-01
A pose acquisition system operating in space must perform well in a variety of applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint using the vision models, construct view classes representing views of the objects, and use the view-class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.
2017-01-01
When adjusting the contrast setting on a television set, we experience a perceptual change in the global image contrast. But how is that statistic computed? We addressed this using a contrast-matching task for checkerboard configurations of micro-patterns in which the contrasts and spatial spreads of two interdigitated components were controlled independently. When the patterns differed greatly in contrast, the higher contrast determined the perceived global contrast. Crucially, however, low contrast additions of one pattern to intermediate contrasts of the other caused a paradoxical reduction in the perceived global contrast. None of the following metrics/models predicted this: max, linear sum, average, energy, root mean squared (RMS), Legge and Foley. However, a nonlinear gain control model, derived from contrast detection and discrimination experiments, incorporating wide-field summation and suppression, did predict the results with no free parameters, but only when spatial filtering was removed. We conclude that our model describes fundamental processes in human contrast vision (the pattern of results was the same for expert and naive observers), but that above threshold—when contrast pedestals are clearly visible—vision's spatial filtering characteristics become transparent, tending towards those of a delta function prior to spatial summation. The global contrast statistic from our model is as easily derived as the RMS contrast of an image, and since it more closely relates to human perception, we suggest it be used as an image contrast metric in practical applications. PMID:28989735
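The nonlinear gain-control model above is not specified in the abstract, but the RMS contrast it uses as a baseline is standard and easy to compute. A minimal sketch (the `rms_contrast` helper and the example images are our own illustration, not the authors' code):

```python
import numpy as np

def rms_contrast(image):
    """Root-mean-squared (RMS) contrast: the standard deviation of the
    pixel luminances divided by their mean."""
    img = np.asarray(image, dtype=float)
    mean = img.mean()
    return np.sqrt(((img - mean) ** 2).mean()) / mean

# A uniform field has zero contrast; a 0.25/0.75 checkerboard about a
# mean of 0.5 has RMS contrast 0.5.
print(rms_contrast(np.full((8, 8), 0.5)))                    # 0.0
print(rms_contrast(np.array([[0.25, 0.75], [0.75, 0.25]])))  # 0.5
```

The abstract's point is that its model-derived statistic is no harder to compute than this, while tracking perception more closely.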
Kang, Incheol; Malpeli, Joseph G
2009-08-01
Contrast thresholds of cells in the dorsal lateral geniculate (LGNd) and medial interlaminar (MIN) nuclei of awake cats were measured for scotopic and mesopic vision with drifting sine gratings (1/8, 2, and 4 cycles/deg [cpd]; 4-Hz temporal frequency). Thresholds for mean firing rate (F0) and temporally modulated responses (F1) were derived with receiver-operating-characteristic analyses and compared with behavioral measures recently reported by Kang and colleagues. Behavioral sensitivity was predicted by the neural responses of the most sensitive combinations of cell class and response mode: Y-cell F1 responses for 1/8 cpd, X-cell F1 responses for 2 cpd, and Y-cell F0 responses for 4 cpd. All previous estimates of neural scotopic increment thresholds in animal models fell between Weber's law (proportional to retinal illuminance) and the deVries-Rose law (proportional to the square root of illuminance). However, psychophysical experiments suggest that under appropriate conditions human scotopic vision follows the deVries-Rose law. If behavioral sensitivity is assumed to be determined by the most sensitive class of cells, this discrepancy is resolved. Under scotopic conditions, off-center Y cells were the most sensitive and these followed the deVries-Rose law fairly closely. MIN Y cells were, on average, 0.25 log units more sensitive than LGNd Y cells under scotopic conditions, supporting a previous proposal that the MIN is a specialization of the carnivore for dim-light vision. We conclude that both physiologically and behaviorally, cat and human scotopic vision are fundamentally similar, including adherence to the deVries-Rose law for detection of Gabor functions.
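The two classical increment-threshold laws contrasted above can be stated compactly. A hedged sketch (the function name and the constant `k` are our own; real thresholds also depend on spatial and temporal stimulus parameters):

```python
import numpy as np

def increment_threshold(illuminance, law="devries-rose", k=1.0):
    """Increment threshold as a function of background retinal illuminance I.
    Weber's law:       threshold = k * I        (constant Weber fraction)
    deVries-Rose law:  threshold = k * sqrt(I)  (square-root law)
    k is an arbitrary scaling constant for illustration."""
    I = np.asarray(illuminance, dtype=float)
    return k * I if law == "weber" else k * np.sqrt(I)

# Raising the background 100-fold raises the threshold 100-fold under
# Weber's law but only 10-fold under the deVries-Rose law.
print(increment_threshold(100.0, "weber") / increment_threshold(1.0, "weber"))  # 100.0
print(increment_threshold(100.0) / increment_threshold(1.0))                    # 10.0
```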
Reinforcement Learning with Autonomous Small Unmanned Aerial Vehicles in Cluttered Environments
NASA Technical Reports Server (NTRS)
Tran, Loc; Cross, Charles; Montague, Gilbert; Motter, Mark; Neilan, James; Qualls, Garry; Rothhaar, Paul; Trujillo, Anna; Allen, B. Danette
2015-01-01
We present ongoing work in the Autonomy Incubator at NASA Langley Research Center (LaRC) exploring the efficacy of a data set aggregation approach to reinforcement learning for small unmanned aerial vehicle (sUAV) flight in dense and cluttered environments with reactive obstacle avoidance. The goal is to learn an autonomous flight model using training experiences from a human piloting a sUAV around static obstacles. The training approach uses video data from a forward-facing camera that records the human pilot's flight. Various computer-vision-based features relating to edge and gradient information are extracted from the video. The recorded human control inputs are used to train an autonomous control model that maps the extracted feature vector to a yaw command. As part of the reinforcement learning approach, the autonomous control model is iteratively updated with feedback from a human agent who corrects undesired model output. This data-driven approach to autonomous obstacle avoidance is explored for simulated forest environments, furthering research on autonomous flight under the tree canopy. It enables flight in previously inaccessible environments that are of interest to NASA researchers in the Earth and atmospheric sciences.
ERIC Educational Resources Information Center
Marinoff, Rebecca; Heilberger, Michael H.
2017-01-01
A model Center of Excellence in Low Vision and Vision Rehabilitation was created in a health care setting in China utilizing an inter-institutional relationship with a United States optometric institution. Accomplishments of, limitations to, and stimuli to the provision of low vision and vision rehabilitation services are shared.
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.
Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro
2018-01-01
In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.
Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer
2015-11-01
Counting objects is a fundamental image processing primitive with many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even then fail to return accurate results in all but the most stylized settings. Using vanilla crowdsourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, using judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible of humans (under certain intuitively reasonable assumptions that we justify experimentally); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
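The paper's algorithms are not reproduced here, but the core idea of judicious image decomposition for crowd counting can be sketched as a recursive quadrant split (all names and the exact splitting policy are our own simplification; the real JellyBean algorithms also model worker error and task cost):

```python
def count_objects(grid, ask_worker, max_count=4):
    """Count objects in `grid` (a 2D 0/1 list marking object presence) by
    recursively splitting dense segments into quadrants until each crowd
    task is small enough to count reliably, then summing the partial
    counts. `ask_worker` stands in for one crowdsourced counting task."""
    estimate = ask_worker(grid)
    rows, cols = len(grid), len(grid[0])
    if estimate <= max_count or (rows <= 1 and cols <= 1):
        return estimate
    r, c = max(rows // 2, 1), max(cols // 2, 1)
    quadrants = [
        [row[:c] for row in grid[:r]], [row[c:] for row in grid[:r]],
        [row[:c] for row in grid[r:]], [row[c:] for row in grid[r:]],
    ]
    return sum(count_objects(q, ask_worker, max_count)
               for q in quadrants if q and q[0])

# With an exact "worker", a dense 4x4 image is split so that no single
# task exceeds max_count objects, and the partial counts sum correctly.
exact_worker = lambda g: sum(map(sum, g))
print(count_objects([[1] * 4 for _ in range(4)], exact_worker))  # 16
```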
A Feasibility Study of View-independent Gait Identification
2012-03-01
For walking, the footprint records for single pixels form clusters that are well separated in space and time.
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
Neuroimaging of amblyopia and binocular vision: a review
Joly, Olivier; Frankó, Edit
2014-01-01
Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered as a monocular disorder, is now often seen as a primarily binocular disorder resulting in more and more studies examining the binocular deficits in the patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia with a focus on binocular vision using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas whereas recent evidence shows that there are also deficits at higher levels of the visual pathways within the parieto-occipital and temporal cortices. These higher level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them. PMID:25147511
Old and new news about single-photon sensitivity in human vision
NASA Astrophysics Data System (ADS)
Nelson, Philip
It is sometimes said that "our eyes can see single photons," when in fact the faintest flash of light that can reliably be reported by human subjects is closer to 100 photons. Nevertheless, there is a sense in which the familiar claim is true. Experiments conducted long after the seminal work of Hecht, Shlaer, and Pirenne in two distinct realms, those of human psychophysics and single-cell physiology, now allow a more precise conclusion to be drawn about our visual apparatus. Finding a single framework that accommodates both kinds of result is a nontrivial challenge, and one that sets severe quantitative constraints on any model of dim-light visual processing. I will present one such model and compare it to a recent experiment. Partially supported by the NSF under Grants EF-0928048 and DMR-0832802.
Stanford/NASA-Ames Center of Excellence in model-based human performance
NASA Technical Reports Server (NTRS)
Wandell, Brian A.
1990-01-01
The human operator plays a critical role in many aeronautic and astronautic missions. The Stanford/NASA-Ames Center of Excellence in Model-Based Human Performance (COE) was initiated in 1985 to further our understanding of the performance capabilities and performance limits of the human component of aeronautic and astronautic projects. Support from the COE is devoted to those areas of experimental and theoretical work designed to summarize and explain human performance by developing computable performance models. The ultimate goal is to make these computable models available to other scientists for use in design and evaluation of aeronautic and astronautic instrumentation. Within vision science, two topics have received particular attention. First, researchers did extensive work analyzing the human ability to recognize object color relatively independent of the spectral power distribution of the ambient lighting (color constancy). The COE has supported a number of research papers in this area, as well as the development of a substantial data base of surface reflectance functions, ambient illumination functions, and an associated software package for rendering and analyzing image data with respect to these spectral functions. Second, the COE supported new empirical studies on the problem of selecting colors for visual display equipment to enhance human performance in discrimination and recognition tasks.
Summerskill, Stephen; Marshall, Russell; Cook, Sharon; Lenard, James; Richardson, John
2016-03-01
The aim of the study is to understand the nature of blind spots in the vision of drivers of Large Goods Vehicles caused by vehicle design variables such as driver eye height and mirror design. The study was informed by the processing of UK national accident data using cluster analysis to establish whether vehicle blind spots contribute to accidents. In order to establish the cause and nature of blind spots, six top-selling trucks in the UK, spanning a range of sizes, were digitized and imported into the SAMMIE Digital Human Modelling (DHM) system. A novel CAD-based vision projection technique, validated in a laboratory study, allowed multiple mirror and window aperture projections to be created, resulting in the identification and quantification of a key blind spot. The identified blind spot was demonstrated to have the potential to be associated with the scenarios identified in the accident data. The project led to the revision of UNECE Regulation 46, which defines mirror coverage in the European Union, with new vehicle registrations in Europe required to meet the amended standard after June 2015. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Modeling human behaviors and reactions under dangerous environment.
Kang, J; Wright, D K; Qin, S F; Zhao, Y
2005-01-01
This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics, and psychology, to model human behavior, reactions, and decision making under expected and unexpected dangers in real-time virtual environments. The development of the system includes: classifying the conscious and subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling characters' perceptions, modeling characters' decision making, modeling characters' movements, modeling characters' interaction with the environment, and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, safety planning in chemical factories, and the design of buildings, airplanes, ships, and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence; the accurate modeling of human vision, smell, touch, and hearing; and the diversity and effects of emotion and personality in decision making. There are three types of software platforms that could be employed to realize motion and intelligence within one system, and their advantages and disadvantages are discussed.
Vision Integrating Strategies in Ophthalmology and Neurochemistry (VISION)
2014-02-01
Findings from animal models of glaucoma: Brn3b protected retinal ganglion cells from pressure-induced damage in a rat model of glaucoma and also induced optic nerve regeneration in this model (Stankowska et al. 2013); gene therapy with Neuritin1 structurally and functionally protected the retina in the ONC model; and CHOP knockout mice were structurally and functionally protected in the retinocollicular pathway in a novel model of glaucoma (2013 Annual Meeting of the Association for Research in Vision and Ophthalmology, Abstract 421).
Franco-Trigo, L; Tudball, J; Fam, D; Benrimoj, S I; Sabater-Hernández, D
2018-02-21
Collaboration between relevant stakeholders in health service planning enables service contextualization and facilitates its success and integration into practice. Although community pharmacy services (CPSs) aim to improve patients' health and quality of life, their integration in primary care is far from ideal. Key stakeholders for the development of a CPS intended at preventing cardiovascular disease were identified in a previous stakeholder analysis. Engaging these stakeholders to create a shared vision is the subsequent step to focus planning directions and lay sound foundations for future work. This study aims to develop a stakeholder-shared vision of a cardiovascular care model which integrates community pharmacists and to identify initiatives to achieve this vision. A participatory visioning exercise involving 13 stakeholders across the healthcare system was performed. A facilitated workshop, structured in three parts (i.e., introduction; developing the vision; defining the initiatives towards the vision), was designed. The Chronic Care Model inspired the questions that guided the development of the vision. Workshop transcripts, researchers' notes and materials produced by participants were analyzed using qualitative content analysis. Stakeholders broadened the objective of the vision to focus on the management of chronic diseases. Their vision yielded 7 principles for advanced chronic care: patient-centered care; multidisciplinary team approach; shared goals; long-term care relationships; evidence-based practice; ease of access to healthcare settings and services by patients; and good communication and coordination. Stakeholders also delineated six environmental factors that can influence their implementation. Twenty-four initiatives to achieve the developed vision were defined. The principles and factors identified as part of the stakeholder shared-vision were combined in a preliminary model for chronic care. 
This model and these initiatives can guide policy makers, healthcare planners, and researchers in developing and integrating chronic disease services, namely CPSs, in real-world settings. Copyright © 2018 Elsevier Inc. All rights reserved.
A Simple Principled Approach for Modeling and Understanding Uniform Color Metrics
Smet, Kevin A.G.; Webster, Michael A.; Whitehead, Lorne A.
2016-01-01
An important goal in characterizing human color vision is to order color percepts in a way that captures their similarities and differences. This has resulted in the continuing evolution of “uniform color spaces,” in which the distances within the space represent the perceptual differences between the stimuli. While these metrics are now very successful in predicting how color percepts are scaled, they do so in largely empirical, ad hoc ways, with limited reference to actual mechanisms of color vision. In this article our aim is to instead begin with general and plausible assumptions about color coding, and then develop a model of color appearance that explicitly incorporates them. We show that many of the features of empirically-defined color order systems (such as those of Munsell, Pantone, NCS, and others) as well as many of the basic phenomena of color perception, emerge naturally from fairly simple principles of color information encoding in the visual system and how it can be optimized for the spectral characteristics of the environment. PMID:26974939
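The uniform color spaces discussed above share one operational idea: perceptual difference is read off as distance in the space. The prototypical example is the CIE 1976 color difference, Euclidean distance in CIELAB (a simplification; later metrics such as CIEDE2000 add correction terms):

```python
import math

def delta_e76(lab1, lab2):
    """CIE 1976 color difference (Delta E*ab): Euclidean distance
    between two colors expressed in CIELAB (L*, a*, b*) coordinates."""
    return math.dist(lab1, lab2)

# Two colors of equal lightness differing by 3 units in a* and 4 in b*.
print(delta_e76((50, 0, 0), (50, 3, 4)))  # 5.0
```

The article's contribution is to derive this kind of metric structure from coding principles rather than fit it empirically.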
Assessment of OLED displays for vision research.
Cooper, Emily A; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E; Norcia, Anthony M
2013-10-23
Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function ("gamma correction"). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications.
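The "gamma correction" finding above refers to a power-function nonlinearity between drive level and luminance. A minimal sketch (the exponent 2.2 and the peak luminance are illustrative placeholders, not measurements from these displays):

```python
import numpy as np

def luminance_from_level(level, gamma=2.2, peak=100.0):
    """Display luminance (cd/m^2) as a power function of the normalized
    digital drive level in [0, 1]; gamma and peak are illustrative values."""
    return peak * np.asarray(level, dtype=float) ** gamma

def level_for_luminance(target, gamma=2.2, peak=100.0):
    """Invert the power function: the drive level needed to produce a
    target luminance (the 'gamma correction' step)."""
    return (target / peak) ** (1.0 / gamma)

# Round trip: the corrected drive level reproduces the requested luminance.
lvl = level_for_luminance(50.0)
print(round(float(luminance_from_level(lvl)), 6))  # 50.0
```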
Sensorimotor Interactions in the Haptic Perception of Virtual Objects
1997-01-01
Compared to our understanding of vision and audition, our knowledge of human haptic perception is very limited. Some preliminary work has examined the effects of other modalities, such as vision and audition, on the haptic perception of viscosity or mass.
NASA Technical Reports Server (NTRS)
Juday, Richard D.; Loshin, David S.
1989-01-01
Image coordinate transformations are investigated for possible use in a low vision aid for human patients. These patients typically have field defects with localized retinal dysfunction that is predominately central (age-related maculopathy) or peripheral (retinitis pigmentosa). Previously, simple eccentricity-only remappings, which do not maintain conformality, were shown. Initial attempts at developing images that hold quasi-conformality after remapping are presented. Although the quasi-conformal images may have less local distortion, there are discontinuities in the image that may contraindicate this type of transformation for the low vision application.
Are dogs red-green colour blind?
Siniscalchi, Marcello; d'Ingeo, Serenella; Fornelli, Serena; Quaranta, Angelo
2017-11-01
Neurobiological and molecular studies suggest a dichromatic colour vision in canine species, which appears to be similar to that of human red-green colour blindness. Here, we show that dogs exhibit a behavioural response similar to that of red-green blind human subjects when tested with a modified version of a test commonly used for the diagnosis of human deuteranopia (i.e. the Ishihara's test). Besides contributing to increasing the knowledge about the perceptual ability of dogs, the present work describes for the first time, to our knowledge, a method that can be used to assess colour vision in the animal kingdom.
Visions of human futures in space and SETI
NASA Astrophysics Data System (ADS)
Wright, Jason T.; Oman-Reagan, Michael P.
2018-04-01
We discuss how visions for the futures of humanity in space and SETI are intertwined, and are shaped by prior work in the fields and by science fiction. This appears in the language used in the fields, and in the sometimes implicit assumptions made in discussions of them. We give examples from articulations of the so-called Fermi Paradox, discussions of the settlement of the Solar System (in the near future) and the Galaxy (in the far future), and METI. We argue that science fiction, especially the campy variety, is a significant contributor to the 'giggle factor' that hinders serious discussion and funding for SETI and Solar System settlement projects. We argue that humanity's long-term future in space will be shaped by our short-term visions for who goes there and how. Because of the way they entered the fields, we recommend avoiding the term 'colony' and its cognates when discussing the settlement of space, as well as other terms with similar pedigrees. We offer examples of science fiction and other writing that broaden and challenge our visions of human futures in space and SETI. In an appendix, we use an analogy with the well-funded and relatively uncontroversial searches for the dark matter particle to argue that SETI's lack of funding in the national science portfolio is primarily a problem of perception, not inherent merit.
Haldin, Charlotte; Nymark, Soile; Aho, Ann-Christine; Koskelainen, Ari; Donner, Kristian
2009-05-06
Human vision is approximately 10 times less sensitive than toad vision on a cool night. Here, we investigate (1) how far differences in the capacity for temporal integration underlie such differences in sensitivity and (2) whether the response kinetics of the rod photoreceptors can explain temporal integration at the behavioral level. The toad was studied as a model that allows experimentation at different body temperatures. Sensitivity, integration time, and temporal accuracy of vision were measured psychophysically by recording snapping at worm dummies moving at different velocities. Rod photoresponses were studied by ERG recording across the isolated retina. In both types of experiments, the general timescale of vision was varied by using two temperatures, 15 and 25 degrees C. Behavioral integration times were 4.3 s at 15 degrees C and 0.9 s at 25 degrees C, and rod integration times were 4.2-4.3 s at 15 degrees C and 1.0-1.3 s at 25 degrees C. Maximal behavioral sensitivity was fivefold lower at 25 degrees C than at 15 degrees C, which can be accounted for by inability of the "warm" toads to integrate light over longer times than the rods. However, the long integration time at 15 degrees C, allowing high sensitivity, degraded the accuracy of snapping toward quickly moving worms. We conclude that temporal integration explains a considerable part of all variation in absolute visual sensitivity. The strong correlation between rods and behavior suggests that the integration time of dark-adapted vision is set by rod phototransduction at the input to the visual system. This implies that there is an inexorable trade-off between temporal integration and resolution.
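The link reported above between rod integration time and behavioral sensitivity follows from complete temporal summation (Bloch's law): for a fixed stimulus, threshold intensity scales as 1/T, so sensitivity scales with the integration time T. A back-of-the-envelope check against the reported values (our own arithmetic, not the authors' analysis):

```python
def sensitivity_ratio(t_long, t_short):
    """Under complete temporal summation (Bloch's law), threshold
    intensity scales as 1/T, so absolute sensitivity scales with the
    integration time T."""
    return t_long / t_short

# Behavioral integration times of ~4.3 s (15 degrees C) and ~0.9 s
# (25 degrees C) predict close to the observed fivefold sensitivity
# difference between the two temperatures.
print(round(sensitivity_ratio(4.3, 0.9), 1))  # 4.8
```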
A dental vision system for accurate 3D tooth modeling.
Zhang, Li; Alemzadeh, K
2006-01-01
This paper describes an active-vision-based reverse engineering approach to extract three-dimensional (3D) geometric information from dental teeth and transfer it into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems, improving the accuracy of 3D teeth models and, in turn, the quality of the construction units for patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast, accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted-objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented for digitized teeth models.
2004-06-01
Additionally, we offer 3 conceptual cartoons outlining our vision for the future progress of laser bioeffects research, metabonomic risk assessment modeling, and knowledge building from laser bioeffects data. BACKGROUND In the...our concepts of future laser bioeffects research directions (Figure 5), a metabonomic risk assessment model of laser tissue interaction (Figure 6)...
Limits of colour vision in dim light.
Kelber, Almut; Lind, Olle
2010-09-01
Humans and most vertebrates have duplex retinae with multiple cone types for colour vision in bright light, and a single rod type for achromatic vision in dim light. Instead of comparing signals from multiple spectral types of photoreceptors, such species use one highly sensitive receptor type, thus improving the signal-to-noise ratio at night. However, the nocturnal hawkmoth Deilephila elpenor, the nocturnal bee Xylocopa tranquebarica and the nocturnal gecko Tarentola chazaliae can discriminate colours at extremely dim light intensities. To be able to do so, they sacrifice spatial and temporal resolution in favour of colour vision. We review what is known about colour vision in dim light, and compare colour vision thresholds with the optical sensitivity of the photoreceptors in selected animal species with lens and compound eyes. © 2010 The Authors, Ophthalmic and Physiological Optics © 2010 The College of Optometrists.
Helicopter flights with night-vision goggles: Human factors aspects
NASA Technical Reports Server (NTRS)
Brickner, Michael S.
1989-01-01
Night-vision goggles (NVGs), in particular the advanced, helmet-mounted Aviator's Night Vision Imaging System (ANVIS), allow helicopter pilots to perform low-level flight at night. They consist of light intensifier tubes, which amplify low-intensity ambient illumination (star and moon light), and an optical system, which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzle; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward-looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena described are not yet well understood, and more research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.
Search and detection modeling of military imaging systems
NASA Astrophysics Data System (ADS)
Maurer, Tana; Wilson, David L.; Driggers, Ronald G.
2013-04-01
For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying the science behind the human processes of searching and detecting, and using that knowledge to develop and refine its models for military imaging systems. Modeling how human observers perform military tasks while using imaging systems in the field and linking that model with the physics of the systems has resulted in the comprehensive sensor models we have today. These models are used by the government, military, industry, and academia for sensor development, sensor system acquisition, military tactics development, and war-gaming. From the original hypothesis put forth by John Johnson in 1958, to modeling time-limited search, to modeling the impact of motion on target detection, to modeling target acquisition performance in different spectral bands, the concept of search has a wide-ranging history. Our purpose is to present a snapshot of that history; as such, it will begin with a description of the search-modeling task, followed by a summary of highlights from the early years, and concluding with a discussion of search and detection modeling today and the changing battlefield. Some of the topics to be discussed will be classic search, clutter, computational vision models and the ACQUIRE model with its variants. We do not claim to present a complete history here, but rather a look at some of the work that has been done, and this is meant to be an introduction to an extensive amount of work on a complex topic. That said, it is hoped that this overview of the history of search and detection modeling of military imaging systems pursued by NVESD directly, or in association with other government agencies or contractors, will provide both the novice and experienced search modeler with a useful historical summary and an introduction to current issues and future challenges.
Design and development of a ferroelectric micro photo detector for the bionic eye
NASA Astrophysics Data System (ADS)
Song, Yang
Driven by the lack of any effective therapy for Retinitis Pigmentosa and Age-Related Macular Degeneration, the Bionic Eye pursues artificial vision through the development of an artificial retina that can be implanted into the human eye. This dissertation focuses on the study of a photoferroelectric micro photo detector as an implantable retinal prosthesis for vision restoration in patients with the above disorders. This implant uses an electrical signal to trigger the appropriate ocular cells of the vision system without resorting to wiring or electrode implantation. The research work includes fabrication of photoferroelectric thin film micro detectors, characterization of these photoferroelectric micro devices as photovoltaic cells, and Finite Element Method (FEM) modeling of the photoferroelectrics and their device-neuron interface. A ferroelectric micro detector exhibiting the photovoltaic effect (PVE) directly adds electrical potential to the outer wall of the neuron membrane at the focal adhesion regions. The electrical potential then generates a retinal cell membrane potential deflection through a newly developed Direct-Electric-Field-Coupling (DEFC) model. This model is quite different from the traditional electric current model: instead of current working directly on the cell membrane, the PVE current generates a localized high electric potential in the focal adhesion region by working together with the anisotropic high internal impedance of ferroelectric thin films. General electrodes and silicon photodetectors do not have such anisotropy and high impedance, and thus cannot generate DEFC. This mechanism investigation is valuable because it clearly shows that our artificial retina works in a way that differs fundamentally from traditional current stimulation methods.
Chemistry and biology of the initial steps in vision: the Friedenwald lecture.
Palczewski, Krzysztof
2014-10-22
Visual transduction is the process in the eye whereby absorption of light in the retina is translated into electrical signals that ultimately reach the brain. The first challenge presented by visual transduction is to understand its molecular basis. We know that maintenance of vision is a continuous process requiring the activation and subsequent restoration of a vitamin A-derived chromophore through a series of chemical reactions catalyzed by enzymes in the retina and retinal pigment epithelium (RPE). Diverse biochemical approaches that identified key proteins and reactions were essential to achieve a mechanistic understanding of these visual processes. The three-dimensional arrangements of these enzymes' polypeptide chains provide invaluable insights into their mechanisms of action. A wealth of information has already been obtained by solving high-resolution crystal structures of both rhodopsin and the retinoid isomerase of the RPE (RPE65). Rhodopsin, which is activated by photoisomerization of its 11-cis-retinylidene chromophore, is a prototypical member of a large family of membrane-bound proteins called G protein-coupled receptors (GPCRs). RPE65 is a retinoid isomerase critical for regeneration of the chromophore. Electron microscopy (EM) and atomic force microscopy have provided insights into how certain proteins are assembled to form much larger structures such as rod photoreceptor cell outer segment membranes. A second challenge of visual transduction is to use this knowledge to devise therapeutic approaches that can prevent or reverse conditions leading to blindness. Imaging modalities like optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO) applied to appropriate animal models as well as human retinal imaging have been employed to characterize blinding diseases, monitor their progression, and evaluate the success of therapeutic agents.
Lately two-photon (2-PO) imaging, together with biochemical assays, are revealing functional aspects of vision at a new molecular level. These multidisciplinary approaches combined with suitable animal models and inbred mutant species can be especially helpful in translating provocative cell and tissue culture findings into therapeutic options for further development in animals and eventually in humans. A host of different approaches and techniques is required for substantial progress in understanding fundamental properties of the visual system. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.
Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J
2014-08-25
The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the algorithms developed for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial and vision sensor fusion. Compared with results from conventional sensor fusion models, we show that ego-motion tracking can be greatly enhanced using the proposed error correction model.
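The noise-identification step of the pipeline can be illustrated; below is a minimal non-overlapping Allan variance sketch (the function name and interface are ours, not the authors'), applied to synthetic white noise, for which the Allan variance should fall roughly as 1/m:

```python
import numpy as np

def allan_variance(rate, m):
    """Non-overlapping Allan variance of a rate signal for cluster size m.

    rate: 1-D array of sensor samples; m: samples per averaging cluster.
    """
    n_clusters = len(rate) // m
    # Average the signal over non-overlapping clusters of length m,
    means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    # then take half the mean squared difference of successive cluster means.
    return 0.5 * float(np.mean(np.diff(means) ** 2))

# For white noise the Allan variance falls as 1/m (slope -1/2 on a
# log-log plot), which is how white sensor noise is identified in an
# Allan-variance analysis.
rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
avar_small = allan_variance(noise, m=10)      # ~0.1 for unit-variance noise
avar_large = allan_variance(noise, m=1000)    # ~0.001
```

In a full analysis, fitting the log-log slope of the Allan deviation over many cluster sizes separates white noise, bias instability, and random walk, whose parameters then feed the Kalman filter's noise covariances.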
Leber congenital amaurosis: from darkness to spotlight.
Kaplan, Josseline
2008-09-01
Almost 150 years ago, Theodor Leber described a severe form of vision loss at or near birth which was later given his name. During the century that followed this description, ophthalmologists dedicated efforts to giving an accurate definition of the disease, but patients were neglected because of the inability of physicians to provide them with treatment. In the 90s, at the time of the Golden Age of Linkage, the first LCA locus was mapped to a human chromosome and shortly afterwards identified as the gene for guanylate cyclase. This discovery was the spark that made the disease emerge from the shadows, as illustrated by the flood of LCA genes identified in the following ten-year period. During the same time period, the clinical variability of the disease was rediscovered and an unexpected physiopathological heterogeneity demonstrated. At the beginning of the third millennium, LCA emerged definitively from the tunnel to shine under the bright spotlights with the RPE65 gene therapy trial, which succeeded in restoring vision in a dog model and opened the door to gene therapy trials in humans.
Evidence-based medicine: the value of vision screening.
Beauchamp, George R; Ellepola, Chalani; Beauchamp, Cynthia L
2010-01-01
To review the literature for evidence-based medicine (EBM), to assess the evidence for the effectiveness of vision screening, and to propose moving toward value-based medicine (VBM) as a preferred basis for comparative effectiveness research. Literature-based evidence is applied to five core questions concerning vision screening: (1) Is vision valuable (an inherent good)? (2) Is screening effective (finding amblyopia)? (3) What are the costs of screening? (4) Is treatment effective? (5) Is amblyopia detection beneficial? Based on the EBM literature and clinical experience, the answers to the five questions are: (1) yes; (2) based on the literature, not definitively so; (3) relatively inexpensive, although some claim benefits for more expensive options such as mandatory exams; (4) yes, for compliant care, although treatment processes may have negative aspects such as "bullying"; and (5) economic productive values are likely very high, with returns on investment on the order of 10:1, while human value returns need further elucidation. Additional evidence is required to ascertain the degree to which vision screening is effective. The processes of screening are multiple, sequential, and complicated. The disease is complex, and good visual outcomes require compliance. The value of outcomes is appropriately analyzed in clinical, human, and economic terms.
Visually induced self-motion sensation adapts rapidly to left-right reversal of vision
NASA Technical Reports Server (NTRS)
Oman, C. M.; Bock, O. L.
1981-01-01
Three experiments were conducted using 15 adult volunteers with no overt oculomotor or vestibular disorders. In all experiments, left-right vision reversal was achieved using prism goggles, which permitted a binocular field of vision subtending approximately 45 deg horizontally and 28 deg vertically. In all experiments, circularvection (CV) was tested before and immediately after a period of exposure to reversed vision. After one to three hours of active movement while wearing vision-reversing goggles, 10 of 15 (stationary) human subjects viewing a moving stripe display experienced a self-rotation illusion in the same direction as seen stripe motion, rather than in the opposite (normal) direction, demonstrating that the central neural pathways that process visual self-rotation cues can undergo rapid adaptive modification.
Expanding Dimensionality in Cinema Color: Impacting Observer Metamerism through Multiprimary Display
NASA Astrophysics Data System (ADS)
Long, David L.
Television and cinema displays are both trending towards greater ranges and saturation of reproduced colors, made possible by near-monochromatic RGB illumination technologies. Through current broadcast and digital cinema standards work, system designs employing laser light sources, narrow-band LEDs, quantum dots and others are being actively endorsed in promotion of Wide Color Gamut (WCG). Despite the artistic benefits brought to creative content producers, spectrally selective excitation of naturally differing human color response functions exacerbates variability of observer experience. Such exaggerated variation in color sensing is explicitly counter to the exhaustive controls and calibrations employed in modern motion picture pipelines. Further, singular standard-observer summaries of human color vision, such as the CIE's 1931 and 1964 color matching functions used extensively in motion picture color management, are deficient in recognizing expected human vision variability. Many researchers have confirmed the magnitude of observer metamerism in color matching in both uniform colors and imagery, but few have shown explicit color management with an aim of minimizing observer perception variability. This research shows not only that observer metamerism influences can be quantitatively predicted and confirmed psychophysically, but that intentionally engineered multiprimary displays employing more than three primaries can offer increased color gamut with drastically improved consistency of experience. To this end, a seven-channel prototype display has been constructed based on observer metamerism models and color difference indices derived from the latest color vision demographic research.
This display has been further proven in forced-choice paired comparison tests to deliver superior color matching to reference stimuli versus both contemporary standard RGB cinema projection and recently ratified standard laser projection across a large population of color-normal observers.
Pilot response to peripheral vision cues during instrument flying tasks.
DOT National Transportation Integrated Search
1968-02-01
In an attempt to more closely associate the visual aspects of instrument flying with that of contact flight, a study was made of human response to peripheral vision cues relating to aircraft roll attitude. Pilots, ranging from 52 to 12,000 flying hou...
Jian, Bo-Lin; Peng, Chao-Chung
2017-06-15
Due to the direct influence of night vision equipment availability on the safety of night-time aerial reconnaissance, maintenance needs to be carried out regularly. Unfortunately, some defects are not easy to observe or are not even detectable by human eyes. As a consequence, this study proposed a novel automatic defect detection system for aviator's night vision imaging systems AN/AVS-6(V)1 and AN/AVS-6(V)2. An auto-focusing process consisting of a sharpness calculation and a gradient-based variable step search method is applied to achieve an automatic detection system for honeycomb defects. This work also developed a test platform for sharpness measurement. It demonstrates that the honeycomb defects can be precisely recognized and the number of the defects can also be determined automatically during the inspection. Most importantly, the proposed approach significantly reduces the time consumption, as well as human assessment error during the night vision goggle inspection procedures.
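The sharpness calculation at the heart of the auto-focusing step is not specified in detail here; a common gradient-energy measure is sketched below with central differences (our stand-in, not necessarily the authors' metric):

```python
import numpy as np

def gradient_sharpness(image):
    """Gradient-energy sharpness score (higher = sharper focus).

    image: 2-D grayscale array. Central differences stand in for a
    full Sobel kernel to keep the sketch dependency-free.
    """
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

# A sharp step edge scores higher than a smooth ramp over the same
# intensity range -- the property an auto-focus search needs to climb.
edge = np.zeros((32, 32)); edge[:, 16:] = 1.0
ramp = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
```

A gradient-based variable step search would then move the focus motor in the direction of increasing score, shrinking the step as the score change flattens near peak focus.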
A sustainable landscape ecosystem design: a case study.
Huang, Lei-Chang; Ye, Shu-Hong; Gu, Xun; Cao, Fu-Cun; Fan, Zheng-Qiu; Wang, Xiang-Rong; Wu, Ya-Sheng; Wang, Shou-Bing
2010-05-01
Landscape planning is clearly ecologically and socially relevant. Concern about the sustainable relationship between humans and the environment is now a driving paradigm for this profession. However, the explosion of the sustainable landscape in China is a very recent phenomenon. What is a sustainable landscape? How is it realized in practice? In this article, on the basis of a review of the history and perplexities of the Chinese landscape and an analysis of the sustainable landscape, the ecothinking model, an implementation tool for sustainable landscape, was developed; it applies ecothinking to the vision, culture, conservation and development of a site, and to the process of public participation, for a harmonious relationship between humans and the environment. A case study of the south entrance of the TongNiuling Scenic Area was then carried out, in which the optimum scenario was chosen from among three models according to the ecothinking model, to illustrate its construction and how to achieve a sustainable landscape.
NASA Astrophysics Data System (ADS)
Volpe, Peter A.
This thesis presents analytical models, finite element models and experimental data to investigate the response of the human eye to loads that can be experienced in a non-supine sleeping position. The hypothesis being investigated is that non-supine sleeping positions can lead to stress, strain and deformation of the eye, as well as changes in intraocular pressure (IOP), that may exacerbate vision loss in individuals who have glaucoma. To investigate the quasi-static changes in stress and internal pressure, a fluid-structure interaction simulation was performed on an axisymmetric model of an eye. Common aerospace engineering methods for analyzing pressure vessels and hyperelastic structural walls are applied to develop a suitable model. The quasi-static pressure increase was used in an iterative code to analyze changes in IOP over time.
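The pressure-vessel analysis borrowed from aerospace practice reduces, in the thin-walled spherical case, to the classic membrane-stress formula; a sketch with rough ocular numbers (illustrative values, not the thesis's FEM results):

```python
def sphere_wall_stress(pressure, radius, thickness):
    """Membrane stress in a thin-walled spherical pressure vessel:
    sigma = p * r / (2 * t)."""
    return pressure * radius / (2.0 * thickness)

# Rough ocular numbers: IOP ~15 mmHg (~2000 Pa), globe radius ~12 mm,
# scleral wall thickness ~0.8 mm.
stress_pa = sphere_wall_stress(2000.0, 0.012, 0.0008)   # ~15 kPa
```

The thin-wall formula only approximates the eye's hyperelastic, variable-thickness wall, which is why the thesis pairs it with a fluid-structure interaction FEM.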
Whole-eye transplantation: a look into the past and vision for the future.
Bourne, D; Li, Y; Komatsu, C; Miller, M R; Davidson, E H; He, L; Rosner, I A; Tang, H; Chen, W; Solari, M G; Schuman, J S; Washington, K M
2017-02-01
Blindness afflicts ~39 million people worldwide. Retinal ganglion cells are unable to regenerate, making this condition irreversible in many cases. Whole-eye transplantation (WET) provides the opportunity to replace diseased retinal ganglion cells, as well as the entire optical system and surrounding facial tissue, if necessary. Recent success in face transplantation demonstrates that this may be a promising treatment for what has until now been an incurable condition. An animal model for WET must be established to further enhance our knowledge of nerve regeneration, immunosuppression, and technical aspects of surgery. A systematic review of the literature was performed to evaluate studies describing animal models for WET. Only articles in which the eye was completely enucleated and reimplanted were included. Study methods and results were compared. In the majority of the published literature, WET results in recovery of vision in cold-blooded vertebrates. There are a few instances in which mammalian WET models demonstrate survival of the transplanted tissue following neurovascular anastomosis and the ability to maintain brief electroretinogram activity in the new host. In this study we review cold-blooded vertebrate and mammalian animal models for WET and discuss prospects for future research toward translation to human eye transplantation.
Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.
Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming
2018-05-01
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
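The architectural idea, recurrent connections on top of per-frame CNN features so that spatial representations accumulate over time, can be sketched in a few lines (a generic simple-RNN update; the layer sizes and tanh nonlinearity are illustrative, not the paper's exact architecture):

```python
import numpy as np

def recurrent_layer(frame_feats, w_in, w_rec):
    """Simple recurrent pass over per-frame feature vectors.

    frame_feats: (T, d_in) features from a (hypothetical) CNN layer,
    one row per video frame. w_in: (d_in, d_h) input weights,
    w_rec: (d_h, d_h) recurrent weights. The hidden state lets
    spatial features be remembered and accumulated over time.
    """
    n_frames = frame_feats.shape[0]
    h = np.zeros(w_rec.shape[0])
    states = np.empty((n_frames, w_rec.shape[0]))
    for t in range(n_frames):
        h = np.tanh(frame_feats[t] @ w_in + h @ w_rec)  # recurrent update
        states[t] = h
    return states

rng = np.random.default_rng(1)
feats = rng.standard_normal((8, 16))           # 8 frames, 16-dim features
out = recurrent_layer(feats,
                      rng.standard_normal((16, 4)) * 0.1,
                      rng.standard_normal((4, 4)) * 0.1)
```

Attaching such a layer at each depth of the CNN, as the abstract describes, gives every level of the spatial hierarchy its own temporal memory, which is what produces the hierarchy of temporal receptive windows.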
The Iowa Model for Pediatric Low Vision Services.
ERIC Educational Resources Information Center
Wilkinson, Mark E.; Stewart, Ian; Trantham, Carole S.
2000-01-01
This article describes the evolution of Iowa's model of low vision care for students with visual impairments. It reviews the benefits of a transdisciplinary team approach to providing low vision services for children with visual impairments, including a decrease in the number of students requiring large-print materials and related costs. (Contains…
Robust Spatial Autoregressive Modeling for Hardwood Log Inspection
Dongping Zhu; A.A. Beex
1994-01-01
We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...
Behavioral impairments in animal models for zinc deficiency
Hagmeyer, Simone; Haderspeck, Jasmin Carmen; Grabrucker, Andreas Martin
2015-01-01
Apart from teratogenic and pathological effects of zinc deficiency such as the occurrence of skin lesions, anorexia, growth retardation, depressed wound healing, altered immune function, impaired night vision, and alterations in taste and smell acuity, characteristic behavioral changes have been observed in animal models and human patients suffering from zinc deficiency. Given that an estimated 17% of the worldwide population is at risk for zinc deficiency, and that zinc deficiency is associated with a variety of brain disorders and disease states in humans, it is of major interest to investigate how these behavioral changes affect the individual and the putative course of a disease. Thus, here, we provide a state-of-the-art overview of the behavioral phenotypes observed in various models of zinc deficiency, among them environmentally produced zinc-deficient animals as well as animal models based on a genetic alteration of a particular zinc homeostasis gene. Finally, we compare the behavioral phenotypes to the human condition of mild to severe zinc deficiency and provide a model of how zinc deficiency, which is associated with many neurodegenerative and neuropsychological disorders, might modify the disease pathologies. PMID:25610379
An Optic Nerve Crush Injury Murine Model to Study Retinal Ganglion Cell Survival
Tang, Zhongshu; Zhang, Shuihua; Lee, Chunsik; Kumar, Anil; Arjunan, Pachiappan; Li, Yang; Zhang, Fan; Li, Xuri
2011-01-01
Injury to the optic nerve can lead to axonal degeneration, followed by a gradual death of retinal ganglion cells (RGCs), which results in irreversible vision loss. Examples of such diseases in humans include traumatic optic neuropathy and optic nerve degeneration in glaucoma. The latter is characterized by typical changes in the optic nerve head, progressive optic nerve degeneration, and loss of retinal ganglion cells which, if uncontrolled, lead to vision loss and blindness. The optic nerve crush (ONC) injury mouse model is an important experimental disease model for traumatic optic neuropathy, glaucoma, and related conditions. In this model, the crush injury to the optic nerve leads to gradual retinal ganglion cell apoptosis. This disease model can be used to study the general processes and mechanisms of neuronal death and survival, which is essential for the development of therapeutic measures. In addition, pharmacological and molecular approaches can be used in this model to identify and test potential therapeutic reagents for treating different types of optic neuropathy. Here, we provide a step-by-step demonstration of (I) baseline retrograde labeling of retinal ganglion cells (RGCs) at day 1, (II) optic nerve crush injury at day 4, (III) harvesting of the retinae and analysis of RGC survival at day 11, and (IV) a representative result. PMID:21540827
McBride, Sebastian; Huelse, Martin; Lee, Mark
2013-01-01
Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
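Requirements 5 and 6 in the list above lend themselves to a tiny sketch (our reading of "a ratio of excitation and inhibition"; the paper gives no explicit formula):

```python
def task_relevance(excitation, inhibition):
    """Requirement 6: task relevance as the ratio of excitation to
    total excitation plus inhibition, giving a value in (0, 1)."""
    return excitation / (excitation + inhibition)

def should_saccade(relevance, threshold=0.6):
    """Requirement 5: a threshold function eliciting a saccade action."""
    return relevance > threshold

# A strongly excited, weakly inhibited location triggers a saccade.
fire = should_saccade(task_relevance(3.0, 1.0))   # 0.75 > 0.6
```

In the model's terms, such relevance values would populate the lateral intraparietal "priority map", with the threshold deciding which peak wins the next saccade.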
NASA Technical Reports Server (NTRS)
Goldin, Daniel S.
1994-01-01
Under Dan Goldin's leadership, NASA is reinventing itself. In the process, the agency is also searching for a vision to define its role, both as a US Government agency and as a leading force in humanity's exploration of space. An adaptation of Goldin's speech to the American Geophysical Union on 26 May 1994, in which he proposes one possible unifying vision, is presented.
Quality metrics for sensor images
NASA Technical Reports Server (NTRS)
Ahumada, AL
1993-01-01
Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. 
Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
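The discrimination-model-as-quality-metric idea described above can be caricatured in a few lines: pool per-pixel differences between images A and B, then map the pooled distance through a psychometric function to get the probability that an observer reports them as different. The pooling exponent, sensitivity constant, and Weibull slope below are illustrative assumptions, not the calibrated parameters of the Ames models.

```python
import numpy as np

def discrimination_probability(a, b, sensitivity=0.05, beta=3.5, q=4.0):
    """Toy visual-discrimination metric in the spirit described above.

    Pools normalized per-pixel differences with a Minkowski norm and maps
    the pooled distance through a Weibull psychometric function, giving the
    probability that an observer reports A != B.  All constants are
    illustrative placeholders, not fitted to human data.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = np.abs(a - b) / 255.0                   # normalized per-pixel difference
    d = np.mean(diff ** q) ** (1.0 / q)            # Minkowski pooling
    # Weibull psychometric function; identical images give exactly 0.5,
    # the two-alternative guessing rate.
    return 1.0 - 0.5 * np.exp(-(d / sensitivity) ** beta)

a = np.full((8, 8), 128.0)
b = a.copy()
p_same = discrimination_probability(a, a)        # identical -> 0.5 (chance)
p_diff = discrimination_probability(a, b + 40.0) # clearly visible difference
```

A real AVID metric would replace the raw pixel difference with the output of a spatial (or spatio-temporal) visual model before pooling.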
Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.
Sanchez, Yerly; Pinzon, David; Zheng, Bin
2017-10-01
To examine reaction time when human subjects process information presented in the visual channel, under both direct vision and a virtual rehabilitation environment, while walking. The visual stimuli were eight math problems displayed in the peripheral vision of seven healthy human subjects, in a virtual rehabilitation training environment (computer-assisted rehabilitation environment, CAREN) and in a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment, and their reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill-training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: eye tracking is a reliable tool that can be employed in rehabilitation virtual environments, and reaction time differs between direct vision and virtual environments.
Rebalancing binocular vision in amblyopia.
Ding, Jian; Levi, Dennis M
2014-03-01
Humans with amblyopia have an asymmetry in binocular vision: neural signals from the amblyopic eye are suppressed in the cortex by the fellow eye. The purpose of this study was to develop new models and methods for rebalancing this asymmetric binocular vision by manipulating the contrast and luminance in the two eyes. We measured the perceived phase of a cyclopean sinewave by asking normal and amblyopic observers to indicate the apparent location (phase) of the dark trough in a horizontal cyclopean sine wave relative to a black horizontal reference line, and used the same stimuli to measure perceived contrast by matching the binocularly combined contrast to a standard contrast presented to one eye. We varied both the relative contrast and the relative luminance of the two eyes' inputs in order to rebalance the asymmetric binocular vision. Amblyopic binocular vision becomes increasingly asymmetric as stimulus contrast or spatial frequency increases. Reanalysing our previous data, we found that, at a given spatial frequency, the binocular asymmetry could be described by a log-linear formula with two parameters, one for the maximum asymmetry and one for the rate at which the binocular system becomes asymmetric as contrast increases. Our new data demonstrate that reducing the dominant eye's mean luminance reduces its suppression of the non-dominant eye, and therefore rebalances the asymmetric binocular vision. While the binocular asymmetry in amblyopic vision can be rebalanced by manipulating the relative contrast or luminance of the two eyes at a given spatial frequency and contrast, it is very difficult or even impossible to rebalance the asymmetry for all visual conditions. Nonetheless, wearing a neutral density filter before the dominant eye (or increasing the mean luminance in the non-dominant eye) may be more beneficial than the traditional method of patching the dominant eye for treating amblyopia.
© 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
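The abstract above specifies only the shape of the asymmetry curve: log-linear in contrast, with one parameter for the maximum asymmetry and one for the rate of growth. The sketch below is one plausible parameterization of that description; the functional form, constants, and threshold contrast are assumptions for illustration, not Ding and Levi's fitted model.

```python
import math

def binocular_asymmetry(contrast, a_max=0.8, rate=0.5, c0=0.01):
    """Hypothetical log-linear binocular-asymmetry curve.

    Asymmetry grows in proportion to log10(contrast / c0) at slope
    `rate` and saturates at `a_max`.  All parameter values here are
    illustrative assumptions, not fitted to amblyopic data.
    """
    if contrast <= c0:
        return 0.0                                  # below threshold: symmetric
    return min(a_max, rate * math.log10(contrast / c0))
```

At a different spatial frequency, the abstract implies, the same form would apply with different values of the two parameters.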
Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.
van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline
2010-11-01
In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims at more differentiated and better informed visions of future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology, by encouraging discussions about the quality of positive and negative visions of the future of robotics.
The Effects of Target Orientation on the Dynamic Contrast Sensitivity Function
1994-01-01
[OCR fragments of the report form and reference list; recoverable citations include: tests: A comparative evaluation. Journal of Applied Psychology, 50, 460-466; Burg, A. (1971). Vision and driving: A report on research. Human Factors, 13, 79-87; Campbell, F. W. ...]
Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model
Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon
2015-01-01
In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method comprises depth-MHI-HOG (DMH) feature extraction, action modeling, action spotting, and recognition. First, to effectively separate the foreground from the background, we propose a method called DMH. It provides a standard structure for segmenting images and extracting features using depth information, motion history images (MHI), and histograms of oriented gradients (HOG). Second, action modeling is performed to model various actions using the extracted features: sequences of actions are created through k-means clustering, and these sequences constitute the HMM input. Third, a method of action spotting is proposed to filter meaningless actions from continuous actions and to identify precise start and end points of actions; by employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on these start and end points. We evaluate recognition performance by obtaining and comparing the probabilities of input sequences under the action models and the spotter model. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172
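The clustering step above, which turns per-frame features into the discrete observation symbols a hidden Markov model consumes, can be sketched as follows. The 2-D toy vectors stand in for the DMH descriptors (which the abstract does not specify numerically), and the deterministic first-k initialization is a simplification of the usual randomized k-means initialization.

```python
import numpy as np

def kmeans_codebook(frames, k=3, iters=10):
    """Quantize per-frame feature vectors into discrete symbols with a
    tiny k-means, mirroring the clustering step that turns feature
    sequences into HMM observation sequences.  Deterministic first-k
    initialization is used here for reproducibility (a simplification)."""
    x = np.asarray(frames, dtype=float)
    centers = x[:k].copy()                              # deterministic init
    for _ in range(iters):
        # squared distance from every frame to every center
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)                   # nearest-center symbol
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels

# Six frames drawn from three well-separated "poses" -> a 3-symbol
# sequence ready to feed a discrete HMM.
frames = [[0, 0], [5, 5], [10, 0], [0.1, 0], [5.1, 5], [10, 0.2]]
centers, symbols = kmeans_codebook(frames, k=3)
```

In the full pipeline, one such symbol sequence per action clip would be used to train one HMM per action class plus the spotter model.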
Sensory Information Processing and Symbolic Computation
1973-12-31
[OCR fragments of the report: a ringing or ghost-image phenomenon surrounding high-contrast features plagues image deblurring methods when working with high signal-to-noise ratios; figure captions include "The Impulse Response of an All-Pass Random Phase Filter" and "Unsmoothed Log Spectra of the Sentence 'The pipe began to...'"; topics include automatic deblurring of images, linear predictive coding of speech, and the refinement and application of mathematical models of human vision.]
Wilderness Vision Quest: A Journey of Transformation.
ERIC Educational Resources Information Center
Brown, Michael H.
The Wilderness Vision Quest is an outdoor retreat which helps participants touch, explore, and develop important latent human resources such as imagination, intuition, creativity, inspiration, and insight. Through the careful and focused use of techniques such as deep relaxation, reflective writing, visualization, guided imagery, symbolic drawing,…
Spotting and tracking good biometrics with the human visual system
NASA Astrophysics Data System (ADS)
Szu, Harold; Jenkins, Jeffrey; Hsu, Charles
2011-06-01
We mathematically model the mammalian visual system's (VS) capability of spotting objects. How can a hawk see a tiny running rabbit from miles above ground? How could that rabbit see the approaching hawk? This predator-prey interaction draws parallels with spotting a familiar person in a crowd. We assume that mammalian eyes use peripheral vision to perceive unexpected changes relative to memory, and then use central vision (the fovea) to pay attention. The difference between an image and our memory of that image is usually small, mathematically known as a 'sparse representation'. The VS communicates with the brain using a finite reservoir of neurotransmitters, which produces an on-center, off-surround Hubel/Wiesel 'Mexican hat' receptive field. This is the basis of our model. This change-detection mechanism could drive our attention, allowing us to hit a curveball: if we are about to hit a baseball, what information extracted by our visual system tells us where to swing? Physical human features such as faces, irises, and fingerprints have been successfully used for identification (biometrics) for decades, recently including voice and walking style for identification from further away. Biologically, humans must use a change-detection strategy to achieve an ordered sparseness, and use a sigmoid threshold for noisy measurements in our hetero-associative memory (HAM) classifier for fault-tolerant recall. Human biometrics is dynamic, and therefore involves more than just the surface, requiring a three-dimensional measurement (e.g., Daugman/Gabor iris features). Such a measurement can be achieved using the partial coherence of a laser's reflection from a 3-D biometric surface, creating more degrees of freedom (d.o.f.) to meet the Army's challenge of distant biometrics. Thus, one might be able to offset the standoff loss of the less distinguishable degrees of freedom.
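The on-center, off-surround receptive field invoked above is commonly approximated by a difference of Gaussians, whose 1-D profile is the familiar 'Mexican hat'. The sketch below builds that profile; the two sigma values are illustrative, not fitted to physiology.

```python
import numpy as np

def mexican_hat(x, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians ('Mexican hat') receptive-field profile:
    a narrow excitatory center Gaussian minus a broader inhibitory
    surround Gaussian, each normalized to unit area.  Sigma values are
    illustrative assumptions."""
    def gauss(sigma):
        return np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return gauss(sigma_center) - gauss(sigma_surround)

x = np.linspace(-6.0, 6.0, 121)
rf = mexican_hat(x)          # positive at the center, negative flanks
```

Because both Gaussians integrate to one, the kernel is approximately zero-mean: convolving it with an image suppresses uniform regions and responds mainly to changes, which is the sparse, change-detecting behavior the model relies on.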
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Adding human-like interaction techniques to multimodal HCI could become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic clue is achieved using a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
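The HSV threshold model above uses a general polyhedron of threshold values, adaptively managed per scene. A minimal software sketch of the same idea replaces the polyhedron with a fixed axis-aligned box of bounds; the bounds below are illustrative assumptions, not the paper's values.

```python
import colorsys

def is_skin_hsv(r, g, b, h_max=50 / 360, s_min=0.23, s_max=0.68, v_min=0.35):
    """Box-threshold approximation of an HSV skin-color filter.

    Classifies one RGB pixel (0-255 per channel) as skin if its hue,
    saturation, and value fall inside a fixed box.  The real system uses
    a polyhedron of thresholds tuned adaptively; these fixed bounds are
    illustrative only.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h <= h_max and s_min <= s <= s_max and v >= v_min
```

Applied per pixel in a single pass over a frame, this yields the kind of binary skin mask the hardware preprocessing stage produces; the adaptive management unit would correspond to updating the bounds between frames.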
A model of the hierarchy of behaviour, cognition, and consciousness.
Toates, Frederick
2006-03-01
Processes comparable in important respects to those underlying human conscious and non-conscious processing can be identified in a range of species, and it is argued that these reflect evolutionary precursors of the human processes. A distinction is drawn between two types of processing: (1) stimulus-based and (2) higher-order. For higher-order processing in humans, the operations of processing are themselves associated with conscious awareness. Conscious awareness sets the context for stimulus-based processing, and its end-point is accessible to conscious awareness; however, the mechanics of the translation between stimulus and response proceed without conscious control. The paper argues that higher-order processing is an evolutionary addition to stimulus-based processing. The model's value is shown for gaining insight into a range of phenomena and their link with consciousness, including brain damage, learning, memory, development, vision, emotion, motor control, reasoning, the voluntary-versus-involuntary debate, and mental disorder.
A computational visual saliency model based on statistics and machine learning.
Lin, Ru-Je; Lin, Wei-Song
2014-08-01
Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
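The "simple intersection operation" combining the three property maps above can be read as a pointwise product. The toy maps below are illustrative; in the actual system the Feature-Prior, Position-Prior, and Feature-Distribution maps come from similarity computation, SVR, training-sample statistics, and information-theoretic features.

```python
import numpy as np

def combine_saliency(feature_prior, position_prior, feature_dist):
    """Pointwise-product 'intersection' of three property maps,
    normalized to [0, 1].  This realizes only the combination step;
    producing the individual maps is beyond this sketch."""
    s = feature_prior * position_prior * feature_dist
    peak = s.max()
    return s / peak if peak > 0 else s

fp = np.array([[0.2, 0.9], [0.1, 0.8]])   # feature prior
pp = np.array([[1.0, 1.0], [0.0, 1.0]])   # position prior suppresses a corner
fd = np.array([[0.5, 0.9], [0.5, 0.9]])   # feature distribution
smap = combine_saliency(fp, pp, fd)
```

The product form means a region is salient only if all three properties agree, which is the intended conjunctive ("intersection") behavior.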
White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J
2017-09-01
Much of what we know about human colour perception has come from psychophysical studies conducted in tightly controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
Simulation Based Acquisition for NASA's Office of Exploration Systems
NASA Technical Reports Server (NTRS)
Hale, Joe
2004-01-01
In January 2004, President George W. Bush unveiled his vision for NASA to advance U.S. scientific, security, and economic interests through a robust space exploration program. This vision includes the goal to extend human presence across the solar system, starting with a human return to the Moon no later than 2020, in preparation for human exploration of Mars and other destinations. In response to this vision, NASA has created the Office of Exploration Systems (OExS) to develop the innovative technologies, knowledge, and infrastructures to explore and support decisions about human exploration destinations, including the development of a new Crew Exploration Vehicle (CEV). Within the OExS organization, NASA is implementing Simulation Based Acquisition (SBA), a robust Modeling & Simulation (M&S) environment integrated across all acquisition phases and programs/teams, to make the realization of the President's vision more certain. Executed properly, SBA will foster better informed, timelier, and more defensible decisions throughout the acquisition life cycle. By doing so, SBA will improve the quality of NASA systems and speed their development, at less cost and risk than would otherwise be the case. SBA is a comprehensive, Enterprise-wide endeavor that necessitates an evolved culture, a revised spiral acquisition process, and an infrastructure of advanced Information Technology (IT) capabilities. SBA encompasses all project phases (from requirements analysis and concept formulation through design, manufacture, training, and operations), professional disciplines, and activities that can benefit from employing SBA capabilities.
SBA capabilities include: developing and assessing system concepts and designs; planning manufacturing, assembly, transport, and launch; training crews, maintainers, launch personnel, and controllers; planning and monitoring missions; responding to emergencies by evaluating effects and exploring solutions; and communicating across the OExS enterprise, within the Government, and with the general public. The SBA process features empowered collaborative teams (including industry partners) to integrate requirements, acquisition, training, operations, and sustainment. The SBA process also utilizes an increased reliance on and investment in M&S to reduce design risk. SBA originated as a joint Industry and Department of Defense (DoD) initiative to define and integrate an acquisition process that employs robust, collaborative use of M&S technology across acquisition phases and programs. The SBA process was successfully implemented in the Air Force s Joint Strike Fighter (JSF) Program.
The Humans in Space Art Program - Engaging the Mind, and the Heart, in Science
NASA Astrophysics Data System (ADS)
McPhee, J. C.
2017-12-01
How can we do a better job communicating about space, science and technology, getting more people engaged, understanding the impact that future space exploration will have on their lives, and thinking about how they can contribute? Humans naturally express their visions and interests through various forms of artistic expression, because art is inherently capable of expressing not only the "what and how" but also the "why" of ideas. Offering opportunities that integrate space, science and technology with art allows more people to learn about space, relay their visions of the future, and discuss why exploration and research are important. The Humans in Space Art Program, managed by the nonprofit SciArt Exchange, offers a science-integrated-with-art opportunity. Through international online competitions, we invite participants to share their visions of the future using visual, literary, musical and video art. We then use their artwork in multi-media displays and live performances online, locally worldwide, and in space to engage listeners and viewers. The Program has three projects, targeting different types of participants: the Youth Competition (ages 10-18), the Challenge (college and early career) and Celebrity Artist-Fed Engagement (CAFÉ: professional artists). To date, the Program has received 3400 artworks from over 52 countries and displayed the artwork in 110 multi-media events worldwide, on the International Space Station, and bounced off the Moon. Hundreds of thousands of people have thus viewed artwork considering topics such as: why we explore; where and how we will go and when; and what we will do when we arrive. The Humans in Space Art Program is a flexible public engagement model applicable to multiple settings, including classrooms, art and entertainment events, and scientific conferences. It provides a system to accessibly inspire all ages about space, science and technology, making them hungry to learn more and to take a personal role.
Pons, Carmen; Mazade, Reece; Jin, Jianzhong; Dul, Mitchell W; Zaidi, Qasim; Alonso, Jose-Manuel
2017-12-01
Artists and astronomers noticed centuries ago that humans perceive dark features in an image differently from light ones; however, the neuronal mechanisms underlying these dark/light asymmetries remained unknown. Based on computational modeling of neuronal responses, we have previously proposed that such perceptual dark/light asymmetries originate from a luminance/response saturation within the ON retinal pathway. Consistent with this prediction, here we show that stimulus conditions that increase ON luminance/response saturation (e.g., dark backgrounds) or its effect on light stimuli (e.g., optical blur) impair the perceptual discrimination and salience of light targets more than dark targets in human vision. We also show that, in cat visual cortex, the magnitude of the ON luminance/response saturation remains relatively constant under a wide range of luminance conditions that are common indoors, and only shifts away from the lowest luminance contrasts under low mesopic light. Finally, we show that the ON luminance/response saturation affects visual salience mostly when the high spatial frequencies of the image are reduced by poor illumination or optical blur. Because both low luminance and optical blur are risk factors in myopia, our results suggest a possible neuronal mechanism linking myopia progression with the function of the ON visual pathway.
Component-based target recognition inspired by human vision
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Agyepong, Kwabena
2009-05-01
In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars in a picture (with no further information), you may build a virtual image in your mind from the task (or target) description before looking at the picture; specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates this human recognition process. The component templates (equivalent to the virtual image in mind) of the target (a car) are manually decomposed from the target feature image. Meanwhile, the edges of the test image are extracted using a difference-of-Gaussians (DOG) model that simulates the spatiotemporal response in the visual process. A phase correlation matching algorithm is then applied to match the templates with the test edge image. If all key component templates are matched with the examined object, the object is recognized as the target. Besides recognition accuracy, we also investigate whether the method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
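The phase-correlation matching step named above can be sketched in a few lines: the normalized cross-power spectrum of two images has an inverse FFT that peaks at their relative shift. The synthetic binary "component" below stands in for a DOG edge template; the DOG edge-extraction stage and the paper's car imagery are omitted.

```python
import numpy as np

def phase_correlation(image, template):
    """Locate `template` within `image` (same size, template at the
    origin) via the normalized cross-power spectrum.  Returns the
    (row, col) of the correlation peak, i.e. the estimated shift."""
    Fi, Ft = np.fft.fft2(image), np.fft.fft2(template)
    cross = Fi * np.conj(Ft)
    # Normalize magnitudes so only phase remains (epsilon avoids 0/0).
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

img = np.zeros((32, 32)); img[10:14, 20:24] = 1.0   # "component" at (10, 20)
tmpl = np.zeros((32, 32)); tmpl[0:4, 0:4] = 1.0     # template at the origin
shift = phase_correlation(img, tmpl)                # -> location of the match
```

In the full method, one such correlation is run per component template over the DOG edge image, and the object is accepted only if all key component templates find a consistent match.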
The eyes and vision of butterflies.
Arikawa, Kentaro
2017-08-15
Butterflies use colour vision when searching for flowers. Unlike the trichromatic retinas of humans (blue, green and red cones; plus rods) and honeybees (ultraviolet, blue and green photoreceptors), butterfly retinas typically have six or more photoreceptor classes with distinct spectral sensitivities. The eyes of the Japanese yellow swallowtail (Papilio xuthus) contain ultraviolet, violet, blue, green, red and broad-band receptors, with each ommatidium housing nine photoreceptor cells in one of three fixed combinations. The Papilio eye is thus a random patchwork of three types of spectrally heterogeneous ommatidia. To determine whether Papilio use all of their receptors to see colours, we measured their ability to discriminate monochromatic lights of slightly different wavelengths. We found that Papilio can detect differences as small as 1-2 nm in three wavelength regions, rivalling human performance. We then used mathematical modelling to infer which photoreceptors are involved in wavelength discrimination. Our simulation indicated that the Papilio vision is tetrachromatic, employing the ultraviolet, blue, green and red receptors. The random array of three ommatidial types is a common feature in butterflies. To address the question of how the spectrally complex eyes of butterflies evolved, we studied their developmental process. We have found that the development of butterfly eyes shares its molecular logic with that of Drosophila: the three-way stochastic expression pattern of the transcription factor Spineless determines the fate of ommatidia, creating the random array in Papilio. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.
Mustari, Michael J
2017-12-01
Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Two-Year Community: Implementing Vision and Change in a Community College Classroom
ERIC Educational Resources Information Center
Lysne, Steven; Miller, Brant
2015-01-01
The purpose of this article is to describe a model for teaching introductory biology coursework within the Vision and Change framework (American Association for the Advancement of Science, 2011). The intent of the new model is to transform instruction by adopting an active, student-centered, and inquiry-based pedagogy consistent with Vision and…
Maclean, Carla L; Brimacombe, C A Elizabeth; Lindsay, D Stephen
2013-12-01
The current study addressed tunnel vision in industrial incident investigation by experimentally testing how a priori information and a human bias (generated via the fundamental attribution error or correspondence bias) affected participants' investigative behavior as well as the effectiveness of a debiasing intervention. Undergraduates and professional investigators engaged in a simulated industrial investigation exercise. We found that participants' judgments were biased by knowledge about the safety history of either a worker or piece of equipment and that a human bias was evident in participants' decision making. However, bias was successfully reduced with "tunnel vision education." Professional investigators demonstrated a greater sophistication in their investigative decision making compared to undergraduates. The similarities and differences between these two populations are discussed. (c) 2013 APA, all rights reserved
The contribution of stereo vision to the control of braking.
Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu
2008-03-01
In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10 m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, peak deceleration occurred earlier in drivers with weak stereo vision, and stopping distance was greater in those lacking stereo vision. A lack of stereo vision was thus associated with more prudent braking behaviour, in which the driver allowed a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system or by a systematic underestimation of the remaining distance due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.
Deep Neural Networks as a Computational Model for Human Shape Sensitivity
Op de Beeck, Hans P.
2016-01-01
Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional ‘deep’ neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic to human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated to form the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, which is so characteristic in human development. PMID:27124699
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to estimating the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a particular category, for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms to this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, covering milestone works and recent advancements. Following a standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Modeling methods are categorized in two ways in this survey: as top-down versus bottom-up methods, and as generative versus discriminative methods. Because one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and describes frequently used error measurement methods. PMID:27898003
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway™ Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited to aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Vision Effects: A Critical Gap in Educational Leadership Research
ERIC Educational Resources Information Center
Kantabutra, Sooksan
2010-01-01
Purpose: Although leaders are widely believed to employ visions, little is known about what constitutes an "effective" vision, particularly in the higher education sector. This paper seeks to propose a research model for examining relationships between vision components and performance of higher education institutions, as measured by financial…
A Model for Integrating Low Vision Services into Educational Programs.
ERIC Educational Resources Information Center
Jose, Randall T.; And Others
1988-01-01
A project integrating low-vision services into children's educational programs comprised four components: teacher training, functional vision evaluations for each child, a clinical examination by an optometrist, and follow-up visits with the optometrist to evaluate the prescribed low-vision aids. Educational implications of the project and project…
Emotion-Induced Trade-Offs in Spatiotemporal Vision
ERIC Educational Resources Information Center
Bocanegra, Bruno R.; Zeelenberg, Rene
2011-01-01
It is generally assumed that emotion facilitates human vision in order to promote adaptive responses to a potential threat in the environment. Surprisingly, we recently found that emotion in some cases impairs the perception of elementary visual features (Bocanegra & Zeelenberg, 2009b). Here, we demonstrate that emotion improves fast temporal…
Times Past, Times to Come: The Influence of the Past on Visions of the Future.
ERIC Educational Resources Information Center
Masson, Sophie
1997-01-01
Discussion of visions of the future that are based on past history highlights imaginative literature that deals with the human spirit rather than strictly technological innovations. Medieval society, the Roman Empire, mythological atmospheres, and the role of writers are also discussed. (LRW)
Institutional Vision and Academic Advising
ERIC Educational Resources Information Center
Abelman, Robert; Molina, Anthony D.
2006-01-01
Quality academic advising in higher education is the product of a multitude of elements not the least of which is institutional vision. By recognizing and embracing an institution's concept of its capabilities and the kinds of educated human beings it is attempting to cultivate, advisors gain an invaluable apparatus to guide the provision of…
75 FR 10808 - CIBA Vision Corp.; Withdrawal of Color Additive Petitions
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-09
...) CIBA Vision Corp.; Withdrawal of Color Additive Petitions AGENCY: Food and Drug Administration, HHS... Director, Office of Food Additive Safety, Center for Food Safety and Applied Nutrition. [FR Doc. 2010-4972... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket Nos. FDA-2005-C-0245...
Teaching for Humanity in a Neoliberal World: Visions of Education in Serbia
ERIC Educational Resources Information Center
Dull, Laura J.
2012-01-01
In Serbia, teachers and policy makers express different and sometimes competing visions of education. Teachers express their desire to "awaken" students by using progressive pedagogies, while European Union and World Bank reformers appropriate progressive education in the service of neoliberal goals. The research findings presented here…
Photopic transduction implicated in human circadian entrainment
NASA Technical Reports Server (NTRS)
Zeitzer, J. M.; Kronauer, R. E.; Czeisler, C. A.
1997-01-01
Despite the preeminence of light as the synchronizer of the circadian timing system, the phototransductive machinery in mammals which transmits photic information from the retina to the hypothalamic circadian pacemaker remains largely undefined. To determine the class of photopigments which this phototransductive system uses, we exposed a group (n = 7) of human subjects to red light below the sensitivity threshold of a scotopic (i.e. rhodopsin/rod-based) system, yet of sufficient strength to activate a photopic (i.e. cone-based) system. Exposure to this light stimulus was sufficient to significantly reset the human circadian pacemaker, indicating that the cone pigments which mediate color vision can also mediate circadian vision.
NASA Astrophysics Data System (ADS)
Juday, Richard D.; Loshin, David S.
1989-06-01
We are investigating image coordinate transformations that might be used in a low-vision aid for human patients. These patients typically have field defects with localized retinal dysfunction, predominantly central (age-related maculopathy) or peripheral (retinitis pigmentosa). Previously we presented simple eccentricity-only remappings that do not maintain conformality. In this report we present our initial attempts at developing images that hold quasi-conformality after remapping. Although the quasi-conformal images may have less local distortion, they contain discontinuities that may contraindicate this type of transformation for the low-vision application.
Computer vision in cell biology.
Danuser, Gaudenz
2011-11-23
Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.
Liquid lens: advances in adaptive optics
NASA Astrophysics Data System (ADS)
Casey, Shawn Patrick
2010-12-01
'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost-effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.
Deep Learning for Computer Vision: A Brief Review
Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios
2018-01-01
Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, namely Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619
Evidence against global attention filters selective for absolute bar-orientation in human vision.
Inverso, Matthew; Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George
2016-01-01
The finding that an item of type A pops out from an array of distractors of type B typically is taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering bars of the opposite orientation in a centroid task.
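The centroid task described above has a simple ideal-observer baseline: an observer with a perfect top-down attention filter would average the positions of only the target items and ignore the distractors entirely. A minimal sketch of that baseline (illustrative only, not the authors' analysis code; the item and feature representation is an assumption):

```python
def attended_centroid(items, attend):
    """Ideal centroid of attended items.

    `items` is a list of (x, y, feature) tuples and `attend` is a
    predicate selecting the target feature (e.g. white vs. black items,
    or horizontal vs. vertical bars). A perfect global attention filter
    averages the positions of the selected items only.
    """
    pts = [(x, y) for x, y, f in items if attend(f)]
    if not pts:
        raise ValueError("no attended items in display")
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# Two white targets at (0, 0) and (2, 4), one black distractor.
display = [(0, 0, "white"), (2, 4, "white"), (1, -3, "black")]
cx, cy = attended_centroid(display, lambda f: f == "white")
```

Human centroid judgments can be compared against this baseline; the paper's finding is that the comparison degrades with set size for absolute bar-orientation but not for brightness or color.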
Computational approaches to vision
NASA Technical Reports Server (NTRS)
Barrow, H. G.; Tenenbaum, J. M.
1986-01-01
Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.
Human activity discrimination for maritime application
NASA Astrophysics Data System (ADS)
Boettcher, Evelyn; Deaver, Dawne M.; Krapels, Keith
2008-04-01
The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is investigating how motion affects the target acquisition model (NVThermIP) sensor performance estimates. This paper looks specifically at estimating sensor performance for the task of discriminating human activities on watercraft, and was sponsored by the Office of Naval Research (ONR). Traditionally, sensor models were calibrated using still images. While that approach is sufficient for static targets, video allows one to use motion cues to discern the type of human activity more quickly and accurately. This, in turn, affects estimated sensor performance, and these effects are measured in order to calibrate current target acquisition models for this task. The study employed an eleven-alternative forced choice (11AFC) human perception experiment to measure the difficulty of discriminating unique human activities on watercraft. A mid-wave infrared camera was used to collect video at night. A description of the construction of this experiment is given, including the data collection, image processing, and perception testing, and how contrast was defined for video. These results are applicable to evaluating sensor field performance for Anti-Terrorism and Force Protection (AT/FP) tasks for the U.S. Navy.
Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction
Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.
2014-01-01
The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Because web-based vision sensors support only low sampling rates and inertial sensors accumulate errors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of algorithms developed for calibrating the two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
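The Allan variance step mentioned above can be sketched generically. This is a plain non-overlapping Allan variance, not the authors' implementation; the point is that different inertial noise processes leave characteristic signatures as the averaging time grows (for pure white noise the Allan variance falls off as 1/tau), which is what allows them to be identified and modeled:

```python
import random

def allan_variance(x, dt, taus):
    """Non-overlapping Allan variance of signal x sampled at interval dt.

    For each averaging time tau, the signal is split into consecutive
    bins of tau seconds, each bin is averaged, and the Allan variance is
    half the mean squared difference between successive bin averages.
    """
    out = []
    for tau in taus:
        m = max(1, round(tau / dt))          # samples per averaging bin
        n_bins = len(x) // m
        if n_bins < 2:
            out.append(float("nan"))         # not enough data for this tau
            continue
        means = [sum(x[i * m:(i + 1) * m]) / m for i in range(n_bins)]
        diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_bins - 1)]
        out.append(0.5 * sum(diffs) / len(diffs))
    return out

# For white noise of variance sigma^2, the Allan variance at tau = m*dt
# is approximately sigma^2 / m, i.e. it falls off as 1/tau.
rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
avar = allan_variance(noise, dt=0.01, taus=[0.01, 0.1])
```

In practice the fitted noise parameters (angle/velocity random walk, bias instability) would then parameterize the process noise of the extended Kalman filter.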
Transforming revenue management.
Silveria, Richard; Alliegro, Debra; Nudd, Steven
2008-11-01
Healthcare organizations that want to undertake a patient administrative/revenue management transformation should: Define the vision with underlying business objectives and key performance measures. Strategically partner with key vendors for business process development and technology design. Create a program organization and governance infrastructure. Develop a corporate design model that defines the standards for operationalizing the vision. Execute the vision through technology deployment and corporate design model implementation.
A real time study of the human equilibrium using an instrumented insole with 3 pressure sensors.
Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
The present work deals with the study of human equilibrium using an ambulatory e-health system. One of the points on which we focus is fall risk when equilibrium control is lost. A specific postural learning model is presented, and an ambulatory instrumented insole is developed using 3 pressure sensors per foot, in order to determine the real-time displacement and velocity of the centre of pressure (CoP). An increase in these parameters signals a loss of physiological sensation, usually of vision or of the inner ear. The results are compared to those obtained from classical, more complex systems.
How much crosstalk can be allowed in a stereoscopic system at various grey levels?
NASA Astrophysics Data System (ADS)
Shestak, Sergey; Kim, Daesik; Kim, Yongie
2012-03-01
We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human vision sensitivity. Instead of the linear just-noticeable-difference (JND) model known as Weber's law, we applied Barten's nonlinear model. The predicted crosstalk threshold varies with the background luminance. The calculated threshold values are in reasonable agreement with known experimental data. We calculated the perceptual threshold of crosstalk for various combinations of applied grey levels. This result can be applied to the assessment of grey-to-grey crosstalk compensation. Further computational analysis of the applied model predicts an increase in displayable image contrast with a reduction of the maximum displayable luminance.
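For contrast, the linear Weber-law baseline that the paper moves away from is easy to state: a crosstalk leakage is just noticeable when its luminance increment exceeds a fixed fraction of the background luminance. A minimal sketch (the Weber fraction k and the leakage model are illustrative assumptions, not figures from the paper; Barten's model replaces the constant k with a luminance-dependent sensitivity):

```python
def weber_crosstalk_visible(l_unintended, l_background, crosstalk_fraction, k=0.02):
    """Weber-law visibility check for stereoscopic crosstalk (sketch).

    The leakage luminance is the unintended-eye image luminance times
    the display's crosstalk fraction; under Weber's law it is just
    noticeable once it exceeds a fixed fraction k of the background
    luminance. k = 0.02 is a typical textbook Weber fraction, assumed
    here for illustration.
    """
    delta_l = crosstalk_fraction * l_unintended
    return delta_l > k * l_background

# 1% crosstalk of a 100 cd/m^2 image: visible against a dark 10 cd/m^2
# background, hidden against a bright 100 cd/m^2 background.
on_dark = weber_crosstalk_visible(100.0, 10.0, 0.01)
on_bright = weber_crosstalk_visible(100.0, 100.0, 0.01)
```

This already reproduces the qualitative finding that the crosstalk threshold varies with background luminance; the nonlinear model refines the prediction where the constant-k assumption breaks down.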
Modeling the target acquisition performance of active imaging systems
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Jacobs, Eddie L.; Halford, Carl E.; Vollmerhausen, Richard; Tofsted, David H.
2007-04-01
Recent developments in active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.
Time limited field of regard search
NASA Astrophysics Data System (ADS)
Flug, Eric; Maurer, Tana; Nguyen, Oanh-Tho
2005-05-01
Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which has given new formulations for the field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests with the intention of imposing realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions using both the TLS model and ACQUIRE.
Blur Clarified: A Review and Synthesis of Blur Discrimination
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J.
2011-01-01
Blur is an important attribute of human spatial vision, and sensitivity to blur has been the subject of considerable experimental research and theoretical modeling. Often these models have invoked specialized concepts or mechanisms, such as intrinsic blur, multiple channels, or blur estimation units. In this paper we review the several experimental studies of blur discrimination and find that they are in broad empirical agreement. Contrary to previous modeling efforts, however, we find that the essential features of blur discrimination are fully accounted for by a visible contrast energy model (ViCE), in which two spatial patterns are distinguished when the integrated difference between their masked local contrast energy responses reaches a threshold value.
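The ViCE decision rule can be stated compactly: two patterns are distinguished when the integrated difference between their masked local contrast energy responses reaches threshold. The sketch below uses squared local contrast as a crude stand-in for the masked contrast energy response (the model's contrast-sensitivity filtering and masking stages are omitted), so it is illustrative only:

```python
def vice_discriminable(resp_a, resp_b, threshold):
    """Threshold rule of a visible contrast energy model (sketch).

    `resp_a` and `resp_b` are local contrast responses of the two
    patterns. Squaring gives a simple energy term, and the patterns are
    distinguished when the integrated energy difference reaches the
    threshold value.
    """
    diff = sum(abs(a * a - b * b) for a, b in zip(resp_a, resp_b))
    return diff >= threshold

# A sharp edge vs. a blurred edge, expressed as local contrast profiles.
sharp = [0.0, 0.0, 1.0, 1.0]
blurred = [0.0, 0.3, 0.7, 1.0]
distinct = vice_discriminable(sharp, blurred, 0.5)
```

In the full model the threshold is fixed across blur conditions, which is what lets a single mechanism account for the blur discrimination data reviewed in the paper.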
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is an important image preprocessing technology, especially when an image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain an image with high dynamic range and high contrast for human perception or interpretation. It is therefore necessary to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. This knowledge holds that humans' perception of, and response to, an intensity fluctuation δu of a visual signal is weighted by the background stimulus u, rather than being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law, and Stevens's law. This paper integrates these three laws of human vision psychology and perception into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were performed on high-bit-depth star images captured in night scenes and on infrared images, both static images and video streams. To reduce jitter in video streams, the algorithm corrects the current frame's plateau value using the difference between the current and previous frames' plateau values. To account for random noise, the pixel value mapping depends not only on the current pixel but also on the pixels in a window surrounding it, usually 3×3. The results of the improved algorithms are evaluated by entropy analysis and visual perception analysis.
The experiments showed that the improved APE algorithms enhanced image quality: the target and the surrounding assistant targets could be identified easily, and noise was not amplified much. For low-quality images, the improved algorithms increase the information entropy and improve the aesthetic quality of the image and the video stream, while for high-quality images they do not degrade image quality.
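A minimal sketch of the plateau-equalization core and the frame-to-frame plateau smoothing described above (the adaptive plateau selection, the psychophysical weighting, and the 3×3 neighbourhood mapping are omitted; the function names and the smoothing weight are assumptions, not the paper's code):

```python
def plateau_equalize(image, plateau, n_levels=256, out_levels=256):
    """Plateau (clipped-histogram) equalization sketch.

    The histogram is clipped at `plateau` before building the cumulative
    mapping, which limits how strongly dominant grey levels stretch the
    output -- the core idea behind APE.
    """
    hist = [0] * n_levels
    for row in image:
        for v in row:
            hist[v] += 1
    clipped = [min(h, plateau) for h in hist]    # clip at the plateau value
    total = sum(clipped)
    cdf, acc = [], 0
    for h in clipped:                            # cumulative clipped histogram
        acc += h
        cdf.append(acc)
    return [[(out_levels - 1) * cdf[v] // total for v in row] for row in image]

def smooth_plateau(current, previous, alpha=0.5):
    """Pull the current frame's plateau value toward the previous frame's,
    reducing frame-to-frame jitter in a video stream as described above."""
    return previous + alpha * (current - previous)

# Tiny 2x4 frame: six dark pixels and two bright ones.
frame = [[0, 0, 0, 255],
         [0, 0, 0, 255]]
equalized = plateau_equalize(frame, plateau=4)
```

Clipping keeps the six-pixel dark peak from consuming nearly the whole output range, which is how the plateau limits noise amplification in flat regions.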
Basic design principles of colorimetric vision systems
NASA Astrophysics Data System (ADS)
Mumzhiu, Alex M.
1998-10-01
Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. Since the subject far exceeds the scope of a journal paper, only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
Color indirect effects on melatonin regulation
NASA Astrophysics Data System (ADS)
Mian, Tian; Liu, Timon C.; Li, Yan
2002-04-01
Color indirect effect (CIE) refers to the physiological and psychological effects of color that result from color vision. In previous papers, we studied CIE from the viewpoint of integrated Western and traditional Chinese medicine, put forward the color-autonomic-nervous-subsystem model (CAM), and provided its time-theory foundation. In this paper, we apply it to study light effects on melatonin regulation in humans, and suggest that it is CIE that mediates light effects on melatonin suppression.
The NASA Aviation Safety Program: Overview
NASA Technical Reports Server (NTRS)
Shin, Jaiwon
2000-01-01
In 1997, the United States set a national goal to reduce the fatal accident rate for aviation by 80% within ten years based on the recommendations by the Presidential Commission on Aviation Safety and Security. Achieving this goal will require the combined efforts of government, industry, and academia in the areas of technology research and development, implementation, and operations. To respond to the national goal, the National Aeronautics and Space Administration (NASA) has developed a program that will focus resources over a five year period on performing research and developing technologies that will enable improvements in many areas of aviation safety. The NASA Aviation Safety Program (AvSP) is organized into six research areas: Aviation System Modeling and Monitoring, System Wide Accident Prevention, Single Aircraft Accident Prevention, Weather Accident Prevention, Accident Mitigation, and Synthetic Vision. Specific project areas include Turbulence Detection and Mitigation, Aviation Weather Information, Weather Information Communications, Propulsion Systems Health Management, Control Upset Management, Human Error Modeling, Maintenance Human Factors, Fire Prevention, and Synthetic Vision Systems for Commercial, Business, and General Aviation aircraft. Research will be performed at all four NASA aeronautics centers and will be closely coordinated with Federal Aviation Administration (FAA) and other government agencies, industry, academia, as well as the aviation user community. This paper provides an overview of the NASA Aviation Safety Program goals, structure, and integration with the rest of the aviation community.
Foveal analysis and peripheral selection during active visual sampling
Ludwig, Casimir J. H.; Davies, J. Rhys; Eckstein, Miguel P.
2014-01-01
Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. PMID:24385588
Chiang, Peggy Pei-Chia; Xie, Jing; Keeffe, Jill Elizabeth
2011-04-25
To identify the critical success factors (CSF) associated with coverage of low vision services. Data were collected from a survey distributed to Vision 2020 contacts, government, and non-government organizations (NGOs) in 195 countries. The Classification and Regression Tree Analysis (CART) was used to identify the critical success factors of low vision service coverage. Independent variables were sourced from the survey: policies, epidemiology, provision of services, equipment and infrastructure, barriers to services, human resources, and monitoring and evaluation. Socioeconomic and demographic independent variables: health expenditure, population statistics, development status, and human resources in general, were sourced from the World Health Organization (WHO), World Bank, and the United Nations (UN). The findings identified that having >50% of children obtaining devices when prescribed (χ(2) = 44; P < 0.000), multidisciplinary care (χ(2) = 14.54; P = 0.002), >3 rehabilitation workers per 10 million of population (χ(2) = 4.50; P = 0.034), higher percentage of population urbanized (χ(2) = 14.54; P = 0.002), a level of private investment (χ(2) = 14.55; P = 0.015), and being fully funded by government (χ(2) = 6.02; P = 0.014), are critical success factors associated with coverage of low vision services. This study identified the most important predictors for countries with better low vision coverage. The CART is a useful and suitable methodology in survey research and is a novel way to simplify a complex global public health issue in eye care.
From spectral information to animal colour vision: experiments and concepts.
Kelber, Almut; Osorio, Daniel
2010-06-07
Many animals use the spectral distribution of light to guide behaviour, but whether they have colour vision has been debated for over a century. Our strong subjective experience of colour and the fact that human vision is the paradigm for colour science inevitably raises the question of how we compare with other species. This article outlines four grades of 'colour vision' that can be related to the behavioural uses of spectral information, and perhaps to the underlying mechanisms. In the first, even without an (image-forming) eye, simple organisms can compare photoreceptor signals to locate a desired light environment. At the next grade, chromatic mechanisms along with spatial vision guide innate preferences for objects such as food or mates; this is sometimes described as wavelength-specific behaviour. Here, we compare the capabilities of di- and trichromatic vision, and ask why some animals have more than three spectral types of receptors. Behaviours guided by innate preferences are then distinguished from a grade that allows learning, in part because the ability to learn an arbitrary colour is evidence for a neural representation of colour. The fourth grade concerns colour appearance rather than colour difference: for instance, the distinction between hue and saturation, and colour categorization. These higher-level phenomena are essential to human colour perception but poorly known in animals, and we suggest how they can be studied. Finally, we observe that awareness of colour and colour qualia cannot be easily tested in animals.
NASA Astrophysics Data System (ADS)
Traxler, Lukas; Reutterer, Bernd; Bayer, Natascha; Drauschke, Andreas
2017-04-01
To treat cataract, intraocular lenses (IOLs) are implanted to replace the clouded human eye lens. Due to postoperative healing processes, the IOL can become displaced within the eye, which can degrade quality of vision. To test and characterize this effect, an IOL can be embedded in a model of the human eye. One informative measure is wavefront aberration. In this paper three different setups are investigated: the typical double-pass configuration (DP), a single-pass configuration (SP1) in which the measured light travels in the same direction as in DP, and a single-pass configuration (SP2) with reversed direction. All three setups correctly measure the aberrations of the eye; SP1 is found to be the simplest to set up and align. Because of its low complexity, it is the proposed method for wavefront measurement in model eyes.
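As background on how wavefront aberrations are quantified: they are commonly expressed as Zernike coefficients, and with orthonormal (Noll/ANSI) normalization the RMS wavefront error is simply the root-sum-square of the coefficients, piston excluded. A minimal sketch with illustrative coefficient values, not data from this paper:

```python
# RMS wavefront error from orthonormal Zernike coefficients (piston excluded).
# Coefficient values below are illustrative, in micrometers.
import math

def rms_wavefront_error(zernike_coeffs_um):
    """Root-sum-square of orthonormal Zernike coefficients."""
    return math.sqrt(sum(c * c for c in zernike_coeffs_um))

coeffs = {"defocus": 0.30, "astigmatism": 0.10, "coma": 0.05, "spherical": 0.02}
print(round(rms_wavefront_error(coeffs.values()), 3))  # prints: 0.321
```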
Human vision is attuned to the diffuseness of natural light
Morgenstern, Yaniv; Geisler, Wilson S.; Murray, Richard F.
2014-01-01
All images are highly ambiguous, and to perceive 3-D scenes, the human visual system relies on assumptions about what lighting conditions are most probable. Here we show that human observers' assumptions about lighting diffuseness are well matched to the diffuseness of lighting in real-world scenes. We use a novel multidirectional photometer to measure lighting in hundreds of environments, and we find that the diffuseness of natural lighting falls in the same range as previous psychophysical estimates of the visual system's assumptions about diffuseness. We also find that natural lighting is typically directional enough to override human observers' assumption that light comes from above. Furthermore, we find that, although human performance on some tasks is worse in diffuse light, this can be largely accounted for by intrinsic task difficulty. These findings suggest that human vision is attuned to the diffuseness levels of natural lighting conditions. PMID:25139864
Visions, Strategic Planning, and Quality--More than Hype.
ERIC Educational Resources Information Center
Kaufman, Roger
1996-01-01
Discusses the need to shift from the old models for organizational development to the new methods of quality management and continuous improvement, visions and visioning, and strategic planning, despite inappropriate criticisms they receive. (AEF)
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques, which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Playing to Your Strengths: Appreciative Inquiry in the Visioning Process
ERIC Educational Resources Information Center
Fifolt, Matthew; Stowe, Angela M.
2011-01-01
"Appreciative Inquiry" (AI) is a structured approach to visioning focused on reflection, introspection, and collaboration. Rooted in organizational behavior theory, AI was introduced in the early 1980s as a life-centric approach to human systems (Watkins and Mohr 2001). Since then, AI has been used widely within the business community;…
A New Vision for HRD to Improve Organizational Results
ERIC Educational Resources Information Center
Budd, Marjorie L.; Hannum, Wallace H.
2016-01-01
The intent of this article is to offer guidance based on experience and evidence that can enable organizations to be more effective in preparing their workforce. The authors believe organizations need a new vision, a new way to conceptualize the role of HRD (Human Resource Development) that functions within the organization, responsible for the…
Prophet of the Storm: Richard Wright and the Radical Tradition
ERIC Educational Resources Information Center
Campbell, Finley C.
1977-01-01
Asserts that Wright's importance emerges out of his literary identification with a specific sociological vision of the life and destiny of black people in particular and human kind in general. Examines this vision interpretatively as a philosophy, as a historical trend in American culture, and as a personal and ideological commitment forming the…
76 FR 37690 - CooperVision, Inc.; Filing of Color Additive Petitions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-28
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 73 [Docket Nos. FDA-2011-C-0344 and FDA-2011-C-0463] CooperVision, Inc.; Filing of Color Additive Petitions AGENCY... additive regulations be amended to provide for the safe use of 1,4- bis[4-(2-methacryloxyethyl)phenlyamino...
76 FR 49707 - CooperVision, Inc.; Filing of Color Additive Petitions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-11
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 73 [Docket Nos. FDA-2011-C-0344 and FDA-2011-C-0463] CooperVision, Inc.; Filing of Color Additive Petitions Correction In proposed rule document 2011-16089 appearing on page 37690 in the issue of Tuesday, June 28, 2011...
Two-Eyed Seeing into Environmental Education: Revealing Its "Natural" Readiness to Indigenize
ERIC Educational Resources Information Center
McKeon, Margaret
2012-01-01
Recent visions for environmental education now include a foundational acknowledgement that the well-being of humans and the environment are inseparable. This vision of environmental education, with a focus on interconnectedness as well as concepts of transformation, holism, caring, and responsibility, rooted in experiences of nature, community,…
ERIC Educational Resources Information Center
Gidley, Jennifer M.
2007-01-01
Rudolf Steiner and Ken Wilber claim that human consciousness is evolving beyond the "formal", abstract, intellectual mode toward a "post-formal", integral mode. Wilber calls this "vision-logic" and Steiner calls it "consciousness/spiritual soul". Both point to the emergence of more complex, dialectical,…
78 FR 66027 - Center for Scientific Review; Amended Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-04
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review; Amended Notice of Meeting Notice is hereby given of a change in the meeting of the Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section, October 03, 2013, 08:00 a.m. to October 04...
[The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].
Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei
2015-10-01
Under mesopic vision conditions, the spectral luminous efficiency function is a family of curves whose peak wavelength and intensity are affected by the light spectrum, background brightness, and other factors, so the visibility provided by a light source cannot be characterized by a single optical parameter. In this experiment, visual-cognition reaction time is used as the evaluation index. Visual cognition was tested with a vision-function method under different speeds and luminous environments. The light sources included high-pressure sodium, an electrodeless fluorescent lamp, and white LEDs at three color temperatures (ranging from 1958 to 5537 K). The background brightness values, between 1 and 5 cd·m⁻², correspond to the basic section of highway tunnel illumination and to general outdoor illumination, all within the mesopic range. The results show that, at the same speed and luminance, the visual-cognition reaction time for a high-color-temperature source is shorter than for a low-color-temperature source, and the reaction time at high speed is shorter than at low speed. At the final moment, however, the visual angle subtended by the target in the observer's visual field was larger at low speed than at high speed. Based on the MOVE model, the equivalent mesopic luminance was calculated for the different emission spectra and background brightnesses produced by the test sources.
Compared with the photopic result, the coefficient of variation (CV) of the reaction-time curve plotted against mesopic equivalent luminance is smaller. Under mesopic conditions, the discrepancy between the equivalent luminance of different light sources and their photopic luminance is one of the main causes of the differences in visual recognition. The emission spectrum peak of the GaN chip is close to the peak wavelength of the photopic luminous efficiency function. The lighting visual effect of white LEDs at high color temperature is better than at low color temperature, and better than that of the electrodeless fluorescent lamp. The visual effect of high-pressure sodium lighting is weak, because its spectral peak lies near the Na characteristic lines.
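The comparison above rests on the coefficient of variation (CV = standard deviation / mean): a smaller CV of the reaction-time curve means the mesopic equivalent luminance organizes the data more consistently than the photopic value does. A minimal sketch with hypothetical reaction times, not the study's measurements:

```python
# Coefficient of variation (CV) as a dimensionless spread measure, applied to
# two hypothetical reaction-time series (seconds). Data are illustrative only.
from statistics import mean, pstdev

def coefficient_of_variation(values):
    """CV = population standard deviation / mean."""
    return pstdev(values) / mean(values)

rt_photopic = [0.42, 0.55, 0.38, 0.61, 0.47]  # vs. photopic luminance
rt_mesopic  = [0.48, 0.50, 0.46, 0.52, 0.49]  # vs. mesopic equivalent luminance

cv_p = coefficient_of_variation(rt_photopic)
cv_m = coefficient_of_variation(rt_mesopic)
print(round(cv_p, 3), round(cv_m, 3))  # prints: 0.173 0.041
```

The smaller CV of the second series is the sense in which the mesopic characterization "fits better" in the abstract's conclusion.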
Cocchioni, M; Scuri, S; Morichetti, L; Petrelli, F; Grappasonni, I
2006-01-01
The article underlines the fundamental importance of the protection and promotion of environmental quality for human health. The evolution of fluvial monitoring techniques is traced from chemical and bacteriological analysis to the Fluvial Functionality Index (IFF). This evolution is important because it shows a methodological and cultural maturation, moving from an anthropocentric to an ecocentric vision. The target of this ecological vision is the re-establishment of the ecological functionality of rivers, abandoning the consumerist view of water as merely a usable resource. The importance of correct river monitoring is confirmed, even though a preventive approach remains the priority.
Bioelectronic retinal prosthesis
NASA Astrophysics Data System (ADS)
Weiland, James D.
2016-05-01
Retinal prostheses have been translated to clinical use over the past two decades. Currently, two devices have regulatory approval for the treatment of retinitis pigmentosa, and one device is in clinical trials for the treatment of age-related macular degeneration. These devices provide partial sight restoration, and patients use this improved vision in their everyday lives to navigate and to detect large objects. However, significant vision restoration will require both better technology and an improved understanding of the interaction between electrical stimulation and the retina. In particular, current retinal prostheses do not provide peripheral vision due to technical and surgical limitations, thus limiting the effectiveness of the treatment. This paper reviews recent results from human implant patients and presents technical approaches for peripheral vision.
Structuring the collaboration of science and service in pursuit of a shared vision.
Chorpita, Bruce F; Daleiden, Eric L
2014-01-01
The enduring needs of our society highlight the importance of a shared vision to improve human functioning and yield better lives for families and communities. Science offers a powerful strategy for managing the inevitable uncertainty in pursuit of these goals. This article presents ideas and examples of methods that could preserve the strengths of the two major paradigms in children's mental health, evidence-based treatments and individualized care models, but that also have the potential to extend their applicability and impact. As exemplified in some of the articles throughout this issue, new models to connect science and service will likely emerge from novel consideration of better ways to structure and inform collaboration within mental health systems. We contend that the future models for effective systems will involve increased attention to (a) client and provider developmental pathways, (b) explicit frameworks for coordinating people and the knowledge and other resources they use, and (c) a balance of evidence-based planning and informed adaptation. We encourage the diverse community of scientists, providers, and administrators in our field to come together to enhance our collective wisdom through consideration of and reflection on these concepts and their illustrations.
Adaptive Optics for the Human Eye
NASA Astrophysics Data System (ADS)
Williams, D. R.
2000-05-01
Adaptive optics can extend not only the resolution of ground-based telescopes, but also that of the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras used to image the living retina, cameras that are critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.
NASA Astrophysics Data System (ADS)
Koenderink, Jan
2011-03-01
The egg-rolling behavior of the graylag goose is an often quoted example of a fixed-action pattern. The bird will even attempt to roll a brick back to its nest! Despite excellent visual acuity, it apparently "takes a brick for an egg". Evolution optimizes utility, not veridicality. Yet textbooks take it for a fact that human vision evolved so as to approach veridical perception. How do humans manage to dodge the laws of evolution? I will show that they don't, but that human vision is an idiosyncratic user interface. By way of an example I consider the case of pictorial perception. Gleaning information from still images is an important human ability and is likely to remain so for the foreseeable future. I will discuss a number of instances of extreme non-veridicality and huge inter-observer variability. Despite their importance in applications (information dissemination, personnel selection, ...), such huge effects have remained undocumented in the literature, although they can be traced to artistic conventions. The reason appears to be that conventional psychophysics, by design, fails to address the qualitative, that is, the meaningful, aspects of visual awareness, whereas this is the very target of the visual arts.
An optimal state estimation model of sensory integration in human postural balance
NASA Astrophysics Data System (ADS)
Kuo, Arthur D.
2005-09-01
We propose a model for human postural balance, combining state feedback control with optimal state estimation. State estimation uses an internal model of body and sensor dynamics to process sensor information and determine body orientation. Three sensory modalities are modeled: joint proprioception, vestibular organs in the inner ear, and vision. These are mated with a two degree-of-freedom model of body dynamics in the sagittal plane. Linear quadratic optimal control is used to design state feedback and estimation gains. Nine free parameters define the control objective and the signal-to-noise ratios of the sensors. The model predicts statistical properties of human sway in terms of covariance of ankle and hip motion. These predictions are compared with normal human responses to alterations in sensory conditions. With a single parameter set, the model successfully reproduces the general nature of postural motion as a function of sensory environment. Parameter variations reveal that the model is highly robust under normal sensory conditions, but not when two or more sensors are inaccurate. This behavior is similar to that of normal human subjects. We propose that age-related sensory changes may be modeled with decreased signal-to-noise ratios, and compare the model's behavior with degraded sensors against experimental measurements from older adults. We also examine removal of the model's vestibular sense, which leads to instability similar to that observed in bilateral vestibular loss subjects. The model may be useful for predicting which sensors are most critical for balance, and how much they can deteriorate before posture becomes unstable.
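The structure of this model, optimal state feedback driven by an optimal state estimate from an internal model, is the classic LQG arrangement. The scalar sketch below illustrates it for a toy unstable "pendulum" plant; the dynamics, noise levels, and gains are illustrative assumptions, not the paper's two-degree-of-freedom sagittal model.

```python
# Minimal LQG sketch: linear-quadratic state feedback combined with a
# steady-state Kalman estimator, for a scalar unstable plant standing in for
# the inverted-pendulum-like body dynamics. All numbers are illustrative.

def lqr_gain(a, b, q, r, iters=500):
    """Steady-state discrete LQR gain via Riccati iteration."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return (b * p * a) / (r + b * p * b)

def kalman_gain(a, c, w, v, iters=500):
    """Steady-state Kalman gain via the dual Riccati iteration."""
    p = w
    for _ in range(iters):
        p = a * p * a + w - (a * p * c) ** 2 / (v + c * p * c)
    return (p * c) / (v + c * p * c)

a, b, c = 1.05, 0.1, 1.0        # unstable plant: x' = a x + b u, sensed via y = c x
K = lqr_gain(a, b, q=1.0, r=0.1)
L = kalman_gain(a, c, w=0.01, v=0.1)

x, x_hat = 1.0, 0.0             # true state (initial "sway") and its estimate
for _ in range(200):            # noise-free run of the closed loop
    y = c * x                           # sensor reading
    x_hat = x_hat + L * (y - c * x_hat) # measurement update of the estimate
    u = -K * x_hat                      # optimal feedback on the estimate
    x = a * x + b * u                   # true body dynamics
    x_hat = a * x_hat + b * u           # internal-model prediction

print(abs(x) < 1e-3)  # prints: True (the loop drives the sway to zero)
```

The paper's nine free parameters play the role of `q`, `r`, and the sensor signal-to-noise ratios `w`/`v` here; degrading a sensor corresponds to raising its measurement-noise variance `v`.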
Project Magnify: Increasing Reading Skills in Students with Low Vision
ERIC Educational Resources Information Center
Farmer, Jeanie; Morse, Stephen E.
2007-01-01
Modeled after Project PAVE (Corn et al., 2003) in Tennessee, Project Magnify is designed to test the idea that students with low vision who use individually prescribed magnification devices for reading will perform as well as or better than students with low vision who use large-print reading materials. Sixteen students with low vision were…
Development of embedded real-time and high-speed vision platform
NASA Astrophysics Data System (ADS)
Ouyang, Zhenxing; Dong, Yimin; Yang, Hua
2015-12-01
Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, in traditional high-speed vision platforms a personal computer (PC), whose large size makes it unsuitable for compact systems, is an indispensable component for human-computer interaction. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP-and-FPGA board is developed for implementing parallel image algorithms in the FPGA and sequential image algorithms in the DSP. Hence, ER-HVP Vision, measuring 320 mm x 250 mm x 87 mm, delivers these capabilities in a far more compact form. Experimental results indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible on this newly developed platform.
Assessment of OLED displays for vision research
Cooper, Emily A.; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E.; Norcia, Anthony M.
2013-01-01
Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function (“gamma correction”). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications. PMID:24155345
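The "gamma correction" finding above means measured luminance followed a power function of the digital input, L(v) = L_max (v / v_max)^gamma. A minimal sketch of recovering gamma from luminance measurements by a least-squares fit in log-log space, using synthetic data with an assumed exponent of 2.2 (the real displays' values are whatever the measurements yield):

```python
# Fit L = L_max * (v / v_max)**gamma to (input value, luminance) pairs by
# linear regression in log-log coordinates. Data below are synthetic.
import math

def fit_gamma(dac_values, luminances, v_max=255.0):
    """Return (gamma, L_max) for the power-law luminance model."""
    xs = [math.log(v / v_max) for v in dac_values]
    ys = [math.log(l) for l in luminances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    gamma = sxy / sxx                       # slope in log-log space
    l_max = math.exp(my - gamma * mx)       # intercept back-transformed
    return gamma, l_max

# Synthetic display: 100 cd/m^2 peak, gamma = 2.2 (a common CRT-like value).
dac = [32, 64, 96, 128, 160, 192, 224, 255]
lum = [100.0 * (v / 255.0) ** 2.2 for v in dac]
gamma, l_max = fit_gamma(dac, lum)
print(round(gamma, 2), round(l_max, 1))  # prints: 2.2 100.0
```

Verifying that such a fit predicts the measured luminances is exactly the kind of display characterization the abstract describes.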
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system, built from a special combined fish-eye lens module, which is capable of producing 3D coordinate information for the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.
Development of an In Flight Vision Self-Assessment Questionnaire for Long Duration Space Missions
NASA Technical Reports Server (NTRS)
Byrne, Vicky E.; Gibson, Charles R.; Pierpoline, Katherine M.
2010-01-01
OVERVIEW A NASA Flight Medicine optometrist teamed with a human factors specialist to develop an electronic questionnaire for crewmembers to record their visual acuity test scores and perceived vision assessment. It will be implemented on the International Space Station (ISS) and administered as part of a suite of tools for early detection of potential vision changes. The goal of this effort was to rapidly develop a set of questions to help in early detection of visual (e.g., blurred vision) and/or non-visual (e.g., headaches) symptoms by allowing ISS crewmembers to reflect on their own current vision during their spaceflight missions. PROCESS An iterative process began with a one-page paper questionnaire for Space Shuttle missions, generated by the optometrist and updated by applying human factors design principles; it served as the baseline for an electronic questionnaire for ISS missions. Additional questions needed for ISS missions were included, and the information was organized to take advantage of the computer-based file format available. Human factors heuristics were applied to the prototype, which was then reviewed by the optometrist and procedures specialists, with rapid-turnaround updates that led to the final questionnaire. CONCLUSIONS With only about a month of lead time, a usable tool to collect crewmember assessments was developed through this cross-discipline collaboration. For a small expenditure of effort, the potential payoff is great. ISS crewmembers will complete the questionnaire 30 days into the mission, 100 days into the mission, and 30 days prior to return to Earth. The systematic layout may also facilitate physicians' later data extraction for quick interpretation. The data collected, along with other measures (e.g., retinal and ultrasound imaging) at regular intervals, could lead to earlier detection and treatment of related vision problems than using the other measures alone.
Do bees like Van Gogh's Sunflowers?
NASA Astrophysics Data System (ADS)
Chittka, Lars; Walker, Julian
2006-06-01
Flower colours have evolved over 100 million years to address the colour vision of their bee pollinators. In a much more rapid process, cultural (and horticultural) evolution has produced images of flowers that stimulate aesthetic responses in human observers. Colour vision and the analysis of visual patterns differ in several respects between humans and bees. Here, a behavioural ecologist and an installation artist present bumblebees with reproductions of paintings highly appreciated in Western society, such as Van Gogh's Sunflowers. We use this unconventional approach in the hope of raising awareness of between-species differences in visual perception, and of provoking thought about the implications of biology for human aesthetics and the relationship between object representation and its biological connotations.
Studying the lower limit of human vision with a single-photon source
NASA Astrophysics Data System (ADS)
Holmes, Rebecca; Christensen, Bradley; Street, Whitney; Wang, Ranxiao; Kwiat, Paul
2015-05-01
Humans can detect a visual stimulus of just a few photons. Exactly how few is not known--psychological and physiological research have suggested that the detection threshold may be as low as one photon, but the question has never been directly tested. Using a source of heralded single photons based on spontaneous parametric downconversion, we can directly characterize the lower limit of vision. This system can also be used to study temporal and spatial integration in the visual system, and to study visual attention with EEG. We may eventually even be able to investigate how human observers perceive quantum effects such as superposition and entanglement. Our progress and some preliminary results will be discussed.
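A standard way to frame "how few photons can we see" is the classic probability-of-seeing analysis (in the spirit of Hecht, Shlaer, and Pirenne, not this experiment's heralded-photon method): if seeing requires at least K photon absorptions and absorptions are Poisson-distributed, the seeing probability is a Poisson tail. The efficiency and threshold values below are illustrative assumptions.

```python
# Probability-of-seeing sketch: a flash delivers n photons on average, a
# fraction alpha is absorbed by rods, and seeing requires >= k_threshold
# absorptions. The seeing probability is then a Poisson tail probability.
# alpha and k_threshold values are illustrative, not measured quantities.
import math

def p_seeing(n_mean, alpha=0.1, k_threshold=1):
    """P(at least k_threshold absorptions) for Poisson mean alpha * n_mean."""
    lam = alpha * n_mean
    below = sum(math.exp(-lam) * lam ** k / math.factorial(k)
                for k in range(k_threshold))
    return 1.0 - below

# With a 1-absorption threshold, even a single incident photon is sometimes
# seen; higher thresholds shift the curve toward brighter flashes.
for n in (1, 10, 100):
    print(n, round(p_seeing(n, alpha=0.1, k_threshold=1), 3))
# prints: 1 0.095 / 10 0.632 / 100 1.0
```

Directly testing the single-photon case, as the abstract describes, amounts to pinning down the low end of this curve without the Poisson averaging that a classical flash introduces.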
Machine Learning, deep learning and optimization in computer vision
NASA Astrophysics Data System (ADS)
Canu, Stéphane
2017-03-01
As quoted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning covering basic motivations, ideas, models and optimization in deep learning for computer vision, identifying challenges and opportunities. It will focus on issues related with large scale learning that is: high dimensional features, large variety of visual classes, and large number of examples.
Robot vision system programmed in Prolog
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Hack, Ralf
1995-10-01
This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)
Grounding Robot Autonomy in Emotion and Self-awareness
NASA Astrophysics Data System (ADS)
Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita
Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional, and autonomic aspects in social robot systems. This vision has evolved from efforts to consolidate the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach is intended to yield universal theories of integrated (autonomic, emotional, cognitive) behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.
Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.
Deng, Fucheng; Zhu, Xiaorui; He, Chao
2017-09-13
Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is that of high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting traversable region in the outdoors. In the proposed method, an appearance model based on multivariate Gaussian is quickly constructed from a sample region in the left image adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its ability in the real-time navigation applications.
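The appearance-model step above can be sketched simply: fit a Gaussian to pixels sampled from a region assumed traversable, then label other pixels by their Mahalanobis distance to that model. For brevity this sketch uses a diagonal covariance (independent channels); the paper's model is a full multivariate Gaussian, and the sample pixels below are made up.

```python
# Gaussian appearance model for traversable-region classification, simplified
# to per-channel means and variances. Sample pixels are illustrative only.
from statistics import mean, pvariance

def fit_model(samples):
    """Per-channel mean and variance of an iterable of (r, g, b) pixels."""
    channels = list(zip(*samples))
    return ([mean(c) for c in channels],
            [pvariance(c) + 1e-6 for c in channels])  # epsilon avoids /0

def mahalanobis_sq(pixel, means, variances):
    """Squared Mahalanobis distance under a diagonal covariance."""
    return sum((p - m) ** 2 / v for p, m, v in zip(pixel, means, variances))

def is_traversable(pixel, model, threshold=9.0):  # roughly 3 sigma per channel
    means, variances = model
    return mahalanobis_sq(pixel, means, variances) < threshold

# "Sample region": grayish road pixels, as the vanishing-point heuristic
# might select; then classify one road-like and one grass-like pixel.
road_samples = [(120, 118, 115), (125, 122, 119), (118, 117, 116),
                (122, 120, 118), (119, 116, 114), (124, 121, 117)]
model = fit_model(road_samples)
print(is_traversable((121, 119, 116), model))  # prints: True
print(is_traversable((60, 140, 50), model))    # prints: False
```

In the paper's pipeline, the sample region is chosen adaptively from the vanishing point and dominant borders, and the classification seeds a self-supervised segmentation rather than a per-pixel threshold alone.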
Miyadera, Keiko; Acland, Gregory M; Aguirre, Gustavo D
2012-02-01
Considerable clinical and molecular variation has been observed in retinal blinding diseases in both humans and dogs. Different forms of retinal disease occur in specific breeds, caused by mutations segregating within each isolated breeding population. While molecular studies to find genes and mutations underlying retinal diseases in dogs have benefited largely from the phenotypic and genetic uniformity within a breed, within- and across-breed variations have often played a key role in elucidating the molecular basis. The increasing knowledge of phenotypic, allelic, and genetic heterogeneities in canine retinal degeneration has shown that the overall picture is rather more complicated than initially thought. Over the past 20 years, various approaches have been developed and tested to search for genes and mutations underlying genetic traits in dogs, depending on the availability of genetic tools and sample resources. Candidate gene, linkage analysis, and genome-wide association studies have so far identified 24 mutations in 18 genes underlying retinal diseases in at least 58 dog breeds. Many of these genes have been associated with retinal diseases in humans, thus providing opportunities to study their roles in pathogenesis and in normal vision. Application in therapeutic interventions such as gene therapy has proven successful initially in a naturally occurring dog model, followed by trials in human patients. Other genes whose human homologs have not been associated with retinal diseases are potential candidates for explaining equivalent human diseases and contribute to the understanding of their function in vision.
Human Neurological Development: Past, Present and Future
NASA Technical Reports Server (NTRS)
Pelligra, R. (Editor)
1978-01-01
Neurological development is considered the major human potential. Vision, vestibular function, intelligence, and nutrition are discussed, as well as the treatment of neurological dysfunctions, coma, and convulsive seizures.
Effects of light on brain and behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brainard, G.C.
1994-12-31
It is obvious that light entering the eye permits the sensory capacity of vision. The human species is highly dependent on visual perception of the environment and consequently, the scientific study of vision and visual mechanisms is a centuries-old endeavor. Relatively new discoveries are now leading to an expanded understanding of the role of light entering the eye: in addition to supporting vision, light has various nonvisual biological effects. Over the past thirty years, animal studies have shown that environmental light is the primary stimulus for regulating circadian rhythms, seasonal cycles, and neuroendocrine responses. As with all photobiological phenomena, the wavelength, intensity, timing, and duration of a light stimulus are important in determining its regulatory influence on the circadian and neuroendocrine systems. Initially, the effects of light on rhythms and hormones were observed only in sub-human species. Research over the past decade, however, has confirmed that light entering the eyes of humans is a potent stimulus for controlling physiological rhythms. The aim of this paper is to examine three specific nonvisual responses in humans which are mediated by light entering the eye: light-induced melatonin suppression, light therapy for winter depression, and enhancement of nighttime performance. This will serve as a brief introduction to the growing database which demonstrates how light stimuli can influence physiology, mood and behavior in humans. Such information greatly expands our understanding of the human eye and will ultimately change our use of light in the human environment.
Software model of a machine vision system based on the common house fly.
Madsen, Robert; Barrett, Steven; Wilcox, Michael
2005-01-01
The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of a real-time analog vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and each contains seven photoreceptors with a Gaussian profile. The spacing between photoreceptors is variable, providing more or less detail as needed. The cartridges provide information on the type of features they see, and neighboring cartridges share information to construct a feature map.
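The cartridge sampling described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the kernel size, sigma, and function names are assumed, and a real cartridge layout would tile many overlapping centers across the image.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2D Gaussian acceptance profile for one photoreceptor, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def cartridge_response(image, centers, size=7, sigma=1.5):
    """Sample the image at each photoreceptor center with a Gaussian profile."""
    half = size // 2
    responses = []
    for (r, c) in centers:
        patch = image[r - half:r + half + 1, c - half:c + half + 1]
        responses.append(float((patch * gaussian_kernel(size, sigma)).sum()))
    return responses
```

Seven such centers per cartridge, with variable spacing, would reproduce the variable-detail sampling the abstract describes.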
Takemura, Naohiro; Fukui, Takao; Inui, Toshio
2015-01-01
In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational or neural network model of reach-to-grasp movement explains the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control. PMID:26696874
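A minimal sketch of the hypothesized control rule (aperture equals estimated object size plus a margin scaled by combined motor and sensory uncertainty) might look like this. The variable names and noise values are assumptions, and the paper's full Kalman-filter model is richer; the sketch only shows why occlusion, by blocking the measurement update, inflates predicted variance and hence the margin.

```python
import math

def predict_motor_variance(var, process_noise, obs_noise, online_vision):
    """One simplified Kalman-filter cycle for aperture variance:
    the prediction step inflates variance; a visual observation
    (when online vision is available) shrinks it."""
    var_pred = var + process_noise
    if online_vision:
        gain = var_pred / (var_pred + obs_noise)
        var_pred = (1 - gain) * var_pred
    return var_pred

def target_aperture(object_size, motor_var, sensory_var, k=2.0):
    """Aperture = estimated size + safety margin scaled by total uncertainty."""
    margin = k * math.sqrt(motor_var + sensory_var)
    return object_size + margin
```

Running the variance update for several steps without vision yields a larger variance, and therefore a larger aperture, than with vision, qualitatively matching the occlusion effect.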
NASA Astrophysics Data System (ADS)
Näsilä, Antti; Holmlund, Christer; Mannila, Rami; Näkki, Ismo; Ojanen, Harri J.; Akujärvi, Altti; Saari, Heikki; Fussen, Didier; Pieroux, Didier; Demoulin, Philippe
2016-10-01
PICASSO - A PICo-satellite for Atmospheric and Space Science Observations is an ESA project led by the Belgian Institute for Space Aeronomy, in collaboration with VTT Technical Research Centre of Finland Ltd, Clyde Space Ltd. (UK) and Centre Spatial de Liège (BE). The test campaign for the engineering model of the PICASSO VISION instrument, a miniaturized nanosatellite spectral imager, has been successfully completed. The test results look very promising. The proto-flight model of VISION has also been successfully integrated and it is waiting for the final integration to the satellite platform.
Whole-eye transplantation: a look into the past and vision for the future
Bourne, D; Li, Y; Komatsu, C; Miller, M R; Davidson, E H; He, L; Rosner, I A; Tang, H; Chen, W; Solari, M G; Schuman, J S; Washington, K M
2017-01-01
Blindness afflicts ~39 million people worldwide. Retinal ganglion cells are unable to regenerate, making this condition irreversible in many cases. Whole-eye transplantation (WET) provides the opportunity to replace diseased retinal ganglion cells, as well as the entire optical system and surrounding facial tissue, if necessary. Recent success in face transplantation demonstrates that this may be a promising treatment for what has until now been an incurable condition. An animal model for WET must be established to further enhance our knowledge of nerve regeneration, immunosuppression, and technical aspects of surgery. A systematic review of the literature was performed to evaluate studies describing animal models for WET. Only articles in which the eye was completely enucleated and reimplanted were included. Study methods and results were compared. In the majority of published literature, WET can result in recovery of vision in cold-blooded vertebrates. There are a few instances in which mammalian WET models demonstrate survival of the transplanted tissue following neurovascular anastomosis and the ability to maintain brief electroretinogram activity in the new host. In this study we review cold-blooded vertebrate and mammalian animal models for WET and discuss prospects for future research toward translation to human eye transplantation. PMID:27983731
Lin, Tai-Chi; Zhu, Danhong; Hinton, David R.; Clegg, Dennis O.; Humayun, Mark S.
2017-01-01
Dysfunction and death of retinal pigment epithelium (RPE) and/or photoreceptors can lead to irreversible vision loss. The eye represents an ideal microenvironment for stem cell-based therapy. It is considered an “immune privileged” site, and the number of cells needed for therapy is relatively low for the area of focused vision (macula). Further, surgical placement of stem cell-derived grafts (RPE, retinal progenitors, and photoreceptor precursors) into the vitreous cavity or subretinal space has been well established. For preclinical tests, assessments of stem cell-derived graft survival and functionality are conducted in animal models by various noninvasive approaches and imaging modalities. In vivo experiments conducted in animal models based on replacing photoreceptors and/or RPE cells have shown survival and functionality of the transplanted cells, rescue of the host retina, and improvement of visual function. Based on the positive results obtained from these animal experiments, human clinical trials are being initiated. Despite such progress in stem cell research, ethical, regulatory, safety, and technical difficulties still remain a challenge for the transformation of this technique into a standard clinical approach. In this review, the current status of preclinical safety and efficacy studies for retinal cell replacement therapies conducted in animal models will be discussed. PMID:28928775
Getting ahead: forward models and their place in cognitive architecture.
Pickering, Martin J; Clark, Andy
2014-09-01
The use of forward models (mechanisms that predict the future state of a system) is well established in cognitive and computational neuroscience. We compare and contrast two recent, but interestingly divergent, accounts of the place of forward models in the human cognitive architecture. On the Auxiliary Forward Model (AFM) account, forward models are special-purpose prediction mechanisms implemented by additional circuitry distinct from core mechanisms of perception and action. On the Integral Forward Model (IFM) account, forward models lie at the heart of all forms of perception and action. We compare these neighbouring but importantly different visions and consider their implications for the cognitive sciences. We end by asking what kinds of empirical research might offer evidence favouring one or the other of these approaches.
A pseudoisochromatic test of color vision for human infants.
Mercer, Michele E; Drodge, Suzanne C; Courage, Mary L; Adams, Russell J
2014-07-01
Despite the development of experimental methods capable of measuring early human color vision, we still lack a procedure comparable to those used to diagnose the well-identified congenital and acquired color vision anomalies in older children, adults, and clinical patients. In this study, we modified a pseudoisochromatic test to make it more suitable for young infants. Using a forced choice preferential looking procedure, 216 3-to-23-mo-old babies were tested with pseudoisochromatic targets that fell on either a red/green or a blue/yellow dichromatic confusion axis. For comparison, 220 color-normal adults and 22 color-deficient adults were also tested. Results showed that all babies and adults passed the blue/yellow target but many of the younger infants failed the red/green target, likely due to the interaction of lingering immaturities within the visual system and the small CIE vector distance within the red/green plate. However, older (17-23 mo) infants, color-normal adults, and color-defective adults all performed according to expectation. Interestingly, performance on the red/green plate was better among female infants, well exceeding the expected rate of genetic dimorphism between genders. Overall, with some further modification, the test serves as a promising tool for the detection of color vision anomalies in early human life.
Whited, John D; Datta, Santanu K; Aiello, Lloyd M; Aiello, Lloyd P; Cavallerano, Jerry D; Conlin, Paul R; Horton, Mark B; Vigersky, Robert A; Poropatich, Ronald K; Challa, Pratap; Darkins, Adam W; Bursell, Sven-Erik
2005-12-01
The objective of this study was to compare, using a 12-month time frame, the cost-effectiveness of a non-mydriatic digital tele-ophthalmology system (Joslin Vision Network) versus traditional clinic-based ophthalmoscopy examinations with pupil dilation to detect proliferative diabetic retinopathy and its consequences. Decision analysis techniques, including Monte Carlo simulation, were used to model the use of the Joslin Vision Network versus conventional clinic-based ophthalmoscopy among the entire diabetic populations served by the Indian Health Service, the Department of Veterans Affairs, and the active duty Department of Defense. The economic perspective analyzed was that of each federal agency. Data sources for costs and outcomes included the published literature, epidemiologic data, administrative data, market prices, and expert opinion. Outcome measures included the number of true positive cases of proliferative diabetic retinopathy detected, the number of patients treated with panretinal laser photocoagulation, and the number of cases of severe vision loss averted. In the base-case analyses, the Joslin Vision Network was the dominant strategy in all but two of the nine modeled scenarios, meaning that it was both less costly and more effective. In the active duty Department of Defense population, the Joslin Vision Network would be more effective but would cost an extra $1,618 per additional patient treated with panretinal laser photocoagulation and an additional $13,748 per severe vision loss event averted. Based on our economic model, the Joslin Vision Network has the potential to be more effective than clinic-based ophthalmoscopy for detecting proliferative diabetic retinopathy and averting cases of severe vision loss, and may do so at lower cost.
Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition
Kheradpisheh, Saeed Reza; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée
2016-01-01
Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations. PMID:27601096
The New Face in Leadership: Emotional Intelligence
ERIC Educational Resources Information Center
Greenockle, Karen M.
2010-01-01
In the new millennium we are witnessing shifts in the global economy, competition, and human resource needs, thus requiring a leader who must not only have a vision that inspires others but also be able to execute it successfully to ensure the vision becomes a reality (Dunning, 2000). This emphasis on execution requires a reliance on teamwork and…
B. F. Skinner's Utopian Vision: Behind and beyond "Walden Two"
ERIC Educational Resources Information Center
Altus, Deborah E.; Morris, Edward K.
2009-01-01
This paper addresses B. F. Skinner's utopian vision for enhancing social justice and human wellbeing in his 1948 novel, "Walden Two." In the first part, we situate the book in its historical, intellectual, and social context of the utopian genre, address critiques of the book's premises and practices, and discuss the fate of intentional…
Jordan Reforms Public Education to Compete in a Global Economy
ERIC Educational Resources Information Center
Erickson, Paul W.
2009-01-01
The King of Jordan's vision for education is resulting in innovative projects for the country. King Abdullah II wants Jordan to develop its human resources through public education to equip the workforce with skills for the future. From King Abdullah II's vision, the Education Reform for a Knowledge Economy (ERfKE) project implemented by the…
76 FR 55321 - CooperVision, Inc.; Filing of Color Additive Petitions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 73 [Docket Nos. FDA-2011-C-0344 and FDA-2011-C-0463] CooperVision, Inc.; Filing of Color Additive Petitions Correction In proposed rule document C1-2011-16089 appearing on page 49707 in the issue of Thursday, August 11...
A Shared Vision of Human Excellence: Confucian Spirituality and Arts Education
ERIC Educational Resources Information Center
Tan, Charlene; Tan, Leonard
2016-01-01
Spirituality encourages the individual to make sense of oneself within a wider framework of meaning and see oneself as part of some larger whole. This article discusses Confucian spirituality by focusing on the spiritual ideals of "dao" (Way) and "he" (harmony). It is explained that the Way represents a shared vision of human…
Study of the Peculiarities of Color Vision in the Course of "Biophysics" in a Pedagogical University
ERIC Educational Resources Information Center
Petrova, Elena Borisovna; Sabirova, Fairuza Musovna
2016-01-01
The article substantiates the necessity of studying the peculiarities of color vision of human in the course "Biophysics" that have been integrated into many types of higher education institutions. It describes the experience of teaching this discipline in a pedagogical higher education institution. The article presents a brief review of…
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
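The idea of driving quantization by a perceptual metric rather than bit rate can be caricatured as below. The actual vision model in the paper includes masking effects and frequency-dependent sensitivities; this sketch substitutes a plain Minkowski (spatial/spectral) summation with user-supplied sensitivity weights, and all function names, the summation exponent, and the step sizes are hypothetical.

```python
import numpy as np

def perceptual_distortion(errors, sensitivities, beta=4.0):
    """Minkowski summation of quantization errors, each weighted by a
    visual-sensitivity factor standing in for the paper's masking model."""
    weighted = np.abs(errors) * sensitivities
    return float((weighted**beta).sum() ** (1.0 / beta))

def pick_quant_step(coeffs, sensitivities, target, steps=(1, 2, 4, 8, 16)):
    """Choose the coarsest quantization step whose predicted perceptual
    distortion stays within the target quality (falls back to the finest
    step if none qualifies)."""
    best = steps[0]
    for q in steps:  # ascending: coarser steps save more bits
        err = coeffs - q * np.round(coeffs / q)
        if perceptual_distortion(err, sensitivities) <= target:
            best = q
    return best
```

A looser perceptual target admits coarser quantization (fewer bits), which is the consistent-quality, lower-rate behavior the abstract describes.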
Proceedings of the 1986 IEEE international conference on systems, man and cybernetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-01-01
This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.
Dougherty, Stephen
2010-01-01
This essay examines the unconscious as modeled by cognitive science and compares it to the psychoanalytic unconscious. In making this comparison, the author underscores the important but usually overlooked fact that computational psychology and psychoanalytic theory are both varieties of posthumanism. He argues that if posthumanism is to advance a vision for our future that is no longer fixated on a normative image of the human, then its own normative claims about the primacy of Darwinian functioning must be disrupted and undermined through a renewed emphasis on its Freudian heritage.
Krueger, Ronald R; Uy, Harvey; McDonald, Jared; Edwards, Keith
2012-12-01
To demonstrate that ultrashort-pulse laser treatment in the crystalline lens does not form a focal, progressive, or vision-threatening cataract. An Nd:vanadate picosecond laser (10 ps) with prototype delivery system was used. Primates: 11 rhesus monkey eyes were prospectively treated at the University of Wisconsin (energy 25-45 μJ/pulse and 2.0-11.3M pulses per lens). Analysis of lens clarity and fundus imaging was assessed postoperatively for up to 4½ years (5 eyes). Humans: 80 presbyopic patients were prospectively treated in one eye at the Asian Eye Institute in the Philippines (energy 10 μJ/pulse and 0.45-1.45M pulses per lens). Analysis of lens clarity, best-corrected visual acuity, and subjective symptoms was performed at 1 month, prior to elective lens extraction. Bubbles were immediately seen, with resolution within the first 24 to 48 hours. Afterwards, the laser pattern could be seen with faint, noncoalescing, pinpoint micro-opacities in both primate and human eyes. In primates, long-term follow-up at 4½ years showed no focal or progressive cataract, except in 2 eyes with preexisting cataract. In humans, <25% of patients with central sparing (0.75 and 1.0 mm radius) lost 2 or more lines of best spectacle-corrected visual acuity at 1 month, and >70% reported acceptable or better distance vision and no or mild symptoms. Meanwhile, >70% without sparing (0 and 0.5 mm radius) lost 2 or more lines, and most reported poor or severe vision and symptoms. Focal, progressive, and vision-threatening cataracts can be avoided by lowering the laser energy, avoiding prior cataract, and sparing the center of the lens.
Sharpening vision by adapting to flicker.
Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A
2016-11-01
Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously, spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision, allowing people to localize inputs with greater precision and to read finer-scaled text, and selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.
Inoue, Makoto; Noda, Toru; Mihashi, Toshifumi; Ohnuma, Kazuhiko; Bissen-Miyajima, Hiroko; Hirakata, Akito
2011-04-01
To evaluate the quality of the image of a grating target placed in a model eye viewed through multifocal intraocular lenses (IOLs). Laboratory investigation. Refractive (NXG1 or PY60MV) or diffractive (ZM900 or SA60D3) multifocal IOLs were placed in a fluid-filled model eye with human corneal aberrations. A United States Air Force resolution target was placed on the posterior surface of the model eye. A flat contact lens or a wide-field contact lens was placed on the cornea. The contrasts of the gratings were evaluated under endoillumination and compared to those obtained through a monofocal IOL. The grating images were clear when viewed through the flat contact lens and through the central far-vision zone of the NXG1 and PY60MV, although those through the near-vision zone were blurred and doubled. The images observed through the central area of the ZM900 with the flat contact lens were slightly defocused, but the images in the periphery were very blurred. The contrast decreased significantly at low frequencies (P<.001). The images observed through the central diffractive zone of the SA60D3 were slightly blurred, although the images in the periphery were clearer than those of the ZM900. The images were less blurred with all of the refractive and diffractive IOLs when the wide-field contact lens was used. Refractive and diffractive multifocal IOLs blur the grating target, though less so with the wide-angle viewing system. The peripheral multifocal optical zone may have more influence on image quality with the contact lens system.
Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.
2017-01-01
Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. PMID:28360867
Rooney, Kevin K; Condia, Robert J; Loschky, Lester C
2017-01-01
Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one's fist at arm's length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation.
DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.
Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh
2017-09-01
Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves the state-of-the-art results.
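As a rough illustration of the location-bias idea described above (not the paper's actual layer, which is learned inside the network), a center-bias prior can be blended with a bottom-up saliency map; the function names, sigma, and blend weight here are hypothetical:

```python
import math

def center_bias(h, w, sigma=0.3):
    """Gaussian center-bias prior over an h x w grid, with sigma given
    as a fraction of image size. A hand-built stand-in for the learned
    location-biased layer described in the abstract."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    s2 = (sigma * max(h, w)) ** 2
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * s2))
             for x in range(w)] for y in range(h)]

def apply_bias(saliency, bias, weight=0.5):
    """Blend a bottom-up saliency map with the center-bias prior."""
    return [[(1 - weight) * s + weight * b
             for s, b in zip(srow, brow)]
            for srow, brow in zip(saliency, bias)]
```

Blending a flat saliency map with this prior pushes predicted fixations toward the image center, mimicking the centre-bias observed in eye-tracking data.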
Biology and therapy of inherited retinal degenerative disease: insights from mouse models
Veleri, Shobi; Lazar, Csilla H.; Chang, Bo; Sieving, Paul A.; Banin, Eyal; Swaroop, Anand
2015-01-01
Retinal neurodegeneration associated with the dysfunction or death of photoreceptors is a major cause of incurable vision loss. Tremendous progress has been made over the last two decades in discovering genes and genetic defects that lead to retinal diseases. The primary focus has now shifted to uncovering disease mechanisms and designing treatment strategies, especially inspired by the successful application of gene therapy in some forms of congenital blindness in humans. Both spontaneous and laboratory-generated mouse mutants have been valuable for providing fundamental insights into normal retinal development and for deciphering disease pathology. Here, we provide a review of mouse models of human retinal degeneration, with a primary focus on diseases affecting photoreceptor function. We also describe models associated with retinal pigment epithelium dysfunction or synaptic abnormalities. Furthermore, we highlight the crucial role of mouse models in elucidating retinal and photoreceptor biology in health and disease, and in the assessment of novel therapeutic modalities, including gene- and stem-cell-based therapies, for retinal degenerative diseases. PMID:25650393
Spatial structure of cone inputs to receptive fields in primate lateral geniculate nucleus
NASA Astrophysics Data System (ADS)
Reid, R. Clay; Shapley, Robert M.
1992-04-01
Human colour vision depends on three classes of cone photoreceptors, those sensitive to short (S), medium (M) or long (L) wavelengths, and on how signals from these cones are combined by neurons in the retina and brain. Macaque monkey colour vision is similar to human, and the receptive fields of macaque visual neurons have been used as an animal model of human colour processing [1]. P retinal ganglion cells and parvocellular neurons are colour-selective neurons in macaque retina and lateral geniculate nucleus. Interactions between cone signals feeding into these neurons are still unclear. On the basis of experimental results with chromatic adaptation, excitatory and inhibitory inputs from L and M cones onto P cells (and parvocellular neurons) were thought to be quite specific [2,3] (Fig. 1a). But these experiments with spatially diffuse adaptation did not rule out the 'mixed-surround' hypothesis: that there might be one cone-specific mechanism, the receptive field centre, and a surround mechanism connected to all cone types indiscriminately (Fig. 1e). Recent work has tended to support the mixed-surround hypothesis [4-8]. We report here the development of new stimuli to measure spatial maps of the linear L-, M- and S-cone inputs to test the hypothesis definitively. Our measurements contradict the mixed-surround hypothesis and imply cone specificity in both centre and surround.
Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena
2013-01-01
In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and builds a three-dimensional image of the surface from those data. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.
A note on image degradation, disability glare, and binocular vision
NASA Astrophysics Data System (ADS)
Rajaram, Vandana; Lakshminarayanan, Vasudevan
2013-08-01
Disability glare due to scattering of light causes a reduction in visual performance due to a luminous veil over the scene. This causes problems such as reduced contrast detection. In this note, we report a study of the effect of this veiling luminance on human stereoscopic vision. We measured the effect of glare on the horopter measured using the apparent fronto-parallel plane (AFPP) criterion. The empirical longitudinal horopter measured using the AFPP criterion was analyzed using the so-called analytic plot. The analytic plot parameters were used for quantitative measurement of binocular vision. Image degradation has a major effect on binocular vision as measured by the horopter. Under the conditions tested, it appears that if vision is sufficiently degraded then the addition of disability glare does not significantly cause any further compromise in depth perception as measured by the horopter.
Oestrogen, ocular function and low-level vision: a review.
Hutchinson, Claire V; Walker, James A; Davidson, Colin
2014-11-01
Over the past 10 years, a literature has emerged concerning the sex steroid hormone oestrogen and its role in human vision. Herein, we review evidence that oestrogen (oestradiol) levels may significantly affect ocular function and low-level vision, particularly in older females. In doing so, we have examined a number of vision-related disorders including dry eye, cataract, increased intraocular pressure, glaucoma, age-related macular degeneration and Leber's hereditary optic neuropathy. In each case, we have found oestrogen, or lack thereof, to have a role. We have also included discussion of how oestrogen-related pharmacological treatments for menopause and breast cancer can impact the pathology of the eye and a number of psychophysical aspects of vision. Finally, we have reviewed oestrogen's pharmacology and suggest potential mechanisms underlying its beneficial effects, with particular emphasis on anti-apoptotic and vascular effects. © 2014 Society for Endocrinology.
Combining path integration and remembered landmarks when navigating without vision.
Kalia, Amy A; Schrater, Paul R; Legge, Gordon E
2013-01-01
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.
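The gated, reliability-weighted averaging described above can be sketched as follows; the variances and the congruency gate threshold are illustrative assumptions, not values from the study:

```python
def combine_cues(landmark, path_int, var_l, var_p, gate=1.0):
    """Reliability-weighted average of a remembered-landmark position
    estimate and a path-integration estimate, applied only when the
    two cues are congruent (within `gate`); otherwise fall back on
    path integration alone. Units and thresholds are hypothetical."""
    if abs(landmark - path_int) > gate:
        return path_int, var_p          # cues conflict: ignore the landmark
    w_l = (1 / var_l) / (1 / var_l + 1 / var_p)
    est = w_l * landmark + (1 - w_l) * path_int
    var = 1 / (1 / var_l + 1 / var_p)   # combined estimate is more precise
    return est, var
```

The combined variance is smaller than either cue's alone, matching the finding that congruent-cue estimates were more precise than path integration by itself.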
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
Computer vision syndrome (CVS) - Thermographic Analysis
NASA Astrophysics Data System (ADS)
Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.
2017-01-01
The use of computers has grown exponentially in the last decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their great acceptance by users. The consequences and impact of uninterrupted tasks in front of computer screens or displays on visual health have drawn researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great efforts, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of them are blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inappropriate lubrication of the ocular surface when blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the existing differences in temperature variations of healthy ocular surfaces.
Translating Vision into Design: A Method for Conceptual Design Development
NASA Technical Reports Server (NTRS)
Carpenter, Joyce E.
2003-01-01
One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
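A minimal sketch of the identification step described above: feature vectors predicted from brain activity are matched against stored category features by similarity. The vectors and category names are toy placeholders, not the paper's CNN features:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return _dot(a, b) / math.sqrt(_dot(a, a) * _dot(b, b))

def identify(predicted, category_features):
    """Identify the seen/imagined category: pick the category whose
    stored visual-feature vector best matches the features predicted
    from fMRI patterns. Extends beyond decoder training because any
    category with a computed feature vector can be a candidate."""
    return max(category_features,
               key=lambda c: cosine(predicted, category_features[c]))
```

Because identification only compares predicted features with features computed for arbitrary object images, the candidate set is not limited to the decoder's training examples.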
Ecological determinants of health: food and environment on human health.
Li, Alice M L
2017-04-01
Human health and diseases are determined by many complex factors. Health threats from the human-animal-ecosystems interface (HAEI) and zoonotic diseases (zoonoses) pose a continuously increasing risk to public health, from emerging pathogens transmitted through contact with animals, food, water and contaminated environments. Immense challenges are imposed on ecological perspectives on food and eco-environments, including aquaculture, agriculture and the entire food system. Impacts of food and eco-environments on human health are examined alongside the importance of human interventions intended to lower adverse effects on biodiversity. The complexity of the conditions that act as ecological determinants of health is illuminated from different perspectives based on concepts, citations, examples and models, in conjunction with the harmful consequences of human-induced disturbances to our environments and food systems, together with the burdens of ecosystem disruption, environmental hazards and loss of ecosystem functions. Eco-health literacy should be further promoted under the "One Health" vision, with the "One World" concept under the Ecological Public Health Model, for sustaining our environments and the planet for all beings, echoing Confucian theory on the environmental ethics of ecological harmony.
Overload and Boredom: When the Humanities Turn to Noise.
ERIC Educational Resources Information Center
Walter, James A.
This paper argues that the current debate over humanities curricula has failed to articulate a vision for humanities education because it has turned on a liberal/conservative axis. It also contends that educators must stop debating elite versus democratic values or traditional versus contemporary problems in humanities education, and start…
NASA Astrophysics Data System (ADS)
Tercero, Carlos; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto; Takahashi, Ikuo
2011-10-01
There is a need to develop quantitative evaluation for simulator-based training in medicine. Photoelastic stress analysis can be used in human tissue modeling materials; this enables the development of simulators that measure respect for tissue. To apply this to endovascular surgery, we first present a model of a saccular aneurysm in which stress variation during micro-coil deployment is measured; then, relying on a bi-planar vision system, we measure a catheter trajectory and compare it to a reference trajectory considering respect for tissue. New photoelastic tissue modeling materials will expand the applications of this technology to other medical training domains.
Static facial expression recognition with convolution neural networks
NASA Astrophysics Data System (ADS)
Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei
2018-03-01
Facial expression recognition is a currently active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we have developed a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of the seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by its training, validation and test sets, and fine-tune on the extended Cohn-Kanade database. In order to reduce the overfitting of the models, we utilized different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
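A minimal sketch of inverting the kind of scattering model the abstract describes, assuming the per-pixel airlight A and transmission t have already been estimated (their estimation follows the paper's physical model and is not reproduced here):

```python
def descatter(observed, airlight, transmission, eps=1e-6):
    """Invert the per-pixel scattering model I = J*t + A*(1 - t),
    recovering the object radiance J. Here `airlight` stands in for
    the non-uniform backscatter of the active light source; inputs
    are flat lists of pixel values for simplicity."""
    return [(i - a * (1 - t)) / max(t, eps)
            for i, a, t in zip(observed, airlight, transmission)]
```

The descattered values would then serve as the input images for stereo matching, which is the role the paper assigns to this step.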
Vision rehabilitation in the case of blindness.
Veraart, Claude; Duret, Florence; Brelén, Marten; Oozeer, Medhy; Delbeke, Jean
2004-09-01
This article examines the various vision rehabilitation procedures that are available for early and late blindness. Depending on the pathology involved, several vision rehabilitation procedures exist, or are in development. Visual aids are available for low vision individuals, as are sensory aids for blind persons. Most noninvasive sensory substitution prostheses as well as implanted visual prostheses in development are reviewed. Issues dealing with vision rehabilitation are also discussed, such as problems of biocompatibility, electrical safety, psychosocial aspects, and ethics. Basic studies devoted to vision rehabilitation such as simulation in mathematical models and simulation of artificial vision are also presented. Finally, the importance of accurate rehabilitation assessment is addressed, and tentative market figures are given.
An overview of quantitative approaches in Gestalt perception.
Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H
2016-09-01
Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Pease, R. Adam
1995-01-01
MIDAS is a set of tools which allow a designer to specify the physical and functional characteristics of a complex system such as an aircraft cockpit, and analyze the system with regard to human performance. MIDAS allows for a number of static analyses such as military standard reach and fit analysis, display legibility analysis, and vision polars. It also supports dynamic simulation of mission segments with 3D visualization. MIDAS development has incorporated several models of human planning behavior. The CaseMIDAS effort has been to provide a simplified and unified approach to modeling task selection behavior. Except for highly practiced, routine procedures, a human operator exhibits a cognitive effort while determining what step to take next in the accomplishment of mission tasks. Current versions of MIDAS do not model this effort in a consistent and inclusive manner. CaseMIDAS also attempts to address this issue. The CaseMIDAS project has yielded an easy-to-use software module for case creation and execution which is integrated with existing MIDAS simulation components.
Three-dimensional displays and stereo vision
Westheimer, Gerald
2011-01-01
Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023
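For context, the basic geometry underlying such displays can be sketched as follows; `rendition_ratio` is a crude first-order illustration of the depth-rendition idea, assuming a 0.065 m interocular distance, and is not the paper's full treatment:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo triangulation: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def rendition_ratio(camera_baseline_m, interocular_m=0.065):
    """First-order depth-rendition factor: how reconstructed depth
    intervals scale when the capture baseline differs from the
    viewer's interocular distance. Illustrative assumption only."""
    return interocular_m / camera_baseline_m
```

When the stereo camera's baseline matches the viewer's interocular distance the ratio is 1 (veridical rendition); a wider baseline compresses rendered depth toward the viewer.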
NEIBank: Genomics and bioinformatics resources for vision research
Peterson, Katherine; Gao, James; Buchoff, Patee; Jaworski, Cynthia; Bowes-Rickman, Catherine; Ebright, Jessica N.; Hauser, Michael A.; Hoover, David
2008-01-01
NEIBank is an integrated resource for genomics and bioinformatics in vision research. It includes expressed sequence tag (EST) data and sequence-verified cDNA clones for multiple eye tissues of several species, web-based access to human eye-specific SAGE data through EyeSAGE, and comprehensive, annotated databases of known human eye disease genes and candidate disease gene loci. All expression- and disease-related data are integrated in EyeBrowse, an eye-centric genome browser. NEIBank provides a comprehensive overview of current knowledge of the transcriptional repertoires of eye tissues and their relation to pathology. PMID:18648525
The Role of Prototype Learning in Hierarchical Models of Vision
ERIC Educational Resources Information Center
Thomure, Michael David
2014-01-01
I conduct a study of learning in HMAX-like models, which are hierarchical models of visual processing in biological vision systems. Such models compute a new representation for an image based on the similarity of image sub-parts to a number of specific patterns, called prototypes. Despite being a central piece of the overall model, the issue of…
A New Vision for Institutional Research
ERIC Educational Resources Information Center
Swing, Randy L.; Ross, Leah Ewing
2016-01-01
A new vision for institutional research is urgently needed if colleges and universities are to achieve their institutional missions, goals, and purposes. The authors advocate for a move away from the traditional service model of institutional research to an institutional research function via a federated network model or matrix network model. When…
Guidance of visual attention by semantic information in real-world scenes
Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc
2014-01-01
Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: (1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and (2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.
Temporal dynamics of figure-ground segregation in human vision.
Neri, Peter; Levi, Dennis M
2007-01-01
The segregation of figure from ground is arguably one of the most fundamental operations in human vision. Neural signals reflecting this operation appear in cortex as early as 50 ms and as late as 300 ms after presentation of a visual stimulus, but it is not known when these signals are used by the brain to construct the percepts of figure and ground. We used psychophysical reverse correlation to identify the temporal window for figure-ground signals in human perception and found it to lie within the range of 100-160 ms. Figure enhancement within this narrow temporal window was transient rather than sustained as may be expected from measurements in single neurons. These psychophysical results prompt and guide further electrophysiological studies.
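A toy simulation in the spirit of the reverse-correlation method above: a simulated observer responds only to noise in a fixed temporal window, and averaging the noise on "yes" trials recovers that window. The observer model, window, and trial counts are illustrative, not the paper's:

```python
import random

def reverse_correlate(n_trials=2000, n_frames=8, window=(3, 5), seed=0):
    """Toy psychophysical reverse correlation over time: the simulated
    observer says 'yes' when the summed noise inside `window` is
    positive. Averaging the noise across 'yes' trials yields a
    classification image that peaks in the frames the observer used."""
    rng = random.Random(seed)
    sums = [0.0] * n_frames
    count = 0
    for _ in range(n_trials):
        noise = [rng.gauss(0, 1) for _ in range(n_frames)]
        if sum(noise[window[0]:window[1]]) > 0:   # observer says "yes"
            for i, v in enumerate(noise):
                sums[i] += v
            count += 1
    return [s / count for s in sums]  # classification image over frames
```

Frames inside the observer's window show a clear positive deflection while the others hover near zero, which is how a temporal window like the paper's 100-160 ms range is localized.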
Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.
Houpt, Joseph W; Bittner, Jennifer L
2018-07-01
Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences. Copyright © 2018 Elsevier Ltd. All rights reserved.
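A minimal sketch of the efficiency construct described above, using point estimates only (the paper's contribution is a hierarchical Bayesian treatment of these quantities with full uncertainty). The 2AFC conversion is a standard signal-detection assumption, not taken from the paper:

```python
from statistics import NormalDist

def dprime_2afc(pc):
    """Sensitivity d' from proportion correct in a 2AFC task:
    d' = sqrt(2) * z(PC), a standard signal-detection conversion."""
    return 2 ** 0.5 * NormalDist().inv_cdf(pc)

def efficiency(pc_human, pc_ideal):
    """Point-estimate efficiency: the squared ratio of human to ideal
    observer sensitivity on the same stimuli and task."""
    return (dprime_2afc(pc_human) / dprime_2afc(pc_ideal)) ** 2
```

Because the ideal observer's performance controls for stimulus information, this ratio can be compared across conditions, which is what the hierarchical model then estimates with proper uncertainty.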
Human gait recognition by pyramid of HOG feature on silhouette images
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Park, Jeanrok; Man, Hong
2013-03-01
As an uncommon biometric modality, human gait recognition has the great advantage of identifying people at a distance without high-resolution images. It has attracted much attention in recent years, especially in the fields of computer vision and remote sensing. In this paper, we propose a human gait recognition framework that consists of a reliable background subtraction method followed by pyramid of Histogram of Gradient (pHOG) feature extraction on the silhouette image, and a Hidden Markov Model (HMM) based classifier. Through background subtraction, the silhouette of human gait in each frame is extracted and normalized from the raw video sequence. After removing the shadow and noise in each region of interest (ROI), the pHOG feature is computed on the silhouette images. The pHOG features of each gait class are then used to train a corresponding HMM. In the test stage, the pHOG feature is extracted from each test sequence and used to calculate the posterior probability toward each trained HMM. Experimental results on the CASIA Gait Dataset B1 demonstrate that our proposed method can achieve a very competitive recognition rate.
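A minimal sketch of the HMM classification stage described above, assuming the pHOG features have been vector-quantized into discrete symbols (a common step for discrete HMMs); the states, symbols, and probabilities are toy values, not trained parameters:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Forward algorithm: log-likelihood of a discrete observation
    sequence under one gait-class HMM. `start` is the initial state
    distribution, `trans[q][s]` the transition probability, and
    `emit[s][o]` the emission probability of symbol o in state s."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[q] * trans[q][s] for q in range(n)) * emit[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Assign the test sequence to the gait class whose HMM gives
    the highest likelihood."""
    return max(models, key=lambda k: forward_loglik(obs, *models[k]))
```

Each gait class gets its own HMM trained on that class's pHOG sequences; at test time the sequence is scored against every model and the best-scoring class wins.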
A Simplified Method of Identifying the Trained Retinal Locus for Training in Eccentric Viewing
ERIC Educational Resources Information Center
Vukicevic, Meri; Le, Anh; Baglin, James
2012-01-01
In the typical human visual system, the macula allows for high visual resolution. Damage to this area from diseases, such as age-related macular degeneration (AMD), causes the loss of central vision in the form of a central scotoma. Since no treatment is available to reverse AMD, providing low vision rehabilitation to compensate for the loss of…
Managing the human resource: The challenge of change (notes from presentation)
Cary Latham
1999-01-01
How do we, as leaders, get people to embrace change rather than resist it? Getting people to embrace change is what psychologists call having a "superordinate goal." In plain English, a superordinate goal is an attainable vision. When psychologists talk about vision, they mean something that is no more than one to three sentences. It is not a silly mission...
Enhancing Perceptibility of Barely Perceptible Targets
1982-12-28
faster to low spatial frequencies (Breitmeyer, 1975; Tolhurst, 1975; Vassilev & Mitov, 1976; Breitmeyer & Ganz, 1975; Watson & Nachmias, 1977). If ground...response than those tuned to high spatial frequencies: they appear to have a shorter latency (Breitmeyer, 1975; Vassilev & Mitov, 1975; Lupp, Bauske, & Wolf...1958. Tolhurst, D. J. Sustained and transient channels in human vision. Vision Research, 1975, 15, 1151-1155. Vassilev, A., & Mitov, D. Perception
"As if Reviewing His Life": Bull Lodge's Narrative and the Mediation of Self-Representation
ERIC Educational Resources Information Center
Gone, Joseph P.
2006-01-01
In 1980, on behalf of the Gros Ventre people, George P. Horse Capture published "The Seven Visions of Bull Lodge, as Told by His Daughter, Garter Snake." "The Seven Visions" describes a lifetime of personal encounters with Powerful other-than-human Persons by the noted Gros Ventre warrior and ritual leader, Bull Lodge (ca.…
ERIC Educational Resources Information Center
Palit, Sukanchan
2016-01-01
Scientific vision and scientific understanding in today's world are on a path to new glory. Chemical Engineering science is witnessing drastic and rapid changes. The metamorphosis of human civilization in this century is faced with vicious challenges. Progress of Chemical Engineering science, the vision of technology and the broad chemical…
Objective evaluation of the visual acuity in human eyes
NASA Astrophysics Data System (ADS)
Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.
2009-08-01
Traditionally, the quality of human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes, located at a certain distance from the patient. Typically, we need to ensure a subtended visual angle of 5 minutes of arc, which implies an object 8.8 mm high located at 6 meters (normal or 20/20 visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eye. The mentioned process of identification of characters is carried out by means of the eye-brain system, giving an evaluation of the subjective visual performance. In this work we consider the eye as an isolated image-forming system, and show that it is possible to separate the function of the eye from that of the brain in this process. By knowing the impulse response of the eye's optical system, we can obtain in advance the image of the Snellen chart as formed by the eye alone. From this information, we obtain the objective performance of the eye as the optical system under test. This type of result might help to detect anomalous conditions of human vision, like the so-called "cerebral myopia".
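As a cross-check on the figures quoted above, the height of a target subtending a given visual angle follows from simple trigonometry, h = 2d·tan(θ/2). A small sketch with an illustrative function name (not from the paper); for 5 arcmin at 6 m it gives roughly 8.7 mm, in line with the abstract's 8.8 mm:

```python
import math

def letter_height_mm(distance_m: float, arcmin: float = 5.0) -> float:
    """Height (mm) of a target subtending `arcmin` minutes of arc at `distance_m`."""
    theta = math.radians(arcmin / 60.0)  # visual angle in radians
    return 2.0 * distance_m * math.tan(theta / 2.0) * 1000.0
```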
Superior underwater vision in a human population of sea gypsies.
Gislén, Anna; Dacke, Marie; Kröger, Ronald H H; Abrahamsson, Maths; Nilsson, Dan-Eric; Warrant, Eric J
2003-05-13
Humans are poorly adapted for underwater vision. In air, the curved corneal surface accounts for two-thirds of the eye's refractive power, and this is lost when air is replaced by water. Despite this, some tribes of sea gypsies in Southeast Asia live off the sea, and the children collect food from the sea floor without the use of visual aids. This is a remarkable feat when one considers that the human eye is not focused underwater and small objects should remain unresolved. We have measured the visual acuity of children in a sea gypsy population, the Moken, and found that the children see much better underwater than one might expect. Their underwater acuity (6.06 cycles/degree) is more than twice as good as that of European children (2.95 cycles/degree). Our investigations show that the Moken children achieve their superior underwater vision by maximally constricting the pupil (1.96 mm compared to 2.50 mm in European children) and by accommodating to the known limit of human performance (15-16 D). This extreme reaction-which is routine in Moken children-is completely absent in European children. Because they are completely dependent on the sea, the Moken are very likely to derive great benefit from this strategy.
Colour vision experimental studies in teaching of optometry
NASA Astrophysics Data System (ADS)
Ozolinsh, Maris; Ikaunieks, Gatis; Fomins, Sergejs
2005-10-01
The following aspects of human colour vision are included in experimental lessons for optometry students at the University of Latvia: characteristics of coloured stimuli (emissive and reflective) and the determination of their coordinates in different colour spaces; objective characterization of the transmission of colour stimuli through the optical system of the eye together with various types of appliances (lenses, prisms, Fresnel prisms); psychophysical determination of mono- and polychromatic stimulus perception, taking into account the physiology of the eye, retinal colour photoreceptor topography and spectral sensitivity, and the spatial and temporal characteristics of retinal receptive fields; and the ergonomics of visual perception, including the influence of illumination and glare effects and the testing of colour vision deficiencies.
2015-08-21
using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the...depth. The software application calibrates the cameras using the plane-based calibration model from the OpenCV calib3d module and allows the...[6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home
Peltzer, Karl; Phaswana-Mafuya, Nancy
2017-07-19
This study aims to estimate the association between visual impairment and low vision and sleep duration and poor sleep quality in a national sample of older adults in South Africa. A national population-based cross-sectional Study of Global Ageing and Adults Health (SAGE) wave 1 was conducted in 2008 with a sample of 3840 individuals aged 50 years or older in South Africa. The interviewer-administered questionnaire assessed socio-demographic characteristics, health variables, sleep duration, sleep quality, visual impairment, and vision. Results indicate that 10.0% of the sample reported short sleep duration (≤5 h), 46.6% long sleep (≥9 h), 9.3% poor sleep quality, 8.4% self-reported visual impairment (near and/or far vision), 43.2% measured low vision (near and/or far vision) (0.01-0.25 decimal), and 7.5% low vision (0.01-0.125 decimal). In fully adjusted logistic regression models, self-reported visual impairment was associated with short sleep duration and poor sleep quality, separately and together. Low vision was only associated with long sleep duration and poor sleep quality in unadjusted models. Self-reported visual impairment was related to both short sleep duration and poor sleep quality. Population data collections on sleep patterns should include visual impairment measures.
Re-Design and Beta Testing of the Man-Machine Integration Design and Analysis System: MIDAS
NASA Technical Reports Server (NTRS)
Shively, R. Jay; Rutkowski, Michael (Technical Monitor)
1999-01-01
The Man-Machine Integration Design and Analysis System (MIDAS) is a human factors design and analysis system that combines human cognitive models with 3D CAD models and rapid prototyping and simulation techniques. MIDAS allows designers to ask 'what if' types of questions early in concept exploration and development, prior to actual hardware development. The system outputs predictions of operator workload, situational awareness, and system performance, as well as graphical visualization of the cockpit designs interacting with models of the human in a mission scenario. Recently, MIDAS was re-designed to enhance functionality and usability. The goals driving the redesign include more efficient processing, a GUI interface, advances in the memory structures, and implementation of external vision models and audition. These changes were detailed in an earlier paper. Two Beta test sites with diverse applications have been chosen. One Beta test site is investigating the development of a new airframe and its interaction with the air traffic management system. The second Beta test effort will investigate 3D auditory cueing in conjunction with traditional visual cueing strategies, including panel-mounted and heads-up displays. The progress and lessons learned on each of these projects will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.G.; Mioduszewski, R.J.
The Chemical Computer Man: Chemical Agent Response Simulation (CARS) is a computer model and simulation program for estimating the dynamic changes in human physiological dysfunction resulting from exposures to chemical-threat nerve agents. The newly developed CARS methodology simulates agent exposure effects on the following five indices of human physiological function: mental, vision, cardio-respiratory, visceral, and limbs. Mathematical models and the application of basic pharmacokinetic principles were incorporated into the simulation so that for each chemical exposure, the relationship between exposure dosage, absorbed dosage (agent blood plasma concentration), and level of physiological response is computed as a function of time. CARS, as a simulation tool, is designed for users with little or no computer-related experience. The model combines maximum flexibility with a comprehensive user-friendly interactive menu-driven system. Users define an exposure problem and obtain immediate results displayed in tabular, graphical, and image formats. CARS has broad scientific and engineering applications, not only in technology for the soldier in the area of Chemical Defense, but also in minimizing animal testing in biomedical and toxicological research and the development of a modeling system for human exposure to hazardous-waste chemicals.
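The abstract does not give CARS's actual equations, but the described link between exposure dosage, plasma concentration, and time is commonly illustrated with a one-compartment pharmacokinetic model with first-order absorption (the Bateman equation). A minimal sketch; the rate constants and volume below are purely illustrative, not agent-specific values from CARS:

```python
import math

def plasma_concentration(t_h: float, dose_mg: float = 1.0, v_l: float = 40.0,
                         ka: float = 1.2, ke: float = 0.3) -> float:
    """One-compartment model, first-order absorption (Bateman equation).

    t_h: time since exposure (hours); ka/ke: absorption/elimination rate
    constants (1/h); v_l: volume of distribution (litres). All illustrative.
    """
    if abs(ka - ke) < 1e-12:
        raise ValueError("ka and ke must differ for this closed form")
    return (dose_mg * ka / (v_l * (ka - ke))) * (
        math.exp(-ke * t_h) - math.exp(-ka * t_h))
```

Concentration starts at zero, rises to a peak near t = ln(ka/ke)/(ka − ke), then decays; mapping such a curve onto dysfunction indices as a function of time is the kind of relationship the abstract describes.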
Vision: A Conceptual Framework for School Counselors
ERIC Educational Resources Information Center
Watkinson, Jennifer Scaturo
2013-01-01
Vision is essential to the implementation of the American School Counselor Association (ASCA) National Model. Drawing from research in organizational leadership, this article provides a conceptual framework for how school counselors can incorporate vision as a strategy for implementing school counseling programs within the context of practice.…