NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for automatically building a human body model skeleton. The algorithm is based on analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. The algorithm has been applied in a context where the user stands in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so that the main joints can be identified and the human model constructed. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
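The curvature step described above can be sketched as follows. This is a minimal illustration of computing signed curvature along a sampled closed contour (validated here on a test circle), not the paper's B-spline pipeline; the helper name is hypothetical.

```python
import numpy as np

def contour_curvature(x, y):
    """Signed curvature of a sampled closed contour.

    High-curvature points of a smooth parameterization of the body
    silhouette are where main parts (head, limbs) are detected in the
    approach the abstract describes.
    """
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), invariant under
    # regular reparameterization of the contour
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

# Sanity check: a circle of radius 5 has constant curvature 1/5 = 0.2.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
k = contour_curvature(5.0 * np.cos(t), 5.0 * np.sin(t))
```

On a real silhouette, peaks of `k` along the contour would be the candidate cut points between body parts.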
Vision-based navigation in a dynamic environment for virtual human
NASA Astrophysics Data System (ADS)
Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu
2004-06-01
Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments, and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. The behavior model is divided into three modules: vision, global planning, and local planning. Vision is the only channel through which the smart virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a path for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of the behavior model.
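The global-planning module can be illustrated with a minimal A* search on a 4-connected occupancy grid. This is a generic sketch, not the paper's implementation, and it omits the incremental replanning that D* adds for dynamic environments.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on an occupancy grid (1 = obstacle).

    Returns the path as a list of (row, col) cells, or None if the
    goal is unreachable.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set, came, g = [(h(start), start)], {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                      # reconstruct path backwards
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 1:    # blocked cell
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt], came[nxt] = ng, cur
                heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

D* would reuse `g` and repair only the affected portion of the path when `grid` changes, instead of searching from scratch.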
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
2004-10-01
The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities of human vision could in the near future be realized by a new diffractive-optical hardware design of optical imaging sensors: (1) illuminant-adaptive RGB-based color vision; (2) monocular 3D vision based on RGB data processing; and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye incorporates specific diffractive-optical elements (DOEs) in aperture and in image space and seems to execute the three jobs at, or not far behind, the loci of the images of objects.
Vision Systems with the Human in the Loop
NASA Astrophysics Data System (ADS)
Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard
2005-12-01
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, and they can adapt to their environment and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories, which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.
Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation
Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin
2013-01-01
With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144
The Common Vision: Parenting and Educating for Wholeness.
ERIC Educational Resources Information Center
Marshak, David
2003-01-01
Presents a spiritually based view of needs and potentials of children and youth, from birth through age 21, based on works of Rudolf Steiner, Sri Aurobindo Ghose, and Hazrat Inayat Khan. Focuses on their common vision of the true nature of human beings, the course of human growth, and the desired functions of child rearing and education.…
Quality grading of Atlantic salmon (Salmo salar) by computer vision.
Misimi, E; Erikson, U; Skavhaug, A
2008-06-01
In this study, we present a promising method for computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and feature extraction was then performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between the classifications made by human inspectors and by computer vision. The computer vision-based method correctly classified 90% of the salmon in the data set compared with the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
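The classifier-plus-validation scheme described above (a linear-discriminant threshold tested by leave-one-out cross-validation) can be sketched on synthetic data; the two-dimensional features here are stand-ins, not the salmon shape measurements.

```python
import numpy as np

def fisher_threshold(X, y):
    """Fit a 1-D Fisher linear discriminant and a midpoint threshold."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0.T) + np.cov(X1.T)                    # within-class scatter
    w = np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))    # projection direction
    thr = (X0 @ w).mean() / 2 + (X1 @ w).mean() / 2     # midpoint of projected means
    return w, thr

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation: hold out each sample in turn."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w, thr = fisher_threshold(X[mask], y[mask])
        hits += int((X[i] @ w > thr) == (y[i] == 1))
    return hits / len(y)

# Two synthetic, well-separated quality grades.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
acc = loocv_accuracy(X, y)
```

LOOCV is a natural fit here because grading data sets labeled by an expert inspector tend to be small, and every sample gets used for both training and testing.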
NASA Astrophysics Data System (ADS)
Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil
2018-04-01
Robotic agriculture requires smart, practical techniques that substitute machine intelligence for human intelligence. Strawberry is an important Mediterranean product, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should likewise be capable of vision-based disease identification. The objective of this paper is to verify in practice the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves that requires neither a neural network nor time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of infection are approximated much as a human brain would approximate them, a fuzzy decision maker classifies the leaves from images captured on site, mimicking the properties of human vision. Optimizing the fuzzy parameters for a typical strawberry production area at summer midday in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for the second segmented class, using a typical human instant classification approximation as the benchmark, yielding higher accuracy than identification by the human eye. The fuzzy-based classifier provides an approximate result for deciding whether a leaf is healthy or not.
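A fuzzy decision maker of the kind described can be sketched with triangular membership functions. The feature (mean leaf hue), the hue ranges, and the labels below are illustrative placeholders, not the paper's calibrated parameters.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def leaf_status(mean_hue):
    """Toy fuzzy classifier over a leaf's mean hue (degrees).

    Healthy leaves are assumed green, infected ones yellowish; the
    leaf takes the label with the highest membership degree.
    """
    healthy = tri(mean_hue, 70, 110, 150)   # green hues
    infected = tri(mean_hue, 20, 55, 90)    # yellowish hues
    return "healthy" if healthy >= infected else "infected"

green_label = leaf_status(115)
yellow_label = leaf_status(45)
```

Because the membership functions overlap (hues 70 to 90 here), borderline leaves get a graded rather than a hard decision, which is what lets the fuzzy parameters be tuned to a specific field and lighting condition.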
A vision for modernizing environmental risk assessment
In 2007, the US National Research Council (NRC) published a Vision and Strategy for [human health] Toxicity Testing in the 21st century. Central to the vision was increased reliance on high throughput in vitro testing and predictive approaches based on mechanistic understanding o...
Evidence-based medicine: the value of vision screening.
Beauchamp, George R; Ellepola, Chalani; Beauchamp, Cynthia L
2010-01-01
To review the literature for evidence-based medicine (EBM), to assess the evidence for effectiveness of vision screening, and to propose moving toward value-based medicine (VBM) as a preferred basis for comparative effectiveness research. Literature based evidence is applied to five core questions concerning vision screening: (1) Is vision valuable (an inherent good)?; (2) Is screening effective (finding amblyopia)?; (3) What are the costs of screening?; (4) Is treatment effective?; and (5) Is amblyopia detection beneficial? Based on EBM literature and clinical experience, the answers to the five questions are: (1) yes; (2) based on literature, not definitively so; (3) relatively inexpensive, although some claim benefits for more expensive options such as mandatory exams; (4) yes, for compliant care, although treatment processes may have negative aspects such as "bullying"; and (5) economic productive values are likely very high, with returns of investment on the order of 10:1, while human value returns need further elucidation. Additional evidence is required to ascertain the degree to which vision screening is effective. The processes of screening are multiple, sequential, and complicated. The disease is complex, and good visual outcomes require compliance. The value of outcomes is appropriately analyzed in clinical, human, and economic terms.
A physiologically-based model for simulation of color vision deficiency.
Machado, Gustavo M; Oliveira, Manuel M; Fernandes, Leandro A F
2009-01-01
Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color- and visualization-related tasks. This has a significant impact on their private and professional lives. We present a physiologically-based model for simulating color vision. Our model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. We have validated the proposed model through an experimental evaluation involving groups of color vision deficient individuals and individuals with normal color vision. Our model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals.
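In practice, simulation models of this family are applied as a 3x3 matrix in linear RGB. The sketch below uses the severity-1.0 protanopia matrix circulated with the authors' public dataset; the numeric values should be verified against the published source before reuse.

```python
import numpy as np

# Severity-1.0 protanopia simulation matrix (copied from the publicly
# released dataset accompanying the paper; verify before relying on it).
PROTAN = np.array([
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
])

def simulate_protanopia(rgb_linear):
    """Apply the simulation in linear RGB and clip back into gamut."""
    return np.clip(rgb_linear @ PROTAN.T, 0.0, 1.0)

white = simulate_protanopia(np.array([1.0, 1.0, 1.0]))
red = simulate_protanopia(np.array([1.0, 0.0, 0.0]))
```

Each matrix row sums to 1, so neutral colors are preserved (white maps to white), while saturated red collapses toward a dim yellowish tone, which is the characteristic protan confusion.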
NASA Astrophysics Data System (ADS)
As'ari, M. A.; Sheikh, U. U.
2012-04-01
The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia with activities of daily living (ADLs) promises a reduction in care costs, especially the costs of training and hiring human caregivers. The main problem, however, is that the kinds of sensing agents used in such systems vary with the intent (the type of ADL) and the environment where the activity is performed. In this paper we give an overview of the potential of a computer vision-based sensing agent in assistive systems and of how it can be generalized to be invariant to various kinds of ADLs and environments. We find that a gap exists in existing vision-based human action recognition methods for designing such systems, owing to the cognitive and physical impairments of people with dementia.
Advancing adverse outcome pathways for integrated toxicology and regulatory applications
Recent regulatory efforts in many countries have focused on a toxicological pathway-based vision for human health assessments relying on in vitro systems and predictive models to generate the toxicological data needed to evaluate chemical hazard. A pathway-based vision is equally...
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
Parallel inputs to memory in bee colour vision.
Horridge, Adrian
2016-03-01
In the 19th century, it was found that the attraction of bees to light was controlled by light intensity irrespective of colour, and a few critical entomologists inferred that the vision of bees foraging on flowers was unlike human colour vision. Therefore, quite justly, Professor Carl von Hess concluded in his book on the Comparative Physiology of Vision (1912) that bees do not distinguish colours in the way that humans enjoy. Immediately, Karl von Frisch, an assistant in the Zoology Department of the same University of Munich, set to work to show that bees do indeed have colour vision like humans, thereby initiating a new research tradition and setting off a decade of controversy that ended only with the death of Hess in 1923. Until 1939, several researchers continued the tradition of trying to untangle the mechanism of bee vision by repeatedly testing trained bees, but made little progress, partly because von Frisch and his legacy dominated the scene. The theory of trichromatic colour vision developed further after three types of receptors, sensitive to green, blue, and ultraviolet (UV), were demonstrated in the bee in 1964. Then, until the end of the century, all data were interpreted in terms of trichromatic colour space. Anomalies were nothing new, but eventually, after 1996, they led to the discovery that bees have a previously unknown type of colour vision based on a monochromatic measure and distribution of blue and measures of modulation in the green and blue receptor pathways. Meanwhile, in the 20th century, the search for a suitable rationalization and explorations of sterile culs-de-sac had filled the literature of bee colour vision, but were based on the wrong theory.
3 CFR 8635 - Proclamation 8635 of March 4, 2011. Save Your Vision Week, 2011
Code of Federal Regulations, 2012 CFR
2012-01-01
... Americans to take action to safeguard their eyesight. Vision is important to our everyday activities, and we... of Health and Human Services’ science-based agenda to prevent disease and promote health, our country...
To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind.
Grifantini, Kristina
2017-01-01
Humans have been using technology to improve their vision for many decades. Eyeglasses, contact lenses, and, more recently, laser-based surgeries are commonly employed to remedy vision problems, both minor and major. But options are far fewer for those who have not seen since birth or who have reached stages of blindness in later life.
Different ranking of avian colors predicted by modeling of retinal function in humans and birds.
Håstad, Olle; Odeen, Anders
2008-06-01
Only during the past decade have vision-system-neutral methods become common practice in studies of animal color signals. Consequently, much of the current knowledge on sexual selection is based directly or indirectly on human vision, which may or may not emphasize spectral information in a signal differently from the intended receiver. In an attempt to quantify this discrepancy, we used retinal models to test whether human and bird vision rank plumage colors similarly. Of 67 species, human and bird models disagreed in 26 as to which pair of patches in the plumage provides the strongest color contrast or which male in a random pair is the more colorful. These results were only partly attributable to human UV blindness. Despite confirming a strong correlation between avian and human color discrimination, we conclude that a significant proportion of the information in avian visual signals may be lost in translation.
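Retinal models of the kind compared above start from receptor quantum catches. The minimal sketch below uses invented placeholder spectra and sensitivity curves, purely to show why a UV-reflecting patch carries information for a four-receptor avian observer that a UV-blind human model never sees.

```python
import numpy as np

def quantum_catches(reflectance, illuminant, sensitivities):
    """Receptor quantum catches: Q_i = sum over wavelengths of
    R(lambda) * I(lambda) * S_i(lambda).

    `sensitivities` has one row per receptor type; rankings of colors
    differ between observers because different sensitivity sets weight
    the same reflectance spectrum differently.
    """
    return sensitivities @ (reflectance * illuminant)

# Toy 5-band spectrum; band 0 stands for ultraviolet.
illuminant = np.ones(5)
uv_patch = np.array([1.0, 0.1, 0.1, 0.1, 0.1])    # strongly UV-reflecting plumage
human = np.array([[0, 0, 1, 1, 0],                # three cones, zero UV sensitivity
                  [0, 1, 1, 0, 0],
                  [0, 1, 0, 0, 0]], float)
avian = np.vstack([[1, 1, 0, 0, 0], human])       # same cones plus a UV receptor
q_human = quantum_catches(uv_patch, illuminant, human)
q_avian = quantum_catches(uv_patch, illuminant, avian)
```

The UV receptor's catch dominates for this patch, so a bird model can rank it as highly contrasting while the human model, blind to band 0, ranks it as dull.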
Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati
2012-01-01
Ergonomic evaluation of visual demands becomes crucial for operators/users when rapid decision making is needed under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of a pilot's vision in a jet aircraft in a virtual environment, to demonstrate how the vision analysis tools of digital human modeling software can be used effectively for such a study. Three dynamic digital pilot models, representative of the smallest, average, and largest Indian pilot population, were generated from an anthropometric database and interfaced with a digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools such as view cones, eye view windows, blind spot areas, obscuration zones, and reflection zones were employed during the evaluation of visual fields. The vision analysis tool was also used to study kinematic changes of the pilot's body joints during a simulated gazing activity. From the present study, it can be concluded that the vision analysis tool of digital human modeling software is very effective for evaluating the position and alignment of different displays and controls in the workstation, based on their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.
Photopic transduction implicated in human circadian entrainment
NASA Technical Reports Server (NTRS)
Zeitzer, J. M.; Kronauer, R. E.; Czeisler, C. A.
1997-01-01
Despite the preeminence of light as the synchronizer of the circadian timing system, the phototransductive machinery in mammals which transmits photic information from the retina to the hypothalamic circadian pacemaker remains largely undefined. To determine the class of photopigments which this phototransductive system uses, we exposed a group (n = 7) of human subjects to red light below the sensitivity threshold of a scotopic (i.e. rhodopsin/rod-based) system, yet of sufficient strength to activate a photopic (i.e. cone-based) system. Exposure to this light stimulus was sufficient to reset significantly the human circadian pacemaker, indicating that the cone pigments which mediate color vision can also mediate circadian vision.
Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu
2018-06-22
The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output piezoelectric voltage, and the outputting signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and electricity power. The reliability is demonstrated over 200 light on-off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics human retina may have potential application in vision substitution.
Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement
Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.
2017-01-01
Purpose To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design Expert perspective. Methods An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975
A blind human expert echolocator shows size constancy for objects perceived by echoes.
Milne, Jennifer L; Anello, Mimma; Goodale, Melvyn A; Thaler, Lore
2015-01-01
Some blind humans make clicking noises with their mouth and use the reflected echoes to perceive objects and surfaces. This technique can operate as a crude substitute for vision, allowing human echolocators to perceive silent, distal objects. Here, we tested if echolocation would, like vision, show size constancy. To investigate this, we asked a blind expert echolocator (EE) to echolocate objects of different physical sizes presented at different distances. The EE consistently identified the true physical size of the objects independent of distance. In contrast, blind and blindfolded sighted controls did not show size constancy, even when encouraged to use mouth clicks, claps, or other signals. These findings suggest that size constancy is not a purely visual phenomenon, but that it can operate via an auditory-based substitute for vision, such as human echolocation.
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific to each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques, which, on the one hand, may require compromises in real-time performance and spatial precision but, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed.
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Object recognition based on Google's reverse image search and image similarity
NASA Astrophysics Data System (ADS)
Horváth, András
2015-12-01
Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision, which is based on continuous learning of object classes; a human requires years to learn a large taxonomy of objects that are neither disjoint nor independent. In this paper I present a system based on the Google image similarity algorithm and the Google image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.
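The classification idea can be sketched as similarity-based nearest-neighbor voting: a query image takes the majority label of its most similar database images. Cosine similarity and the toy feature vectors below are stand-ins for Google's unpublished similarity measure and image database.

```python
from collections import Counter

def classify_by_similarity(query, database, k=3):
    """Label a query by majority vote over its k most similar images.

    `database` holds (feature_vector, label) pairs; the vectors here
    are hypothetical descriptors, not real image features.
    """
    def cosine(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
        return num / den if den else 0.0
    # Rank the whole database by similarity to the query, most similar first.
    ranked = sorted(database, key=lambda item: cosine(query, item[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

db = [([1.0, 0.0], "cat"), ([0.9, 0.1], "cat"), ([0.0, 1.0], "dog"),
      ([0.1, 0.9], "dog"), ([0.8, 0.2], "cat")]
label = classify_by_similarity([0.95, 0.05], db)
```

Because no class model is trained, new classes are added simply by inserting labeled examples, which matches the continuous-learning framing in the abstract.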
A Feasibility Study of View-independent Gait Identification
2012-03-01
...ice skates. For walking, the footprint records for single pixels form clusters that are well separated in space and time. (Any overlap of contact... Pattern Recognition 2007, 1-8. Cheng M-H, Ho M-F & Huang C-L (2008), "Gait Analysis for Human Identification Through Manifold Learning and HMM"... Learning and Cybernetics 2005, 4516-4521. Moeslund T B & Granum E (2001), "A Survey of Computer Vision-Based Human Motion Capture", Computer Vision...
Times Past, Times to Come: The Influence of the Past on Visions of the Future.
ERIC Educational Resources Information Center
Masson, Sophie
1997-01-01
Discussion of visions of the future that are based on past history highlights imaginative literature that deals with the human spirit rather than strictly technological innovations. Medieval society, the Roman Empire, mythological atmospheres, and the role of writers are also discussed. (LRW)
TOXICITY TESTING IN THE 21ST CENTURY: A VISION AND A STRATEGY
Krewski, Daniel; Acosta, Daniel; Andersen, Melvin; Anderson, Henry; Bailar, John C.; Boekelheide, Kim; Brent, Robert; Charnley, Gail; Cheung, Vivian G.; Green, Sidney; Kelsey, Karl T.; Kerkvliet, Nancy I.; Li, Abby A.; McCray, Lawrence; Meyer, Otto; Patterson, Reid D.; Pennie, William; Scala, Robert A.; Solomon, Gina M.; Stephens, Martin; Yager, James; Zeise, Lauren
2015-01-01
With the release of the landmark report Toxicity Testing in the 21st Century: A Vision and a Strategy, the U.S. National Academy of Sciences, in 2007, precipitated a major change in the way toxicity testing is conducted. It envisions increased efficiency in toxicity testing and decreased animal usage by transitioning from current expensive and lengthy in vivo testing with qualitative endpoints to in vitro toxicity pathway assays on human cells or cell lines using robotic high-throughput screening with mechanistic quantitative parameters. Risk assessment in the exposed human population would focus on avoiding significant perturbations in these toxicity pathways. Computational systems biology models would be implemented to determine the dose-response models of perturbations of pathway function. Extrapolation of in vitro results to in vivo human blood and tissue concentrations would be based on pharmacokinetic models for the given exposure condition. This practice would enhance the human relevance of test results and would cover more test agents than traditional toxicological testing strategies. As all the tools necessary to implement the vision are currently available or in an advanced stage of development, the key prerequisites to achieving this paradigm shift are a commitment to change in the scientific community, which could be facilitated by a broad discussion of the vision, and obtaining the necessary resources to enhance current knowledge of pathway perturbations and pathway assays in humans and to implement computational systems biology models. Implementation of these strategies would result in a new toxicity testing paradigm firmly based on human biology. PMID:20574894
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALVs) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task that requires programming a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: (1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and (2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system: using a neurally based computing substrate, it completes all necessary visual tasks in real time.
Mining Videos for Features that Drive Attention
2015-04-01
Psychology & Neuroscience Graduate Program, University of Southern California, 3641 Watt Way, HNB 10, Los Angeles, CA 90089, USA (e-mail: itti@usc.edu). [Fragmentary indexing excerpt: what drives attention remains a challenging question in neuroscience; since the onset of visual experience, a human or animal begins to form a subjective percept; model features have been added based on neuroscience discoveries of mechanisms of vision in the brain, as well as useful features from computer vision.]
A New Vision for HRD to Improve Organizational Results
ERIC Educational Resources Information Center
Budd, Marjorie L.; Hannum, Wallace H.
2016-01-01
The intent of this article is to offer guidance based on experience and evidence that can enable organizations to be more effective in preparing their workforce. The authors believe organizations need a new vision, a new way to conceptualize the role of HRD (Human Resource Development) that functions within the organization, responsible for the…
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
Collaboration between human and nonhuman players in Night Vision Tactical Trainer-Shadow
NASA Astrophysics Data System (ADS)
Berglie, Stephen T.; Gallogly, James J.
2016-05-01
The Night Vision Tactical Trainer - Shadow (NVTT-S) is a U.S. Army-developed training tool designed to improve critical Manned-Unmanned Teaming (MUMT) communication skills for payload operators in Unmanned Aircraft System (UAS) crews. The trainer is composed of several Government Off-The-Shelf (GOTS) simulation components and takes the trainee through a series of escalating engagements using tactically relevant, realistically complex scenarios involving a variety of manned, unmanned, aerial, and ground-based assets. The trainee is the only human player in the game and must collaborate, from a web-based mock operating station, with various non-human players via spoken natural language over simulated radio in order to execute the training missions successfully. Non-human players are modeled in two complementary layers: OneSAF provides basic background behaviors for entities, while NVTT provides higher-level models that control entity actions based on intent extracted from the trainee's spoken natural dialog with game entities. Dialog structure is modeled on Army standards for communication and verbal protocols. This paper presents an architecture that integrates the U.S. Army's Night Vision Image Generator (NVIG), One Semi-Automated Forces (OneSAF), a flight dynamics model, and Commercial Off-The-Shelf (COTS) speech recognition and text-to-speech products to effect an environment with sufficient entity counts and fidelity to enable meaningful teaching and reinforcement of critical communication skills. It further demonstrates the model dynamics and synchronization mechanisms employed to execute purpose-built training scenarios and to achieve ad-hoc collaboration on-the-fly between human and non-human players in the simulated environment.
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
Contact Lenses for Color Blindness.
Badawy, Abdel-Rahman; Hassan, Muhammad Umair; Elsherif, Mohamed; Ahmed, Zubair; Yetisen, Ali K; Butt, Haider
2018-06-01
Color vision deficiency (color blindness) is an inherited genetic ocular disorder. While no cure for this disorder currently exists, several methods can be used to increase the color perception of those affected. One such method is the use of color filtering glasses based on Bragg filters. While these glasses are effective, they are expensive, bulky, and incompatible with other vision correction eyeglasses. In this work, a rhodamine derivative is incorporated in commercial contact lenses to filter out specific wavelength bands (≈545-575 nm) to correct color blindness. The biocompatibility assessment of the dyed contact lenses in human corneal fibroblasts and human corneal epithelial cells shows no toxicity, and cell viability remains at 99% after 72 h. This study demonstrates the potential of the dyed contact lenses in wavelength filtering and color vision deficiency management. © 2018 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Computer vision in cell biology.
Danuser, Gaudenz
2011-11-23
Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.
Sustainable and Autonomic Space Exploration Missions
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Sterritt, Roy; Rouff, Christopher; Rash, James L.; Truszkowski, Walter
2006-01-01
Visions for future space exploration have long term science missions in sight, resulting in the need for sustainable missions. Survivability is a critical property of sustainable systems and may be addressed through autonomicity, an emerging paradigm for self-management of future computer-based systems based on inspiration from the human autonomic nervous system. This paper examines some of the ongoing research efforts to realize these survivable systems visions, with specific emphasis on developments in Autonomic Policies.
Drone-Augmented Human Vision: Exocentric Control for Drones Exploring Hidden Areas.
Erat, Okan; Isop, Werner Alexander; Kalkofen, Denis; Schmalstieg, Dieter
2018-04-01
Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., of exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision to provide a natural view into an otherwise occluded environment. The user's view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. This user interface is designed to reduce the cognitive load of the drone's flight control. The user can concentrate on the exploration of the inaccessible space, while flight control is largely delegated to the drone's autopilot system. We assess our system with a first experiment showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.
An Automated Classification Technique for Detecting Defects in Battery Cells
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2006-01-01
Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is a time-consuming task when compared to an inspection process conducted by a machine vision system. Human inspection is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view, as well as classified as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
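Frequency-domain features of the kind a 2-D FFT classifier consumes can be sketched in a few lines. The radial-band energy signature and the function name below are illustrative assumptions, not the paper's actual adaptive implementation:

```python
import numpy as np

def fft_features(patch, n_bins=4):
    # Magnitude spectrum of the patch, shifted so DC sits in the center.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)  # distance from the DC component
    edges = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    # Energy per radial frequency band: a compact texture signature
    # that a morphological feature-based classifier could threshold.
    return np.array([spec[(r >= lo) & (r < hi)].sum()
                     for lo, hi in zip(edges, edges[1:])])
```

A uniform (defect-free) patch concentrates all energy in the lowest band, while scratches or pits push energy into the higher bands.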
NASA Technical Reports Server (NTRS)
Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.
2006-01-01
The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment-based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.
Design and control of active vision based mechanisms for intelligent robots
NASA Technical Reports Server (NTRS)
Wu, Liwei; Marefat, Michael M.
1994-01-01
In this paper, we propose a design of an active vision system for intelligent robot applications. The system has degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets is suggested, using an evaluation function representing human visual responses to outside stimuli. We also characterize different visual tasks in the two cameras for vergence control purposes, and a phase-based method using binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence is discussed.
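A phase-based disparity estimate of the kind mentioned above can be sketched with 1-D phase correlation on matching scanlines from the two cameras. The function name and the normalization constant are illustrative; this is a generic sketch, not the paper's exact method:

```python
import numpy as np

def phase_disparity(left_row, right_row):
    # Estimate the horizontal shift between two 1-D scanlines.
    L = np.fft.fft(left_row)
    R = np.fft.fft(right_row)
    cross = L * np.conj(R)
    cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
    corr = np.real(np.fft.ifft(cross))    # peaks at the relative shift
    n = len(left_row)
    shift = int(np.argmax(corr))
    return shift if shift <= n // 2 else shift - n  # wrap to a signed shift
```

The signed output would drive vergence: a positive disparity asks the cameras to converge, a negative one to diverge.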
Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis
Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan
2015-01-01
Remote monitoring service for elderly persons is important as the aged populations in most developed countries continue growing. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for elderly persons; and (2) we design a novel algorithm for moving-object tracking based on motion history and energy images. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real time. Experimental results show that our technique improves data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate reaches 98.6% on average. PMID:25978761
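The motion-history-image update that this kind of tracking relies on can be written in a few lines of NumPy. The timestamp value `tau` and decay `delta` below are illustrative defaults, not the paper's parameters:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=32):
    # Pixels where motion was detected are stamped with the maximum
    # value tau; stationary pixels decay by delta, so old motion
    # fades out gradually and leaves a "trail" encoding recency.
    out = np.where(motion_mask, tau, np.maximum(mhi.astype(int) - delta, 0))
    return out.astype(np.uint8)
```

Per-frame motion masks (e.g. from background subtraction on the omni-directional image) are fed in sequentially; the resulting gradient image summarizes where and how recently motion occurred.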
Human vision is determined based on information theory.
Delgado-Bonal, Alfonso; Martín-Torres, Javier
2016-11-03
It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
Cideciyan, Artur V.; Aleman, Tomas S.; Boye, Sanford L.; Schwartz, Sharon B.; Kaushal, Shalesh; Roman, Alejandro J.; Pang, Ji-jing; Sumaroka, Alexander; Windsor, Elizabeth A. M.; Wilson, James M.; Flotte, Terence R.; Fishman, Gerald A.; Heon, Elise; Stone, Edwin M.; Byrne, Barry J.; Jacobson, Samuel G.; Hauswirth, William W.
2008-01-01
The RPE65 gene encodes the isomerase of the retinoid cycle, the enzymatic pathway that underlies mammalian vision. Mutations in RPE65 disrupt the retinoid cycle and cause a congenital human blindness known as Leber congenital amaurosis (LCA). We used adeno-associated virus-2-based RPE65 gene replacement therapy to treat three young adults with RPE65-LCA and measured their vision before and up to 90 days after the intervention. All three patients showed a statistically significant increase in visual sensitivity at 30 days after treatment localized to retinal areas that had received the vector. There were no changes in the effect between 30 and 90 days. Both cone- and rod-photoreceptor-based vision could be demonstrated in treated areas. For cones, there were increases of up to 1.7 log units (i.e., 50 fold); and for rods, there were gains of up to 4.8 log units (i.e., 63,000 fold). To assess what fraction of full vision potential was restored by gene therapy, we related the degree of light sensitivity to the level of remaining photoreceptors within the treatment area. We found that the intervention could overcome nearly all of the loss of light sensitivity resulting from the biochemical blockade. However, this reconstituted retinoid cycle was not completely normal. Resensitization kinetics of the newly treated rods were remarkably slow and required 8 h or more for the attainment of full sensitivity, compared with <1 h in normal eyes. Cone-sensitivity recovery time was rapid. These results demonstrate dramatic, albeit imperfect, recovery of rod- and cone-photoreceptor-based vision after RPE65 gene therapy. PMID:18809924
Misimi, E; Mathiassen, J R; Erikson, U
2007-01-01
A computer vision method was used to evaluate the color of Atlantic salmon (Salmo salar) fillets. Computer vision-based sorting of fillets according to their color was studied on 2 separate groups of salmon fillets. The images of fillets were captured using a high-resolution digital camera. Images of salmon fillets were then segmented into the regions of interest, analyzed in red, green, and blue (RGB) and CIE lightness, redness, and yellowness (Lab) color spaces, and classified according to the Roche color card industrial standard. Comparisons between the visual evaluations of fillet color made by a panel of human inspectors, according to the Roche SalmoFan lineal standard, and the color scores generated by the computer vision algorithm showed no significant differences between the methods. Overall, computer vision can be used as a powerful tool to sort fillets by color in a fast and nondestructive manner. The low cost of implementing computer vision solutions creates the potential to replace manual labor in fish processing plants with automation.
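The RGB-to-CIE-Lab step that such a pipeline relies on can be sketched as follows, assuming the standard sRGB transfer curve and a D65 reference white; the Roche color card classification itself is not reproduced here:

```python
import numpy as np

def srgb_to_lab(rgb):
    # rgb: three values in [0, 1]. First linearize the sRGB transfer curve.
    c = np.asarray(rgb, dtype=float)
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> CIE XYZ (sRGB primaries, D65 white point).
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    x, y, z = M @ lin / np.array([0.95047, 1.0, 1.08883])  # normalize by white
    # CIE Lab nonlinearity (cube root with a linear toe near zero).
    f = lambda t: t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)  # L*, a*, b*
```

Working in Lab rather than raw RGB makes the redness axis (a*) roughly perceptually uniform, which is why it suits a SalmoFan-style color score.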
Vision Algorithms Catch Defects in Screen Displays
NASA Technical Reports Server (NTRS)
2014-01-01
Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.
Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues
2014-10-28
Keywords: Stereopsis, Binocular Vision, Optometry, Depth Perception, 3D vision, 3D human factors, Stereoscopic displays, S3D, Virtual environment. Distribution A: Approved
Enhanced computer vision with Microsoft Kinect sensor: a review.
Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie
2013-10-01
With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.
A Review on Human Activity Recognition Using Vision-Based Method.
Zhang, Shugang; Wei, Zhiqiang; Nie, Jie; Huang, Lei; Wang, Shuang; Li, Zhen
2017-01-01
Human activity recognition (HAR) aims to recognize activities from a series of observations on the actions of subjects and the environmental conditions. The vision-based HAR research is the basis of many applications including video surveillance, health care, and human-computer interaction (HCI). This review highlights the advances of state-of-the-art activity recognition approaches, especially for the activity representation and classification methods. For the representation methods, we sort out a chronological research trajectory from global representations to local representations, and recent depth-based representations. For the classification methods, we conform to the categorization of template-based methods, discriminative models, and generative models and review several prevalent methods. Next, representative and available datasets are introduced. Aiming to provide an overview of those methods and a convenient way of comparing them, we classify existing literatures with a detailed taxonomy including representation and classification methods, as well as the datasets they used. Finally, we investigate the directions for future research.
A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context
Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco
2014-01-01
Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. PMID:24854209
Machine vision method for online surface inspection of easy open can ends
NASA Astrophysics Data System (ADS)
Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel
2006-10-01
Easy open can end manufacturing in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is a visual inspection made by human inspectors. Due to the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling). An automatic, online inspection system based on machine vision has therefore been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. This surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, provides interpretability so that operators can identify failure causes and reduce mean time to repair, and allows the minimum can end repair coating quality to be adjusted.
Jian, Bo-Lin; Peng, Chao-Chung
2017-06-15
Due to the direct influence of night vision equipment availability on the safety of night-time aerial reconnaissance, maintenance needs to be carried out regularly. Unfortunately, some defects are not easy to observe or are not even detectable by human eyes. As a consequence, this study proposed a novel automatic defect detection system for aviator's night vision imaging systems AN/AVS-6(V)1 and AN/AVS-6(V)2. An auto-focusing process consisting of a sharpness calculation and a gradient-based variable step search method is applied to achieve an automatic detection system for honeycomb defects. This work also developed a test platform for sharpness measurement. It demonstrates that the honeycomb defects can be precisely recognized and the number of the defects can also be determined automatically during the inspection. Most importantly, the proposed approach significantly reduces the time consumption, as well as human assessment error during the night vision goggle inspection procedures.
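The auto-focusing process described above combines a sharpness measure with a variable-step search. The Tenengrad-style gradient measure and the step-halving schedule below are common choices, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

def sharpness(img):
    # Tenengrad-style focus measure: mean squared gradient magnitude.
    # A well-focused image has strong edges and hence large gradients.
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def autofocus(measure, pos, step=8.0, min_step=0.25):
    # Gradient-based variable-step search: move toward higher sharpness
    # and halve the step whenever neither direction improves the score.
    best = measure(pos)
    while step >= min_step:
        for cand in (pos + step, pos - step):
            score = measure(cand)
            if score > best:
                pos, best = cand, score
                break
        else:
            step /= 2.0
    return pos
```

In a real inspection rig, `measure` would evaluate `sharpness` on a frame captured at each candidate lens position; honeycomb defects are then counted on the frame taken at the returned best-focus position.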
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more human-centered, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements had an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
Studying the lower limit of human vision with a single-photon source
NASA Astrophysics Data System (ADS)
Holmes, Rebecca; Christensen, Bradley; Street, Whitney; Wang, Ranxiao; Kwiat, Paul
2015-05-01
Humans can detect a visual stimulus of just a few photons. Exactly how few is not known--psychological and physiological research has suggested that the detection threshold may be as low as one photon, but the question has never been directly tested. Using a source of heralded single photons based on spontaneous parametric downconversion, we can directly characterize the lower limit of vision. This system can also be used to study temporal and spatial integration in the visual system, and to study visual attention with EEG. We may eventually even be able to investigate how human observers perceive quantum effects such as superposition and entanglement. Our progress and some preliminary results will be discussed.
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider's challenge, despite significant advancements in the fields of human factors and computer science. Licklider's vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider's vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.
Three-dimensional displays and stereo vision
Westheimer, Gerald
2011-01-01
Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023
NEIBank: Genomics and bioinformatics resources for vision research
Peterson, Katherine; Gao, James; Buchoff, Patee; Jaworski, Cynthia; Bowes-Rickman, Catherine; Ebright, Jessica N.; Hauser, Michael A.; Hoover, David
2008-01-01
NEIBank is an integrated resource for genomics and bioinformatics in vision research. It includes expressed sequence tag (EST) data and sequence-verified cDNA clones for multiple eye tissues of several species, web-based access to human eye-specific SAGE data through EyeSAGE, and comprehensive, annotated databases of known human eye disease genes and candidate disease gene loci. All expression- and disease-related data are integrated in EyeBrowse, an eye-centric genome browser. NEIBank provides a comprehensive overview of current knowledge of the transcriptional repertoires of eye tissues and their relation to pathology. PMID:18648525
Competency-based training model for human resource management and development in public sector
NASA Astrophysics Data System (ADS)
Prabawati, I.; Meirinawati; Oktariyanda, T. A.
2018-01-01
Human Resources (HR) are a very important factor in an organization, so human resources are required to have the ability, skill, or competence to carry out the vision and mission of the organization. Competence comprises a number of attributes attached to the individual, a combination of knowledge, skills, and behaviors that can be used as a means to improve performance. Given the demand that human resources have such knowledge, skills, and abilities, it is necessary to develop human resources in public organizations. One form of human resource development is Competency-Based Training (CBT). CBT focuses on three issues, namely skills, competencies, and competency standards. There are five strategies in the implementation of CBT: organizational scanning, strategic planning, competency profiling, competency gap analysis, and competency development. Finally, through CBT the employees within the organization can reduce or eliminate the differences between existing and potential performance, improving the knowledge, expertise, and skills that support achieving the vision and mission of the organization.
Episodic Reasoning for Vision-Based Human Action Recognition
Martinez-del-Rincon, Jesus
2014-01-01
Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretations and video analysis applications, few have realized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by only analyzing body postures. On the contrary, this task should be supported by profound knowledge of human agency nature and its tight connection to the reasons and motivations that explain it. The combination of this knowledge and the knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has in improving the accuracy of a computer vision system for human action recognition. This work also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning. PMID:24959602
Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review
Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul
2012-01-01
Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Resolution of this question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders based on vision and non-vision sensor technologies, as well as combinations of the two. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Synonyms of these phrases and the logical operators “and”, “or”, and “not” were also used in the article search procedure. Out of the 91 published articles found, this review identified 84 that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548
Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System
2016-01-01
This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour. PMID:27379165
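The core mean shift iteration underlying such trackers can be sketched in a few lines. The sketch below assumes a precomputed "backprojection" weight map (the likelihood that each pixel belongs to the target) and simplifies the paper's adaptive shape kernel to a fixed circular window:

```python
# Minimal mean shift mode-seeking sketch over a sparse weight map.
# weights: dict {(x, y): w}; the window mean is recomputed until it
# stops moving, which locates the local mode of the weight density.

def mean_shift(weights, start, radius=3, iters=20, eps=1e-3):
    cx, cy = start
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (x, y), w in weights.items():
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                num_x += w * x; num_y += w * y; den += w
        if den == 0:
            break
        nx, ny = num_x / den, num_y / den
        if (nx - cx) ** 2 + (ny - cy) ** 2 < eps ** 2:
            break
        cx, cy = nx, ny
    return cx, cy

# Weight blob centred at (10, 10); start the window slightly off-target.
blob = {(x, y): 1.0 / (1 + (x - 10) ** 2 + (y - 10) ** 2)
        for x in range(20) for y in range(20)}
cx, cy = mean_shift(blob, start=(7, 7))
print(cx, cy)  # converges close to the blob centre
```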
Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.
Xiong, Chunshui; Huang, Lei; Liu, Changping
2014-01-01
Most existing vision-based methods for gaze tracking require a tedious calibration process, in which subjects must fixate on one or several specific points in space. However, such cooperation is hard to obtain, especially from children and human infants. In this paper, a new calibration-free gaze tracking system and method is presented for the automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for measuring visual acuity in human infants.
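The feature and classification stages can be sketched as follows. The PCCR vector is the pupil centre minus the corneal-reflection (glint) position; a quadratic polynomial of it serves as the feature. For brevity the full GMM is reduced here to scoring two single-Gaussian classes with illustrative, assumed parameters (a real system would fit the mixture offline on labelled recordings):

```python
# Hedged sketch: PCCR polynomial gaze feature + Gaussian class scoring.
import math

def pccr_feature(pupil, glint):
    """Quadratic polynomial expansion of the pupil-glint vector."""
    dx, dy = pupil[0] - glint[0], pupil[1] - glint[1]
    return [dx, dy, dx * dx, dy * dy, dx * dy]

def log_gauss(x, mean, var):
    """Log-likelihood of x under a diagonal Gaussian."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, mean, var))

# Assumed class parameters (would be fitted offline on labelled data).
FIXATING = ([0.0, 0.0, 0.5, 0.5, 0.0], [1.0] * 5)
AVERTED = ([5.0, 5.0, 26.0, 26.0, 25.0], [4.0] * 5)

def classify(pupil, glint):
    f = pccr_feature(pupil, glint)
    return "fixating" if log_gauss(f, *FIXATING) > log_gauss(f, *AVERTED) else "averted"

print(classify((101.0, 80.5), (100.8, 80.2)))  # tiny offset
print(classify((105.0, 85.0), (100.0, 80.0)))  # large offset
```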
iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones
NASA Astrophysics Data System (ADS)
Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il
2013-02-01
The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. Virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction through hand and finger movements is achieved by means of hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual core processor and 1 GB RAM.
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
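The identification step described above can be sketched simply: a decoded feature vector is compared against the mean feature vector of each candidate category and the best-correlating category wins. The feature values below are toy stand-ins for the CNN-derived features; only the matching logic is illustrated:

```python
# Hedged sketch of category identification by feature correlation.

def pearson(u, v):
    """Pearson correlation between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def identify(decoded, category_features):
    """Return the category whose mean feature vector best matches `decoded`."""
    return max(category_features,
               key=lambda c: pearson(decoded, category_features[c]))

# Toy category prototypes (assumed); decoded vector resembles "cat".
cats = {"cat": [0.9, 0.1, 0.4], "car": [0.2, 0.8, 0.7], "chair": [0.1, 0.2, 0.9]}
print(identify([0.8, 0.2, 0.5], cats))
```

Because matching is done in feature space rather than by a per-category decoder, the candidate set can extend beyond the decoder's training examples, which is the point of the "generic" decoding approach.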
Human Detection from a Mobile Robot Using Fusion of Laser and Vision Information
Fotiadis, Efstathios P.; Garzón, Mario; Barrientos, Antonio
2013-01-01
This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method. PMID:24008280
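The final fusion step can be sketched as follows: each module's raw classifier score is mapped to a probability with a Platt-style sigmoid, and the two calibrated probabilities are combined under an independence assumption. The calibration parameters below are illustrative assumptions, not fitted values from the paper:

```python
# Hedged sketch of probability calibration + fusion of two detectors.
import math

def platt(score, a, b):
    """Map a raw SVM-style score to a calibrated probability."""
    return 1.0 / (1.0 + math.exp(a * score + b))

def fuse(laser_score, vision_score):
    p_laser = platt(laser_score, a=-1.5, b=0.2)    # assumed parameters,
    p_vision = platt(vision_score, a=-2.0, b=0.1)  # fitted offline in practice
    # Independent-evidence fusion: product of odds, renormalized.
    num = p_laser * p_vision
    den = num + (1 - p_laser) * (1 - p_vision)
    return num / den

print(fuse(1.8, 2.1))    # both modules confident -> high probability
print(fuse(-1.5, -2.0))  # both modules reject -> low probability
```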
Development of embedded real-time and high-speed vision platform
NASA Astrophysics Data System (ADS)
Ouyang, Zhenxing; Dong, Yimin; Yang, Hua
2015-12-01
Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms depend on a personal computer (PC) for human-computer interaction, and the PC's large size makes it unsuitable for compact systems. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed for implementing image parallel algorithms in the FPGA and image sequential algorithms in the DSP. Hence, ER-HVP Vision, with a size of 320 mm x 250 mm x 87 mm, offers a much more compact form factor. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels on this newly developed vision platform are feasible.
Computer vision-based classification of hand grip variations in neurorehabilitation.
Zariffa, José; Steeves, John D
2011-01-01
The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE
NASA Technical Reports Server (NTRS)
Crouch, Roger
2004-01-01
Viewgraphs on NASA's transition to its vision for space exploration are presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.
Adaptive Optics for the Human Eye
NASA Astrophysics Data System (ADS)
Williams, D. R.
2000-05-01
Adaptive optics can extend not only the resolution of ground-based telescopes, but also the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras used to image the living retina, cameras that are critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.
2017-01-01
AFRL-SA-WP-SR-2017-0001: Population Spotting Using “Big Data”: Validating the Human Performance Concept of Operations Analytic Vision
Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András
2017-07-01
The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors do not explicitly take dog-human differences in visual perception into consideration when designing their experiments. With an image manipulation program we altered stationary images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software also shows the effect of their lower visual acuity and brightness discrimination. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing or glancing to the left or right side. Half of the pictures were shown after they were altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with a slower response time than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between the perceptual abilities of dogs and humans by developing visual stimuli that fit dogs' visual capabilities more appropriately. Copyright © 2017 Elsevier B.V. All rights reserved.
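A rough per-pixel sketch of the kind of alteration described above: dichromatic (red-green-blind) rendering approximated by replacing R and G with their average, plus a coarser brightness scale standing in for weaker brightness discrimination. The authors' actual transform is not specified here; these coefficients are assumptions:

```python
# Hedged sketch of a "dog vision" pixel transform (assumed coefficients).

def dog_vision_pixel(r, g, b, levels=8):
    yellow = (r + g) // 2          # collapse the red-green axis
    step = 256 // levels
    q = lambda v: (v // step) * step  # quantize to a coarser brightness scale
    return (q(yellow), q(yellow), q(b))

# Red and green inputs become indistinguishable after the transform.
print(dog_vision_pixel(200, 40, 30))
print(dog_vision_pixel(40, 200, 30))
```

(Lower acuity would additionally require a spatial blur across neighbouring pixels, omitted here for brevity.)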
Relating Standardized Visual Perception Measures to Simulator Visual System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Sweet, Barbara T.
2013-01-01
Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
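One such relation can be made concrete with a worked example: 20/20 visual acuity resolves about 1 arcminute, so a display channel must supply roughly 60 pixels per degree before individual pixels become invisible to the pilot. Given a channel's horizontal resolution and field-of-view, we can check whether it is "eye-limited" (the threshold and function names below are illustrative):

```python
# Worked example: relating clinical visual acuity to display resolution.

def pixels_per_degree(h_pixels, h_fov_deg):
    return h_pixels / h_fov_deg

def eye_limited(h_pixels, h_fov_deg, acuity_arcmin=1.0):
    """True if pixel pitch is finer than the observer's resolution limit."""
    return pixels_per_degree(h_pixels, h_fov_deg) >= 60.0 / acuity_arcmin

print(eye_limited(1920, 40.0))  # 48 px/deg over a 40-degree channel
print(eye_limited(4096, 40.0))  # ~102 px/deg over the same channel
```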
Color Vision and Hue Categorization in Young Human Infants
ERIC Educational Resources Information Center
Bornstein, Marc H.; And Others
1976-01-01
The main objective of the present investigations was to determine whether or not young human infants see the physical spectrum in a categorical fashion as human adults and animals who possess color vision regularly do. (Author)
Chinellato, Eris; Del Pobil, Angel P
2009-06-01
The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.
Bhattacharya, Sudin; Zhang, Qiang; Carmichael, Paul L.; Boekelheide, Kim; Andersen, Melvin E.
2011-01-01
The approaches to quantitatively assessing the health risks of chemical exposure have not changed appreciably in the past 50 to 80 years, the focus remaining on high-dose studies that measure adverse outcomes in homogeneous animal populations. This expensive, low-throughput approach relies on conservative extrapolations to relate animal studies to much lower-dose human exposures and is of questionable relevance to predicting risks to humans at their typical low exposures. It makes little use of a mechanistic understanding of the mode of action by which chemicals perturb biological processes in human cells and tissues. An alternative vision, proposed by the U.S. National Research Council (NRC) report Toxicity Testing in the 21st Century: A Vision and a Strategy, called for moving away from traditional high-dose animal studies to an approach based on perturbation of cellular responses using well-designed in vitro assays. Central to this vision are (a) “toxicity pathways” (the innate cellular pathways that may be perturbed by chemicals) and (b) the determination of chemical concentration ranges where those perturbations are likely to be excessive, thereby leading to adverse health effects if present for a prolonged duration in an intact organism. In this paper we briefly review the original NRC report and responses to that report over the past 3 years, and discuss how the change in testing might be achieved in the U.S. and in the European Union (EU). EU initiatives in developing alternatives to animal testing of cosmetic ingredients have run very much in parallel with the NRC report. Moving from current practice to the NRC vision would require using prototype toxicity pathways to develop case studies showing the new vision in action. In this vein, we also discuss how the proposed strategy for toxicity testing might be applied to the toxicity pathways associated with DNA damage and repair. PMID:21701582
Basic design principles of colorimetric vision systems
NASA Astrophysics Data System (ADS)
Mumzhiu, Alex M.
1998-10-01
Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. Color measurement instruments such as colorimeters and spectrophotometers, used for production quality control, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
Spatial imaging in color and HDR: prometheus unchained
NASA Astrophysics Data System (ADS)
McCann, John J.
2013-03-01
The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950s, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.
SMALL COLOUR VISION VARIATIONS AND THEIR EFFECT IN VISUAL COLORIMETRY,
Machine Vision Giving Eyes to Robots. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1990
1990-01-01
This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)
Low Cost Night Vision System for Intruder Detection
NASA Astrophysics Data System (ADS)
Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.
2016-02-01
The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust, low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. They were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
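The histogram comparison at the core of such a detector can be sketched as follows: build a coarse joint RGB histogram per frame and flag an intruder when the histogram distance to an empty-scene reference exceeds a threshold. The bin count and threshold below are illustrative assumptions:

```python
# Hedged sketch of RGB-histogram change detection for intruder alerts.

def rgb_hist(pixels, bins=4):
    """Coarse joint RGB histogram, normalized; pixels are (r, g, b) tuples."""
    step = 256 // bins
    h = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        h[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    n = len(pixels)
    return [v / n for v in h]

def intruder(reference, frame, threshold=0.2):
    """L1 histogram distance between reference scene and current frame."""
    d = sum(abs(a - b) for a, b in zip(rgb_hist(reference), rgb_hist(frame)))
    return d > threshold

# Toy frames: a dark empty scene versus the same scene with bright pixels.
dark_scene = [(10, 10, 12)] * 90 + [(20, 18, 22)] * 10
with_person = [(10, 10, 12)] * 70 + [(90, 70, 60)] * 30
print(intruder(dark_scene, with_person))
print(intruder(dark_scene, dark_scene))
```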
A digital retina-like low-level vision processor.
Mertoguno, S; Bourbakis, N G
2003-01-01
This correspondence presents the basic design and the simulation of a low-level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k×m hexagonal identical autonomous cells that simultaneously execute certain low-level vision tasks. Thus, the hardware design and the simulation at the transistor level of the processing elements (PEs) of the retina-like processor and its simulated functionality with illustrative examples are provided in this paper.
Choi, Yun Jeong; Lim, Ji Young; Lee, Young Whee; Kim, Hwa Soon
2008-10-01
The purpose of this study was to develop visions of nursing service, nursing strategies and key performance indicators (KPIs) for an intensive care unit (ICU) based on a Balanced Scorecard (BSC). This study was undertaken by using methodological research. The development process consisted of four phases; the first phase was to develop the vision of nursing in ICUs. The second phase was to develop strategies according to 4 perspectives of a BSC. The third phase was to develop KPIs according to the 4 perspectives of BSC and the final phase was to combine the nursing visions, strategies and KPIs of ICUs. Two main visions of nursing service for ICUs were established. These were 'realization of harmonized professional nursing with human respect' and 'recovery of health through specialized nursing' respectively. In order to reach the aim of developing nursing visions, thirteen practical strategies and nineteen KPIs were developed by four perspectives of the BSC. The results will be used as objective fundamental data to attain business outcomes for the achievement of nursing visions and strategies of ICUs.
Are visual peripheries forever young?
Burnat, Kalina
2015-01-01
The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.
NASA Technical Reports Server (NTRS)
Rhodes, Bradley; Meck, Janice
2005-01-01
NASA's National Vision for Space Exploration includes human travel beyond low earth orbit and the ultimate safe return of the crews. Crucial to fulfilling the vision is the successful and timely development of countermeasures for the adverse physiological effects on human systems caused by long term exposure to the microgravity environment. Limited access to in-flight resources for the foreseeable future increases NASA's reliance on ground-based analogs to simulate these effects of microgravity. The primary analog for human based research will be head-down bed rest. By this approach NASA will be able to evaluate countermeasures in large sample sizes, perform preliminary evaluations of proposed in-flight protocols and assess the utility of individual or combined strategies before flight resources are requested. In response to this critical need, NASA has created the Bed Rest Project at the Johnson Space Center. The Project establishes the infrastructure and processes to provide a long term capability for standardized domestic bed rest studies and countermeasure development. The Bed Rest Project design takes a comprehensive, interdisciplinary, integrated approach that reduces the resource overhead of one investigator for one campaign. In addition to integrating studies operationally relevant for exploration, the Project addresses other new Vision objectives, namely: 1) interagency cooperation with the NIH allows for Clinical Research Center (CRC) facility sharing to the benefit of both agencies, 2) collaboration with our International Partners expands countermeasure development opportunities for foreign and domestic investigators as well as promotes consistency in approach and results, 3) to the greatest degree possible, the Project also advances research by clinicians and academia alike to encourage return to earth benefits.
This paper will describe the Project's top-level goals, organization and relationship to other Exploration Vision Projects, implementation strategy, address Project deliverables and schedules, and provide a status of bed rest campaigns presently underway.
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.
Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro
2018-01-01
In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using a convolutional neural network (CNN), hierarchical k-means clustering results of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
PMID:29593521
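The position channel of the abstract rests on hierarchical k-means over estimated self-positions. A small self-contained sketch of that one ingredient follows, using plain Lloyd's algorithm in NumPy; the two-level "room/place" framing and the cluster counts are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means (Lloyd's algorithm); returns labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        labels = np.argmin(((points[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

def hierarchical_positions(points, k_top=2, k_sub=2):
    """Two-level clustering: coarse 'rooms', then finer 'places' inside each."""
    top_labels, _ = kmeans(points, k_top)
    hierarchy = {}
    for j in range(k_top):
        sub = points[top_labels == j]
        if len(sub) == 0:
            continue
        _, sub_centroids = kmeans(sub, min(k_sub, len(sub)), seed=j + 1)
        hierarchy[j] = sub_centroids
    return top_labels, hierarchy

# Example: positions sampled around two hypothetical "rooms".
pts = np.vstack([np.random.default_rng(42).normal(loc, 0.5, (20, 2))
                 for loc in ([0.0, 0.0], [10.0, 10.0])])
top_labels, hierarchy = hierarchical_positions(pts)
```

In the actual system the second level is tied into hMLDA together with the vision and word channels; this sketch only shows the geometric part.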
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object finding task in a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that, the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design in engaging the human in the loop.
Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems
NASA Technical Reports Server (NTRS)
Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.
1992-01-01
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering
NASA Astrophysics Data System (ADS)
Onishi, Masaki; Yoda, Ikushi
In recent years, many human-tracking methods have been proposed in order to analyze human dynamic trajectories. These techniques apply to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach for tracking human positions by stereo image. We use a framework of two-step clustering, with the k-means method and fuzzy clustering, to detect human regions. In the initial clustering, the k-means method rapidly forms middle clusters from objective features extracted by stereo vision. In the final clustering, the fuzzy c-means method groups these middle clusters into human regions based on their attributes. Because fuzzy clustering expresses ambiguity, our proposed method clusters correctly even when many people are close to each other. The validity of our technique was evaluated in an experiment extracting the trajectories of doctors and nurses in the emergency room of a hospital.
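The second step of the abstract, fuzzy c-means over the middle clusters, can be sketched in a few lines of NumPy. This is a generic FCM implementation; the fuzzifier m = 2, the iteration count, and the 2-D toy points standing in for middle-cluster centroids are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuzzy_cmeans(points, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means. Returns (centroids, U), where U[i, j] is the
    degree of membership of point i in cluster j (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(points), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = U ** m                                    # fuzzified weights
        centroids = (w.T @ points) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))              # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

# Middle-cluster centroids from two people standing near two separate spots:
mids = np.array([[0.0, 0.0], [0.4, 0.1], [0.1, 0.5],
                 [5.0, 5.0], [5.3, 4.9], [4.8, 5.2]])
centroids, U = fuzzy_cmeans(mids, c=2)
```

The soft memberships in U are what lets such a method keep people separate when they stand close together: a point between two groups receives split membership rather than a hard, possibly wrong, label.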
Processing of Space Resources to Enable the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Curreri, Peter A.
2006-01-01
The NASA human exploration program as directed by the Vision for Space Exploration (G.W. Bush, Jan. 14, 2004) includes developing methods to process materials on the Moon and beyond to enable safe and affordable human exploration. Processing space resources was first popularized (O'Neill 1976) as a technically viable, economically feasible means to build city-sized habitats and multi-GWatt solar power satellites in Earth/Moon space. Although NASA studies found the concepts to be technically reasonable in the post-Apollo era (AMES 1979), the front-end costs exceeded the limits of national or corporate investment. In the last decade, analysis of space resource utilization has shown it to be economically justifiable even on a relatively small mission or commercial scenario basis. The Mars Reference Mission analysis (JSC 1997) demonstrated that production of return propellant on Mars can enable an order of magnitude decrease in the costs of human Mars missions. Analysis (by M. Duke 2003) shows that production of propellant on the Moon for the Earth-based satellite industries can be commercially viable after a human lunar base is established. Similar economic analysis (Rapp 2005) also shows large cost benefits for lunar propellant production for Mars missions and for the use of lunar materials for the production of photovoltaic power (Freundlich 2005). Recent technologies could enable much smaller initial costs, to achieve mass, energy, and life support self-sufficiency, than were achievable in the 1970s. If the Exploration Vision program is executed with a front-end emphasis on space resources, it could provide a path for human self-reliance beyond Earth orbit. This path can lead to an open, non-zero-sum future for humanity, with safer human competition and limitless growth potential. This paper discusses extension of the analysis for space resource utilization, to determine the minimum systems necessary for human self-sufficiency and growth off Earth.
Such an approach can provide a more compelling and comprehensive path to space resource utilization.
Development of machine-vision system for gap inspection of muskmelon grafted seedlings.
Liu, Siyao; Xing, Zuochang; Wang, Zifan; Tian, Subo; Jahun, Falalu Rabiu
2017-01-01
Grafting robots have been developed around the world, but some auxiliary work, such as gap inspection for grafted seedlings, still needs to be done by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image acquisition system consists of a CCD camera, a lens and a front white lighting source. The image of the inspected gap was processed and analyzed using HALCON 12.0 software. The recognition algorithm is based on the principle of deformable template matching. First, a template is created from an image of a qualified grafted seedling gap. Then the gap image of a grafted seedling is compared with the created template to determine their matching degree, which ranges from 0 to 1 according to their similarity: the less similar the gap is to the template, the smaller the matching degree. Finally, the gap is output as qualified or unqualified. If the matching degree of the grafted seedling gap and the template is less than 0.58, or no match is found, the gap is judged as unqualified; otherwise the gap is qualified. To test the gap inspection system, 100 muskmelon seedlings were grafted and inspected. Results showed that the machine-vision system agreed with human visual inspection on 98% of gap-qualification judgments, and its inspection speed can reach 15 seedlings·min-1. With this system, the gap inspection process in grafting can be fully automated, a key step toward fully automatic grafting robots.
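HALCON's deformable template matching is proprietary, so as a rough stand-in the 0-to-1 matching degree and the 0.58 decision rule can be illustrated with a rigid zero-mean normalized cross-correlation. Equal-size image and template, and the toy arrays below, are simplifying assumptions; a deformable matcher would additionally warp the template:

```python
import numpy as np

def matching_degree(image, template):
    """Zero-mean normalized cross-correlation, clipped to [0, 1]."""
    a = image.astype(float) - image.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:
        return 0.0          # degenerate (flat) input: treat as no match
    return max(0.0, float((a * b).sum() / denom))

def judge_gap(image, template, threshold=0.58):
    """The paper's decision rule: below 0.58 (or no match) -> unqualified."""
    return "qualified" if matching_degree(image, template) >= threshold else "unqualified"

template = np.tile(np.arange(8), (8, 1))        # stand-in for a qualified-gap template
print(judge_gap(template.copy(), template))     # qualified
print(judge_gap(np.full((8, 8), 5), template))  # unqualified
```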
Transformative Design Pedagogy: A Place-Based and Studio-Based Exploration of Culture
ERIC Educational Resources Information Center
Fay, Lindsey Lawry; Kim, Eun Young
2017-01-01
The discipline of interior design education is committed to providing diverse learning opportunities for examining the topic of culture while implementing design practices that respond to the needs of "all" humans (Hadjiyanni, 2013, p. v). In the Council for Interior Design Accreditation's Future Vision results (Council for…
The National Research Council of the United States National Academies of Science has recently released a document outlining a long-range vision and strategy for transforming toxicity testing from largely whole animal-based testing to one based on in vitro assays. “Toxicity Testin...
Bionic Vision-Based Intelligent Power Line Inspection System
Ma, Yunpeng; He, Feijia; Xu, Jinxin
2017-01-01
Detecting the threats of external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of the power lines, and the binocular visual model is used to calculate the 3D coordinates of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately locate the position of obstacles around power lines automatically, that the designed power line inspection system is effective in complex backgrounds, and that there were no missed detections under different conditions. PMID:28203269
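The abstract gives no formulas for the binocular model, but for a rectified stereo pair the depth of a matched feature follows the core relation Z = f·B/d, where d is the horizontal disparity between the two images. A minimal sketch; the focal length, baseline, and pixel coordinates below are made-up illustration values:

```python
def triangulate_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Depth from a rectified stereo pair: Z = f * B / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / disparity

# A feature matched (e.g. via SURF-style descriptors) at x = 650 px in the
# left image and x = 610 px in the right, with f = 800 px and a 0.2 m baseline:
print(triangulate_depth(800, 0.2, 650, 610))  # 4.0 (metres)
```

The same relation applied to points on both the obstacle and the nearest power line yields the 3D clearance between them.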
From wheels to wings with evolutionary spiking circuits.
Floreano, Dario; Zufferey, Jean-Christophe; Nicoud, Jean-Daniel
2005-01-01
We give an overview of the EPFL indoor flying project, whose goal is to evolve neural controllers for autonomous, adaptive, indoor micro-flyers. Indoor flight is still a challenge because it requires miniaturization, energy efficiency, and control of nonlinear flight dynamics. This ongoing project consists of developing a flying, vision-based micro-robot, a bio-inspired controller composed of adaptive spiking neurons directly mapped into digital microcontrollers, and a method to evolve such a neural controller without human intervention. This article describes the motivation and methodology used to reach our goal as well as the results of a number of preliminary experiments on vision-based wheeled and flying robots.
Data acquisition and analysis of range-finding systems for spacing construction
NASA Technical Reports Server (NTRS)
Shen, C. N.
1981-01-01
For future space missions, completely autonomous robotic machines will be required to free astronauts from routine chores of equipment maintenance, servicing of faulty systems, etc., and to extend human capabilities in hazardous environments full of cosmic and other harmful radiation. In places with high radiation and uncontrollable ambient illumination, TV-camera-based vision systems cannot work effectively. However, a vision system utilizing range information measured directly with a time-of-flight laser rangefinder can successfully operate in these environments. Such a system will be independent of proper illumination conditions, and the interfering effects of intense radiation of all kinds will be eliminated by the tuned input of the laser instrument. Processing the range data according to certain decision, stochastic estimation and heuristic schemes, the laser-based vision system will recognize known objects and thus provide sufficient information to the robot's control system, which can develop strategies for various objectives.
Draper Laboratory small autonomous aerial vehicle
NASA Astrophysics Data System (ADS)
DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.
1997-06-01
The Charles Stark Draper Laboratory, Inc. and students from Massachusetts Institute of Technology and Boston University cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. The entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground, where a ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.
A vision-based fall detection algorithm of human in indoor environment
NASA Astrophysics Data System (ADS)
Liu, Hao; Guo, Yongcai
2017-02-01
Elderly care is becoming more and more prominent in China as the population ages rapidly and the aging population is large. Falls, one of the biggest challenges in elderly guardianship systems, have a serious impact on both the physical and mental health of the aged. Based on feature descriptors such as the aspect ratio of the human silhouette, the velocity of the mass center, the moving distance of the head, and the angle of the final posture, a novel vision-based fall detection method is proposed in this paper. A fast three-frame median method of background modeling is also suggested. Compared with the conventional bounding-box and ellipse methods, the novel fall detection technique is applicable not only for recognizing falls ending in lying down but also for detecting falls ending in kneeling down or sitting down. In addition, numerous experimental results showed that the method achieves good recognition accuracy without adding time cost.
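Two of the listed ingredients, the three-frame median background model and the silhouette aspect-ratio descriptor, are simple enough to sketch directly in NumPy. The synthetic silhouettes below, and the idea that a ratio rising above 1 hints at a lying posture, are illustrative assumptions rather than the paper's calibrated thresholds:

```python
import numpy as np

def median_background(f1, f2, f3):
    """Three-frame median background model, as suggested in the abstract."""
    return np.median(np.stack([f1, f2, f3]), axis=0)

def silhouette_aspect_ratio(foreground_mask):
    """Width/height of the bounding box of the human silhouette.
    A ratio well above 1 is one cue for a lying-down posture."""
    ys, xs = np.nonzero(foreground_mask)
    if len(ys) == 0:
        return 0.0
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return w / h

# Synthetic silhouettes: a standing (tall) and a fallen (wide) person.
standing = np.zeros((100, 100), bool); standing[20:80, 45:55] = True
fallen   = np.zeros((100, 100), bool); fallen[60:70, 20:80] = True
print(silhouette_aspect_ratio(standing))  # 10/60 ~ 0.17
print(silhouette_aspect_ratio(fallen))    # 60/10 = 6.0
```

In the full method this cue would be fused with the mass-center velocity, head displacement, and final-posture angle before declaring a fall.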
First-in-Human Trial of a Novel Suprachoroidal Retinal Prosthesis
Ayton, Lauren N.; Blamey, Peter J.; Guymer, Robyn H.; Luu, Chi D.; Nayagam, David A. X.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Yeoh, Jonathan; McCombe, Mark F.; Briggs, Robert J.; Opie, Nicholas L.; Villalobos, Joel; Dimitrov, Peter N.; Varsamidis, Mary; Petoe, Matthew A.; McCarthy, Chris D.; Walker, Janine G.; Barnes, Nick; Burkitt, Anthony N.; Williams, Chris E.; Shepherd, Robert K.; Allen, Penelope J.
2014-01-01
Retinal visual prostheses (“bionic eyes”) have the potential to restore vision to blind or profoundly vision-impaired patients. The medical bionic technology used to design, manufacture and implant such prostheses is still in its relative infancy, with various technologies and surgical approaches being evaluated. We hypothesised that a suprachoroidal implant location (between the sclera and choroid of the eye) would provide significant surgical and safety benefits for patients, allowing them to maintain preoperative residual vision as well as gaining prosthetic vision input from the device. This report details the first-in-human Phase 1 trial to investigate the use of retinal implants in the suprachoroidal space in three human subjects with end-stage retinitis pigmentosa. The success of the suprachoroidal surgical approach and its associated safety benefits, coupled with twelve-month post-operative efficacy data, holds promise for the field of vision restoration. Trial Registration Clinicaltrials.gov NCT01603576 PMID:25521292
Kaur, Gurvinder; Koshy, Jacob; Thomas, Satish; Kapoor, Harpreet; Zachariah, Jiju George; Bedi, Sahiba
2016-04-01
Early detection and treatment of vision problems in children is imperative to meet the challenges of childhood blindness. Considering the problems of inequitable distribution of trained manpower and limited access to quality eye care services for the majority of our population, innovative community-based strategies like 'Teachers training in vision screening' need to be developed for effective utilization of the available human resources. The objective was to evaluate the effectiveness of introducing teachers as first-level vision screeners. Teacher training programs were conducted for school teachers to educate them about childhood ocular disorders and the importance of their early detection. Teachers from government and semi-government schools located in Ludhiana were given training in vision screening. These teachers then conducted vision screening of children in their schools. Subsequently, an ophthalmology team visited these schools for re-evaluation of children identified with low vision. Refraction was performed for all children identified with refractive errors and spectacles were prescribed. Children requiring further evaluation were referred to the base hospital. The project was done in two phases. True positives, false positives, true negatives and false negatives were calculated for evaluation. In phase 1, teachers from 166 schools underwent training in vision screening. The teachers screened 30,205 children and reported eye problems in 4523 (14.97%) children. Subsequently, the ophthalmology team examined 4150 children and confirmed eye problems in 2137 children. Thus, the teachers were able to correctly identify eye problems (true positives) in 47.25% of children. Also, only 13.69% of children had to be examined by the ophthalmology team, thus reducing its workload. Similarly, in phase 2, 46.22% of children were correctly identified as having eye problems (true positives) by the teachers.
By random sampling, 95.65% of children were correctly identified as normal (true negatives) by the teachers. Considering the high true negative rates, the reasonably good true positive rates, and the wider coverage provided by the program, vision screening in schools by teachers is an effective method of identifying children with low vision. This strategy is also valuable in reducing the workload of the eye care staff.
Quantum vision in three dimensions
NASA Astrophysics Data System (ADS)
Roth, Yehuda
We present four models for describing 3-D vision. Similar to the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of his selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. In technology, the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. In artificial intelligence, where the aim is to create a machine that exchanges information using human terminology, our interpretation approach also seems appropriate.
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.
2014-01-01
Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2003-08-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. The human brain's ability to emulate knowledge structures in the form of network-symbolic models has been found, and that means an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.
Development of an In Flight Vision Self-Assessment Questionnaire for Long Duration Space Missions
NASA Technical Reports Server (NTRS)
Byrne, Vicky E.; Gibson, Charles R.; Pierpoline, Katherine M.
2010-01-01
OVERVIEW: A NASA Flight Medicine optometrist teamed with a human factors specialist to develop an electronic questionnaire for crewmembers to record their visual acuity test scores and perceived vision assessment. It will be implemented on the International Space Station (ISS) and administered as part of a suite of tools for early detection of potential vision changes. The goal of this effort was to rapidly develop a set of questions to help in the early detection of visual (e.g., blurred vision) and/or non-visual (e.g., headaches) symptoms by prompting ISS crewmembers to reflect on their own current vision during their spaceflight missions. PROCESS: An iterative process began with a one-page paper questionnaire for the Space Shuttle, generated by the optometrist and updated by applying human factors design principles. It was used as a baseline to establish an electronic questionnaire for ISS missions. Additional questions needed for ISS missions were included, and the information was organized to take advantage of the computer-based file format available. Human factors heuristics were applied to the prototype, which was then reviewed by the optometrist and procedures specialists, with rapid-turnaround updates leading to the final questionnaire. CONCLUSIONS: With only about a month of lead time, a usable tool to collect crewmember assessments was developed through this cross-discipline collaboration. For a modest expenditure of effort, the potential payoff is great. ISS crewmembers will complete the questionnaire at 30 days into the mission, 100 days into the mission, and 30 days prior to return to Earth. The systematic layout may also facilitate physicians' later extraction and quick interpretation of the data. The data collected, along with other measures (e.g., retinal and ultrasound imaging) at regular intervals, could potentially lead to earlier detection and treatment of related vision problems than using the other measures alone.
Development of a volumetric projection technique for the digital evaluation of field of view.
Marshall, Russell; Summerskill, Stephen; Cook, Sharon
2013-01-01
Current regulations for field of view requirements in road vehicles are defined by 2D areas projected on the ground plane. This paper discusses the development of a new software-based volumetric field of view projection tool and its implementation within an existing digital human modelling system. In addition, the exploitation of this new tool is highlighted through its use in a UK Department for Transport funded research project exploring the current concerns with driver vision. Focusing specifically on rearwards visibility in small and medium passenger vehicles, the volumetric approach is shown to provide a number of distinct advantages. The ability to explore multiple projections of both direct vision (through windows) and indirect vision (through mirrors) provides a greater understanding of the field of view environment afforded to the driver whilst still maintaining compatibility with the 2D projections of the regulatory standards. Field of view requirements for drivers of road vehicles are defined by simplified 2D areas projected onto the ground plane. However, driver vision is a complex 3D problem. This paper presents the development of a new software-based 3D volumetric projection technique and its implementation in the evaluation of driver vision in small- and medium-sized passenger vehicles.
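The contrast between the regulatory 2D projected areas and a volumetric assessment can be illustrated with a short sketch. This is purely illustrative: the polygon, frustum geometry, and all dimensions below are invented, not taken from the paper's tool.

```python
# Illustrative sketch: regulations check a 2D area projected on the ground
# plane, while a volumetric approach measures visible 3D space.
# The polygon and frustum below are hypothetical examples.

def shoelace_area(polygon):
    """Area of a 2D polygon given as a list of (x, y) vertices."""
    n = len(polygon)
    s = 0.0
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def frustum_volume(base_area, apex_height):
    """Volume of a viewing pyramid from the eye point to a base patch."""
    return base_area * apex_height / 3.0

# A hypothetical 4 m x 20 m rectangle behind the vehicle, on the ground:
ground_polygon = [(-2, 1), (2, 1), (2, 21), (-2, 21)]
area = shoelace_area(ground_polygon)   # the 2D "regulatory" quantity
volume = frustum_volume(area, 1.2)     # one slice of the 3D visible volume
```

The point of the sketch is only that the 2D area discards the vertical extent of what the driver can actually see, which is what the volumetric tool recovers.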
A color-coded vision scheme for robotics
NASA Technical Reports Server (NTRS)
Johnson, Kelley Tina
1991-01-01
Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain is found to emulate similar graph/network models, which marks a very important paradigm shift in our knowledge about the brain: from neural networks to "cortical software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena, like shape from shading and occlusion, are results of such analysis. Such an approach offers the opportunity not only to explain otherwise puzzling results from cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.
Feedback and feedforward adaptation to visuomotor delay during reaching and slicing movements.
Botzer, Lior; Karniel, Amir
2013-07-01
It has been suggested that the brain and in particular the cerebellum and motor cortex adapt to represent the environment during reaching movements under various visuomotor perturbations. It is well known that significant delay is present in neural conductance and processing; however, the possible representation of delay and adaptation to delayed visual feedback has been largely overlooked. Here we investigated the control of reaching movements in human subjects during an imposed visuomotor delay in a virtual reality environment. In the first experiment, when visual feedback was unexpectedly delayed, the hand movement overshot the end-point target, indicating a vision-based feedback control. Over the ensuing trials, movements gradually adapted and became accurate. When the delay was removed unexpectedly, movements systematically undershot the target, demonstrating that adaptation occurred within the vision-based feedback control mechanism. In a second experiment designed to broaden our understanding of the underlying mechanisms, we revealed similar after-effects for rhythmic reversal (out-and-back) movements. We present a computational model accounting for these results based on two adapted forward models, each tuned for a specific modality delay (proprioception or vision), and a third feedforward controller. The computational model, along with the experimental results, refutes delay representation in a pure forward vision-based predictor and suggests that adaptation occurred in the forward vision-based predictor, and concurrently in the state-based feedforward controller. Understanding how the brain compensates for conductance and processing delays is essential for understanding certain impairments concerning these neural delays as well as for the development of brain-machine interfaces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
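The overshoot under delayed visual feedback can be reproduced with a toy first-order feedback controller. This is a minimal sketch, not the authors' two-forward-model scheme; the gain and delay values are assumed for illustration.

```python
def simulate_reach(delay_s, gain=5.0, dt=0.01, t_end=3.0, target=1.0):
    """Simulate a 1-D reach where velocity is driven by the visual error,
    but the controller sees the hand position delayed by delay_s seconds."""
    steps = int(t_end / dt)
    lag = int(delay_s / dt)
    xs = [0.0]
    for i in range(steps):
        seen = xs[i - lag] if i >= lag else xs[0]  # delayed view of the hand
        xs.append(xs[i] + gain * (target - seen) * dt)
    return xs

no_delay = simulate_reach(0.0)
delayed = simulate_reach(0.15)
# Without delay the hand approaches the target monotonically;
# with a 150 ms visual delay the same controller overshoots the target,
# mirroring the overshoot seen when the delay was introduced unexpectedly.
```

The toy model captures only the first experimental observation (overshoot on unexpected delay); the after-effects require the adaptive forward models described in the abstract.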
ERIC Educational Resources Information Center
Xie, Jinyu; Huang, Erjia
2010-01-01
Based on a literature review from English language journals related to the field of human resource development (HRD), the conceptual framework for this study was derived from the models developed by American Society for Training and Development (ASTD) for HRD practice. This study compared and analyzed the similarities and differences in HRD roles,…
Casaroli-Marano, Ricardo P.; Nieto-Nicolau, Núria; Martínez-Conesa, Eva M.; Edel, Michael; Álvarez-Palomo, Ana B.
2015-01-01
The integrity and normal function of the corneal epithelium are crucial for maintaining the cornea’s transparency and vision. The existence of a cell population with progenitor characteristics in the limbus maintains a dynamic of constant epithelial repair and renewal. Currently, cell-based therapies for bio replacement—cultured limbal epithelial transplantation (CLET) and cultured oral mucosal epithelial transplantation (COMET)—present very encouraging clinical results for treating limbal stem cell deficiency (LSCD) and restoring vision. Another emerging therapeutic approach consists of obtaining and implementing human progenitor cells of different origins in association with tissue engineering methods. The development of cell-based therapies using stem cells, such as human adult mesenchymal or induced pluripotent stem cells (IPSCs), represent a significant breakthrough in the treatment of certain eye diseases, offering a more rational, less invasive, and better physiological treatment option in regenerative medicine for the ocular surface. This review will focus on the main concepts of cell-based therapies for the ocular surface and the future use of IPSCs to treat LSCD. PMID:26239129
Mechanisms, functions and ecology of colour vision in the honeybee.
Hempel de Ibarra, N; Vorobyev, M; Menzel, R
2014-06-01
Research in the honeybee has laid the foundations for our understanding of insect colour vision. The trichromatic colour vision of honeybees shares fundamental properties with primate and human colour perception, such as colour constancy, colour opponency, and the segregation of colour and brightness coding. Laborious efforts to reconstruct the colour vision pathway in the honeybee have provided detailed descriptions of neural connectivity and the properties of photoreceptors and interneurons in the optic lobes of the bee brain. The modelling of colour perception advanced with the establishment of colour discrimination models based on experimental data, the Colour-Opponent Coding and Receptor Noise-Limited models, which are important tools for the quantitative assessment of bee colour vision and colour-guided behaviours. Major insights into the visual ecology of bees have been gained by combining behavioural experiments and quantitative modelling, and by asking how bee vision has influenced the evolution of flower colours and patterns. Recently, research has focused on the discrimination and categorisation of coloured patterns, colourful scenes and various other groupings of coloured stimuli, highlighting the bees' behavioural flexibility. The identification of perceptual mechanisms remains of fundamental importance for the interpretation of their learning strategies and performance in diverse experimental tasks.
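The Receptor Noise-Limited model mentioned above can be sketched for a trichromat using the standard Vorobyev-Osorio chromatic-distance formula. The quantum catches and noise values below are illustrative placeholders, not honeybee measurements.

```python
import math

def rnl_distance(q_a, q_b, noise):
    """Chromatic distance between stimuli A and B for a trichromat under the
    receptor-noise-limited model. q_a, q_b: receptor quantum catches (3 values
    each); noise: receptor noise (Weber fractions), one per receptor class."""
    f = [math.log(a / b) for a, b in zip(q_a, q_b)]  # receptor contrasts
    e1, e2, e3 = noise
    num = (e1 ** 2 * (f[2] - f[1]) ** 2
           + e2 ** 2 * (f[2] - f[0]) ** 2
           + e3 ** 2 * (f[1] - f[0]) ** 2)
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return math.sqrt(num / den)  # in just-noticeable-difference units

# A purely achromatic change (all catches scaled equally) yields distance 0,
# since the model discards brightness and keeps only opponent signals:
rnl_distance((1.0, 1.0, 1.0), (2.0, 2.0, 2.0), (0.13, 0.06, 0.12))
```

Because the distance depends only on differences between receptor contrasts, any stimulus pair differing only in overall intensity is predicted to be indiscriminable, which is the sense in which the model segregates colour from brightness coding.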
A Return to the Human in Humanism: A Response to Hansen's Humanistic Vision
ERIC Educational Resources Information Center
Lemberger, Matthew E.
2012-01-01
In his extension of the humanistic vision, Hansen (2012) recommends that counseling practitioners and scholars adopt operations that are consistent with his definition of a multiple-perspective philosophy. Alternatively, the author of this article believes that Hansen has reduced the capacity of the human to interpret meaning through quantitative…
Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J
2014-09-26
This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.
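The final step, estimating the detected worker's position in 3D relative to the camera, reduces for a rectified stereo pair to triangulation from disparity. This is a generic sketch; the focal length, baseline, and image coordinates below are assumed values, not the authors' rig parameters.

```python
def triangulate(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """3D point (in the left-camera frame) of a feature seen at column
    u_left / u_right in a rectified stereo pair, and at row v in both images."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    z = focal_px * baseline_m / disparity          # depth along the optical axis
    x = (u_left - cx) * z / focal_px               # lateral offset
    y = (v - cy) * z / focal_px                    # vertical offset
    return x, y, z

# Hypothetical rig: 700 px focal length, 12 cm baseline, 640x480 images.
x, y, z = triangulate(340, 320, 240, focal_px=700, baseline_m=0.12,
                      cx=320, cy=240)
# A 20 px disparity places the reflective marker 4.2 m from the camera.
```

Repeating this for each matched reflective pattern over time yields the 3D trajectories that the tracker filters.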
Human infrared vision is triggered by two-photon chromophore isomerization
Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof
2014-01-01
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The structure of the human eye and the absorption spectra of its pigments limit our visual perception of light; our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near-infrared light, with a sensitivity that paradoxically increases at wavelengths above 900 nm and a quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound, 11-cis-retinyl-propylamine Schiff base, demonstrate direct two-photon isomerization of the visual chromophore. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near-infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
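The quadratic power dependence that signals a two-photon process can be checked from the slope of a log-log plot of response versus laser power. The numbers below are toy values assuming an ideal response R = cP^2, not data from the study.

```python
import math

def loglog_slope(powers, responses):
    """Least-squares slope of log(response) versus log(power).
    A slope of ~1 indicates a linear (one-photon) process, ~2 a two-photon one."""
    xs = [math.log(p) for p in powers]
    ys = [math.log(r) for r in responses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

powers = [1.0, 2.0, 4.0, 8.0]
responses = [0.5 * p ** 2 for p in powers]   # ideal two-photon response
slope = loglog_slope(powers, responses)      # -> 2.0 (quadratic)
```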
Scanpath-based analysis of objects conspicuity in context of human vision physiology.
Augustyniak, Piotr
2007-01-01
This paper discusses principal aspects of object conspicuity investigated with the use of an eye tracker and interpreted against the background of human vision physiology. Proper management of object conspicuity is fundamental in several leading-edge applications in the information society, such as advertisement, web design, man-machine interfacing and ergonomics. Although some common rules of human perception have been applied in art for centuries, interest in the human perception process is motivated today by the need to capture and hold the recipient's attention by putting selected messages in front of the others. Our research uses the visual task methodology and series of progressively modified natural images. The modified details were characterized by their size, color and position, while the scanpath-derived gaze points confirmed or refuted the act of perception. The statistical analysis yielded the probability of detail perception and its correlations with the attributes. This probability conforms to the knowledge about retinal anatomy and perception physiology, although we use noninvasive methods only.
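The probability-of-perception estimate can be sketched as follows: a detail counts as perceived by a subject if any of that subject's gaze points lands inside the detail's bounding box. The box and scanpath coordinates are invented for illustration; the actual study's criterion may differ.

```python
def perceived(gaze_points, box):
    """True if any gaze point (x, y) lands inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x, y in gaze_points)

def perception_probability(scanpaths, box):
    """Fraction of subjects whose scanpath reached the detail's box."""
    hits = sum(1 for path in scanpaths if perceived(path, box))
    return hits / len(scanpaths)

# Three hypothetical subjects viewing a detail at x in [100, 140], y in [50, 80]:
scanpaths = [
    [(20, 30), (110, 60), (300, 200)],   # fixates the detail
    [(15, 25), (400, 300)],              # misses it
    [(130, 70)],                         # fixates the detail
]
perception_probability(scanpaths, (100, 50, 140, 80))  # -> 2/3
```

Correlating such probabilities with the details' size, color, and position attributes is then a standard statistical step.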
BaffleText: a Human Interactive Proof
NASA Astrophysics Data System (ADS)
Chew, Monica; Baird, Henry S.
2003-01-01
Internet services designed for human use are being abused by programs. We present a defense against such attacks in the form of a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) that exploits the difference in ability between humans and machines in reading images of text. CAPTCHAs are a special case of 'human interactive proofs,' a broad class of security protocols that allow people to identify themselves over networks as members of given groups. We point out vulnerabilities of reading-based CAPTCHAs to dictionary and computer-vision attacks. We also draw on the literature on the psychophysics of human reading, which suggests fresh defenses available to CAPTCHAs. Motivated by these considerations, we propose BaffleText, a CAPTCHA which uses non-English pronounceable words to defend against dictionary attacks, and Gestalt-motivated image-masking degradations to defend against image restoration attacks. Experiments on human subjects confirm the human legibility and user acceptance of BaffleText images. We have found an image-complexity measure that correlates well with user acceptance and assists in engineering the generation of challenges to fit the ability gap. Recent computer-vision attacks, run independently by Mori and Malik, suggest that BaffleText is stronger than two existing CAPTCHAs.
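One simple way to generate pronounceable non-dictionary challenge words, in the spirit of (though not identical to) BaffleText's defense against dictionary attacks, is to chain consonant-vowel syllables and reject any dictionary hits. The syllable inventory and the stand-in word list below are invented.

```python
import random

CONSONANTS = "bdfgklmnprstv"
VOWELS = "aeiou"

def pronounceable_word(n_syllables, rng):
    """Random word built from consonant-vowel syllables, so it reads
    naturally to humans despite not being an English word."""
    return "".join(rng.choice(CONSONANTS) + rng.choice(VOWELS)
                   for _ in range(n_syllables))

def challenge_word(dictionary, n_syllables=3, seed=0):
    """Generate a pronounceable word that is NOT in the dictionary,
    defeating attacks that guess challenges from a word list."""
    rng = random.Random(seed)
    while True:
        word = pronounceable_word(n_syllables, rng)
        if word not in dictionary:
            return word

# Tiny stand-in dictionary for illustration:
word = challenge_word({"banana", "potato", "laptop"})
```

In the actual system such a word would then be rendered as an image with Gestalt-motivated masking degradations; the snippet covers only the lexical side of the defense.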
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain is found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, like perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models combines learning, classification and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.
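The idea of transforming a low-level pixel grid into a more abstract, object-level structure can be illustrated with a plain connected-components pass, one of the simplest graph/network transformations from image structure to region nodes. This is a generic sketch, not the author's system; the 3x4 "image" is invented.

```python
from collections import deque

def regions_from_grid(grid):
    """Group 4-connected cells with equal labels into abstract region nodes.
    Returns a list of (label, set_of_cells): a coarser graph of the image."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            label, cells, queue = grid[r][c], set(), deque([(r, c)])
            seen[r][c] = True
            while queue:                      # breadth-first flood fill
                y, x = queue.popleft()
                cells.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and grid[ny][nx] == label):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            regions.append((label, cells))
    return regions

# A 3x4 "image" with a figure (1) on a ground (0) collapses to two nodes,
# which higher-level structures can then analyze as figure and ground:
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
regions = regions_from_grid(grid)  # -> 2 region nodes
```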
Computational models of human vision with applications
NASA Technical Reports Server (NTRS)
Wandell, B. A.
1985-01-01
Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as a tool for image processing, and it is suggested that it could be used in signal processing and image processing applications.
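The discrete Hartley transform mentioned above is easily expressed through the FFT, since H(k) = Re F(k) - Im F(k); unlike the Fourier transform it maps real signals to real signals and is its own inverse up to a factor of N. A small numpy sketch:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform: H(k) = sum_n x(n) * cas(2*pi*k*n/N),
    where cas(t) = cos(t) + sin(t). Computed here via the FFT identity
    H(k) = Re F(k) - Im F(k)."""
    F = np.fft.fft(x)
    return F.real - F.imag

x = np.array([1.0, 2.0, 0.0, -1.0])
H = dht(x)                 # real-valued spectrum
x_back = dht(H) / len(x)   # the DHT is self-inverse up to 1/N
```

The self-inverse property is one reason the Hartley transform was attractive for hardware signal processing: the same circuit computes both the forward and inverse transforms.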
Maria Montessori's Cosmic Vision, Cosmic Plan, and Cosmic Education
ERIC Educational Resources Information Center
Grazzini, Camillo
2013-01-01
This classic position of the breadth of Cosmic Education begins with a way of seeing the human's interaction with the world, continues on to the grandeur in scale of time and space of that vision, then brings the interdependency of life where each growing human becomes a participating adult. Mr. Grazzini confronts the laws of human nature in…
Humanoids for lunar and planetary surface operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing
2005-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair and operation of lunar/planetary habitats, bases and settlements. It integrates this vision with recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule and development spirals of Project Constellation over the next decades. These milestones relate to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to the readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.
Future In-Space Operations (FISO): A Working Group and Community Engagement
NASA Technical Reports Server (NTRS)
Thronson, Harley; Lester, Dan
2013-01-01
Long-duration human capabilities beyond low Earth orbit (LEO), either in support of or as an alternative to lunar surface operations, have been assessed at least since the late 1960s. Over the next few months, we will present short histories of concepts for long-duration, free-space human habitation beyond LEO from the end of the Apollo program to the Decadal Planning Team (DPT)/NASA Exploration Team (NExT), which was active in 1999-2000 (see "Forging a vision: NASA's Decadal Planning Team and the origins of the Vision for Space Exploration", The Space Review, December 19, 2005). Here we summarize the brief existence of the Future In-Space Operations (FISO) working group in 2005-2006 and its successor, a telecon-based colloquium series, which we co-moderate.
Autonomous spacecraft landing through human pre-attentive vision.
Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E
2012-06-01
In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. Up to the present, major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making processes in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft and optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence and timing of the retargeting.
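Pre-attentive saliency selection can be sketched as a center-surround contrast map restricted to a reachability mask. This is a toy model, far simpler than the system described; the terrain values and mask are invented.

```python
import numpy as np

def box_blur(img):
    """Mean of each 3x3 neighbourhood (edges wrap, acceptable for a toy)."""
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

def most_salient_site(img, reachable):
    """(row, col) of the highest center-surround contrast among reachable
    pixels; unreachable pixels are excluded from retargeting candidates."""
    saliency = np.abs(img - box_blur(img))   # contrast with local surround
    saliency[~reachable] = -np.inf           # enforce reachability condition
    return np.unravel_index(np.argmax(saliency), img.shape)

terrain = np.zeros((8, 8))
terrain[2, 5] = 1.0                      # a single bright, salient feature
reachable = np.ones((8, 8), dtype=bool)  # assume everything is reachable
site = most_salient_site(terrain, reachable)  # -> (2, 5)
```

In the actual system the retargeting decision additionally passes through the spacecraft model and fuel-optimality constraints; the mask stands in for those safety and reachability conditions.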
ERIC Educational Resources Information Center
Goldman, Martin
1985-01-01
An elementary physical model of cone receptor cells is explained and applied to the complexities of human color vision. One-, two-, and three-receptor systems are considered, with the latter shown to be the best model for the human eye. Color blindness is also discussed. (DH)
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
In his 1960 paper "Man-Computer Symbiosis," Licklider predicted that human brains and computing machines would be coupled in a tight partnership that would think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. Today we are on the threshold of resurrecting the vision of symbiosis. While Licklider's original vision suggested a co-equal relationship, here we discuss an updated vision, neo-symbiosis, in which the human holds a superordinate position in an intelligent human-computer collaborative environment. This paper was originally published as a journal article and is being published as a chapter in an upcoming book series, Advances in Novel Approaches in Cognitive Informatics and Natural Intelligence.
Genetics Home Reference: color vision deficiency
Sources for this page: Deeb SS. Molecular genetics of color-vision deficiencies. Vis Neurosci. 2004 May-Jun;21(3):191-6. Review. Deeb SS. The molecular basis of variation in human color vision. Clin ...
Compact and wide-field-of-view head-mounted display
NASA Astrophysics Data System (ADS)
Uchiyama, Shoichi; Kamakura, Hiroshi; Karasawa, Joji; Sakaguchi, Masafumi; Furihata, Takeshi; Itoh, Yoshitaka
1997-05-01
A compact, wide-field-of-view HMD with 1.32-in full-color VGA poly-Si TFT LCDs and simple eyepieces, much like LEEP optics, has been developed. The total field of view is 80 deg, with a 40 deg overlap in its central area. Each optical unit, which includes an LCD and eyepiece, is 46 mm in diameter and 42 mm in length. The total number of pixels is equivalent to (864 × 3) × 480. This HMD achieves its wide field of view and compact size by having a narrower binocular (overlap) area than that of commercial HMDs. For this reason, monocular vision is expected to occur more frequently than with commercial HMDs or natural human vision. We therefore studied the convergence state of the eyes while observers viewed the monocular areas of this HMD, using an EOG, and considered the suitability of this HMD to human vision. We found that the convergence state during monocular vision was nearly equal to that during binocular vision; that is, this HMD has the potential to be well suited to human vision in terms of convergence.
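The geometry quoted above (80 deg total field with a 40 deg central overlap) fixes the per-eye field and the width of each monocular strip; a quick check:

```python
def fov_breakdown(total_deg, overlap_deg):
    """Per-eye field of view and monocular strip width for a partially
    overlapped binocular HMD, from total = 2 * per_eye - overlap."""
    per_eye = (total_deg + overlap_deg) / 2.0
    monocular_each = (total_deg - overlap_deg) / 2.0
    return per_eye, monocular_each

per_eye, mono = fov_breakdown(80, 40)
# Each eye sees a 60 deg field; a 20 deg monocular strip flanks each side
# of the 40 deg binocular overlap (20 + 40 + 20 = 80 deg total).
```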
Human olfaction: a constant state of change-blindness
Sela, Lee
2010-01-01
Paradoxically, although humans have a superb sense of smell, they don't trust their nose. Furthermore, although human odorant detection thresholds are very low, only unusually high odorant concentrations spontaneously shift our attention to olfaction. Here we suggest that this lack of olfactory awareness reflects the nature of olfactory attention, which is shaped by the spatial and temporal envelopes of olfaction. Regarding the spatial envelope, selective attention is allocated in space. Humans direct an attentional spotlight within spatial coordinates in both vision and audition. Human olfactory spatial abilities are minimal. Thus, with no olfactory space, there is no arena for olfactory selective attention. Regarding the temporal envelope, whereas vision and audition consist of nearly continuous input, olfactory input is discrete, made of sniffs widely separated in time. If similar temporal breaks are artificially introduced to vision and audition, they induce "change blindness", a loss of attentional capture that results in a lack of awareness of change. Whereas "change blindness" is an aberration of vision and audition, the long inter-sniff interval renders "change anosmia" the norm in human olfaction. Therefore, attentional capture in olfaction is minimal, as is human olfactory awareness. All this, however, does not diminish the role of olfaction through sub-attentive mechanisms that allow subliminal smells a profound influence on human behavior and perception. PMID:20603708
Pre-impact fall detection system using dynamic threshold and 3D bounding box
NASA Astrophysics Data System (ADS)
Otanasap, Nuth; Boonbrahm, Poonpong
2017-02-01
Fall prevention and detection systems must overcome many challenges before they can be made efficient. Some of the difficult problems in vision-based systems are obtrusion, occlusion and overlap. Other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Vision-based estimation of human motion usually involves partial overlap, caused by the viewing direction between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a posture analysis method based on a dynamic threshold and a bounding box, using a multiple-Kinect-camera setup, for human posture analysis and fall detection. The proposed work uses only two Kinect cameras to acquire distributed values and differentiate between normal activities and falls. If the peak value of head velocity is greater than the dynamic threshold value, bounding box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at a right angle addresses the skeleton overlay problem of a single Kinect. This work contributes a fusion of multiple Kinect-based skeletons, based on dynamic thresholding and bounding box posture analysis, which is the only such research work reported so far.
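The two-stage decision described (a head-velocity peak tested against a dynamic threshold, then confirmed by bounding-box posture analysis) can be sketched as follows. The threshold rule (mean plus k standard deviations) and the width-greater-than-height posture cutoff are illustrative assumptions, not the paper's exact formulation.

```python
def dynamic_threshold(velocities, k=3.0):
    """Adaptive threshold: mean + k * standard deviation of recent head speeds,
    so the cutoff tracks the person's normal movement rather than a fixed value."""
    n = len(velocities)
    mean = sum(velocities) / n
    var = sum((v - mean) ** 2 for v in velocities) / n
    return mean + k * var ** 0.5

def is_fall(head_speeds, bbox_w, bbox_h, history):
    """Stage 1: peak head speed must exceed the dynamic threshold.
    Stage 2: the bounding box must be wider than tall (a lying posture)."""
    if max(head_speeds) <= dynamic_threshold(history):
        return False
    return bbox_w > bbox_h   # posture confirmation

history = [0.2, 0.3, 0.25, 0.28, 0.22]   # normal walking head speeds (m/s)
is_fall([0.3, 2.5, 1.8], bbox_w=1.7, bbox_h=0.5, history=history)   # fall
is_fall([0.3, 0.35, 0.3], bbox_w=0.6, bbox_h=1.8, history=history)  # not a fall
```

The second stage is what keeps fast but upright movements (sitting down quickly, bending) from being flagged on the velocity spike alone.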
Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.
Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique
Individual items of any agricultural commodity differ from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and kinds of knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment and contaminant detection.
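A minimal colour-based inspection rule, in the spirit of the colour-image sorting systems described, might classify each pixel by hue and grade the item by defect coverage. The hue band for "defect" pixels and the rejection ratio below are invented examples; real bands come from calibration against the commodity.

```python
def defect_fraction(pixels, defect_hue_range=(20, 60)):
    """Fraction of pixels whose hue falls in a band associated with defects
    (e.g. brown bruising); pixels are (hue_deg, sat, val) tuples.
    Low-saturation pixels are ignored as unreliable for hue classification."""
    lo, hi = defect_hue_range
    bad = sum(1 for h, s, v in pixels if lo <= h <= hi and s > 0.2)
    return bad / len(pixels)

def grade(pixels, reject_above=0.05):
    """Sort an item into a commercial category by defect coverage."""
    return "reject" if defect_fraction(pixels) > reject_above else "accept"

# Hypothetical items: mostly sound surface (hue ~10) with some pixels in
# the illustrative "defect" hue band (~40).
clean = [(10, 0.9, 0.8)] * 95 + [(40, 0.8, 0.4)] * 5
bruised = [(10, 0.9, 0.8)] * 80 + [(40, 0.8, 0.4)] * 20
grade(clean), grade(bruised)   # -> ('accept', 'reject')
```

The same skeleton extends to spectral imaging by replacing the hue test with per-band reflectance thresholds outside the visible range.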
NASA Astrophysics Data System (ADS)
Kashyap, Vipul
New innovations and technologies are very often disruptive; at the same time, they enable novel next-generation infrastructures and solutions. These solutions introduce great efficiencies in the form of streamlined processes and the ability to create, organize, share and manage knowledge effectively, and at the same time provide crucial enablers for proposing and realizing new visions. In this paper, we propose a new vision of the next-generation healthcare enterprise and discuss how Translational Medicine, which aims to improve communication between the basic and clinical sciences, is a key requirement for achieving this vision. Under it, therapeutic insights may be derived from new scientific ideas, and vice versa. Translational research goes from bench to bedside, where theories emerging from preclinical experimentation are tested on disease-affected human subjects, and from bedside to bench, where information obtained from preliminary human experimentation can be used to refine our understanding of the biological principles underpinning the heterogeneity of human disease and polymorphism(s). Informatics, and semantic technologies in particular, have a big role to play in making this a reality. We identify critical requirements, viz., data integration, clinical decision support, and knowledge maintenance and provenance, and illustrate semantics-based solutions with respect to example scenarios and use cases.
Human performance models for computer-aided engineering
NASA Technical Reports Server (NTRS)
Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)
1989-01-01
This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.
Rehabilitation regimes based upon psychophysical studies of prosthetic vision
NASA Astrophysics Data System (ADS)
Chen, S. C.; Suaning, G. J.; Morley, J. W.; Lovell, N. H.
2009-06-01
Human trials of prototype visual prostheses have successfully elicited visual percepts (phosphenes) in the visual field of implant recipients blinded by retinitis pigmentosa and age-related macular degeneration. Researchers are progressing rapidly towards a device that uses individual phosphenes as the elementary building blocks of a visual scene. This form of prosthetic vision is expected, in the near term, to have low resolution, large inter-phosphene gaps, a distorted spatial distribution of phosphenes, a restricted field of view, an eccentrically located phosphene field and a limited number of expressible luminance levels. In order to fully realize the potential of these devices, there needs to be a training and rehabilitation program that helps prosthesis recipients understand what they are seeing and adapt their viewing habits to optimize the performance of the device. Based on the literature of psychophysical studies in simulated and real prosthetic vision, this paper proposes a comprehensive, theoretical training regime for a prosthesis recipient: visual search, visual acuity, reading, face/object recognition, hand-eye coordination and navigation. The aim of these tasks is to train recipients in visual scanning, eccentric viewing and reading, discerning low-contrast visual information, and coordinating bodily actions for visually guided tasks under prosthetic vision. These skills have been identified as playing an important role in making prosthetic vision functional for the daily activities of its recipients.
Art, Illusion and the Visual System.
ERIC Educational Resources Information Center
Livingstone, Margaret S.
1988-01-01
Describes the three part system of human vision. Explores the anatomical arrangement of the vision system from the eyes to the brain. Traces the path of various visual signals to their interpretations by the brain. Discusses human visual perception and its implications in art and design. (CW)
Luo, Jiebo; Boutell, Matthew
2005-05-01
Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, impressive given the findings of a psychophysical study conducted recently. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
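A confidence-weighted fusion of a low-level cue and a semantic cue over the four candidate orientations can be sketched as below. The weighting scheme (tempering the semantic likelihood by a confidence exponent) is illustrative; the paper's actual Bayesian-network integration is more elaborate.

```python
import numpy as np

def fuse_orientation(low_level_probs, semantic_probs, sem_confidence):
    """Confidence-weighted fusion of two cues over the four candidate
    orientations (0, 90, 180, 270 degrees). Each cue is treated as a
    likelihood; the semantic cue is tempered by its confidence."""
    low = np.asarray(low_level_probs, float)
    sem = np.asarray(semantic_probs, float)
    posterior = low * sem ** sem_confidence
    return posterior / posterior.sum()

# low-level cue weakly prefers 0 deg; an upright face detection is decisive
p = fuse_orientation([0.4, 0.2, 0.2, 0.2], [0.85, 0.05, 0.05, 0.05], 1.0)
print(p.round(3))   # -> [0.919 0.027 0.027 0.027]
```

With `sem_confidence=0`, the semantic cue is ignored and the posterior falls back to the low-level distribution, mimicking images where no semantic object is detected.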
Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng
2012-10-08
In this paper, the conditions for uniform lighting generated by a light-emitting diode (LED) array were systematically studied. To take human visual perception into consideration, the contrast sensitivity function (CSF) was adopted as the criterion for uniform lighting instead of the conventionally used Sparrow's criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LEDs' spatial radiation distribution and the viewing condition can be combined analytically. For a specific LED array lighting system (LALS) with a foursquare LED arrangement, different types of LEDs (Lambertian and batwing) and a given viewing condition, optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method yields more appropriate optimum parameters than the SC method. Additionally, an abnormal phenomenon, in which uniformity varies non-monotonically with the structural parameters of an LALS with non-Lambertian LEDs, was found and analyzed. Based on this analysis, a design method for LALS that achieves better practicability, lower cost and a more attractive appearance is summarized.
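The geometric part of such an analysis, the illuminance pattern a regular LED array casts on the target plane, can be sketched as follows; the CSF-based visibility criterion is a separate perceptual step not modelled here. The generalised-Lambertian exponent m, the pitch and the thickness are illustrative values.

```python
import numpy as np

def illuminance(points, leds, h, m=1):
    """Relative illuminance on a plane at distance h below a grid of
    identical LEDs with a generalised-Lambertian pattern I ~ cos^m(theta).
    Each LED contributes I(theta)*cos(theta)/d^2 (inverse-square law)."""
    E = np.zeros(len(points))
    for (lx, ly) in leds:
        r2 = (points[:, 0] - lx) ** 2 + (points[:, 1] - ly) ** 2 + h ** 2
        cos_t = h / np.sqrt(r2)
        E += cos_t ** m * cos_t / r2
    return E

# 4x4 foursquare array with pitch 20 mm, plane 25 mm away,
# sampled along a line across the middle of the lit area
pitch, h = 20.0, 25.0
leds = [(i * pitch, j * pitch) for i in range(4) for j in range(4)]
xs = np.linspace(0, 3 * pitch, 61)
pts = np.stack([xs, np.full_like(xs, 1.5 * pitch)], axis=1)
E = illuminance(pts, leds, h)
print(E.max() / E.min())   # peak-to-valley ratio, a simple uniformity metric
```

Sweeping `h` or `pitch` and minimising this ratio reproduces the SC-style geometric optimisation; the paper's contribution is to judge the residual ripple against what the CSF says an observer can actually see.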
The role of temporal structure in human vision.
Blake, Randolph; Lee, Sang-Hun
2005-03-01
Gestalt psychologists identified several stimulus properties thought to underlie visual grouping and figure/ground segmentation, and among those properties was common fate: the tendency to group together individual objects that move together in the same direction at the same speed. Recent years have witnessed an upsurge of interest in visual grouping based on other time-dependent sources of visual information, including synchronized changes in luminance, in motion direction, and in figure/ground relations. These various sources of temporal grouping information can be subsumed under the rubric temporal structure. In this article, the authors review evidence bearing on the effectiveness of temporal structure in visual grouping. They start with an overview of evidence bearing on temporal acuity of human vision, covering studies dealing with temporal integration and temporal differentiation. They then summarize psychophysical studies dealing with figure/ground segregation based on temporal phase differences in deterministic and stochastic events. The authors conclude with a brief discussion of neurophysiological implications of these results.
NASA Technical Reports Server (NTRS)
Hart, Sandra G.
1988-01-01
The state-of-the-art helicopter and its pilot are examined using the tools of human-factors analysis. The significant role of human error in helicopter accidents is discussed; the history of human-factors research on helicopters is briefly traced; the typical flight tasks are described; and the noise, vibration, and temperature conditions typical of modern military helicopters are characterized. Also considered are helicopter controls, cockpit instruments and displays, and the impact of cockpit design on pilot workload. Particular attention is given to possible advanced-technology improvements, such as control stabilization and augmentation, FBW and fly-by-light systems, multifunction displays, night-vision goggles, pilot night-vision systems, night-vision displays with superimposed symbols, target acquisition and designation systems, and aural displays. Diagrams, drawings, and photographs are provided.
The role of vision processing in prosthetic vision.
Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette
2012-01-01
Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
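One common way to preview the reduced resolution and dynamic range described above is to simulate prosthetic vision by averaging a grayscale image onto a coarse phosphene grid and quantising it to a few luminance levels. The grid size and level count below are illustrative, not taken from any particular device.

```python
import numpy as np

def simulate_prosthetic(img, grid=(8, 8), levels=4):
    """Reduce a grayscale image to a coarse phosphene grid with few
    luminance levels: each cell is the mean of its patch, quantised."""
    h, w = img.shape
    gh, gw = grid
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            patch = img[i * h // gh:(i + 1) * h // gh,
                        j * w // gw:(j + 1) * w // gw]
            out[i, j] = patch.mean()
    return np.round(out / 255 * (levels - 1)).astype(int)

img = np.zeros((64, 64)); img[:, 32:] = 255      # bright right half
ph = simulate_prosthetic(img)
print(ph[0])   # -> [0 0 0 0 3 3 3 3]
```

Vision processing of the kind the paper advocates would run before this reduction, e.g. boosting task-critical edges so they survive the loss of resolution and dynamic range.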
The Role of the Community Nurse in Promoting Health and Human Dignity-Narrative Review Article
Muntean, Ana; Tomita, Mihaela; Ungureanu, Roxana
2013-01-01
Abstract Background: Population health, as defined by the WHO in its constitution, is “a state of complete physical, mental and social wellbeing”. At the basis of human welfare is human dignity, a dimension that requires an integrated vision of health care. Bronfenbrenner's ecosystemic vision highlights the unexpected connections between the values-based social macrosystem and the microsystem consisting of the individual and the family. The community nurse is expected to put into practice, through education and care, respect for human dignity, the bonds between the values and practices of the community, and the physical health of individuals. In Romania, the promotion of community nursing began in 2002, through the project promoting social inclusion by developing human and institutional resources within community nursing, run by the National School of Public Health, Management and Education in Healthcare, Bucharest. Community nursing took shape in the 10 counties included in the project. Considering respect for human dignity as an axiomatic value of community nurse interventions, we stress the need to develop a primary care network in Romania. The argument is based on an analysis of the concept of human dignity within health care, as well as a secondary analysis of the 2010 health indicators of the 10 counties included in the project. Our conclusions draw attention to the need for community nurses and open directions for new research and developments needed to promote primary health care in Romania. PMID:26060614
Genetic Testing as a New Standard for Clinical Diagnosis of Color Vision Deficiencies.
Davidoff, Candice; Neitz, Maureen; Neitz, Jay
2016-09-01
The genetics underlying inherited color vision deficiencies is well understood: causative mutations change the copy number or sequence of the long (L), middle (M), or short (S) wavelength sensitive cone opsin genes. This study evaluated the potential of opsin gene analyses for use in clinical diagnosis of color vision defects. We tested 1872 human subjects using direct sequencing of opsin genes and a novel genetic assay that characterizes single nucleotide polymorphisms (SNPs) using the MassArray system. Of the subjects, 1074 also were given standard psychophysical color vision tests for a direct comparison with current clinical methods. Protan and deutan deficiencies were classified correctly in all subjects identified by MassArray as having red-green defects. Estimates of defect severity based on SNPs that control photopigment spectral tuning correlated with estimates derived from Nagel anomaloscopy. The MassArray assay provides genetic information that can be useful in the diagnosis of inherited color vision deficiency including presence versus absence, type, and severity, and it provides information to patients about the underlying pathobiology of their disease. The MassArray assay provides a method that directly analyzes the molecular substrates of color vision that could be used in combination with, or as an alternative to current clinical diagnosis of color defects.
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device that communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. These systems have been tested in the lab as well as on an outdoor test track, with positive results showing that at five mph the vehicle can follow a line while avoiding obstacles.
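The blob coordinates reported by the tracker lend themselves to a simple proportional steering law: steer toward the midpoint of the two lane lines. The gain and sign convention below are assumptions for illustration; the Bearcat's actual controller is not detailed in the abstract.

```python
def steering_command(left_x, right_x, image_width, gain=0.5):
    """Proportional steering from the x-coordinates of the left and right
    lane-marker blobs: drive the lane midpoint to the image centre.
    Positive output steers right, negative steers left (assumed convention)."""
    lane_center = (left_x + right_x) / 2.0
    error = lane_center - image_width / 2.0      # pixels off-centre
    return gain * error

print(steering_command(100, 300, 400))   # centred lane -> 0.0
print(steering_command(140, 340, 400))   # lane shifted right -> 20.0 (steer right)
```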
NASA Astrophysics Data System (ADS)
Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul
2018-04-01
Chicken eggs are a food in high demand by humans. Human operators cannot work perfectly and continuously when grading eggs. Instead of grading eggs by weight, an automatic grading system using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that a number of egg classes will change when egg shape parameters are used instead of weight. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A-D) produced in Malaysia were captured. Then, the egg images were processed using image pre-processing techniques such as cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, were extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) were performed with a k-nearest-neighbour classifier in the classification process. Two approaches, namely supervised learning (labels from weight-based grading by the egg supplier) and unsupervised learning (labels from our own shape-based grading), were used in the experiments. The clustering results reveal many changes of egg class after shape-based grading. On average, the best recognition rate is 94.16% with shape-based labels, versus 44.17% with weight-based labels. In conclusion, an automated egg grading system using computer vision is better implemented with shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
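A reduced version of the shape-based grading pipeline can be sketched as follows: moment-based shape features from a binary egg mask, then nearest-neighbour classification. The feature subset, the synthetic ellipse masks and the 1-NN rule are simplifications of the paper's eight features, feature selection and k-NN.

```python
import numpy as np

def shape_features(mask):
    """Area, moment-based axis lengths and aspect ratio from a binary mask,
    a subset of the paper's eight shape features."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    cov = np.cov(np.stack([xs, ys]))
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    major, minor = 4 * np.sqrt(evals)            # ellipse-equivalent axes
    return np.array([area, major, minor, major / minor])

def classify(feat, train_feats, train_labels):
    """1-nearest neighbour on per-feature-normalised distances."""
    t = np.asarray(train_feats, float)
    scale = t.std(axis=0) + 1e-9
    d = np.linalg.norm((t - feat) / scale, axis=1)
    return train_labels[int(d.argmin())]

def ellipse_mask(a, b, size=64):
    y, x = np.mgrid[:size, :size] - size // 2
    return (x / a) ** 2 + (y / b) ** 2 <= 1

# two reference "eggs" (large grade A, small grade C) and a large test egg
train = [shape_features(ellipse_mask(20, 14)), shape_features(ellipse_mask(12, 9))]
label = classify(shape_features(ellipse_mask(19, 13)), train, ["A", "C"])
print(label)   # -> A (closer to the large reference egg)
```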
Component-based target recognition inspired by human vision
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Agyepong, Kwabena
2009-05-01
In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars in a picture (with no further information), you may build a virtual image in your mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates this human recognition process. The component templates (equivalent to the virtual image in mind) of the target (a car) are manually decomposed from the target feature image. Meanwhile, the edges of the test image are extracted using a difference-of-Gaussian (DOG) model that simulates the spatiotemporal response of the visual process. A phase correlation matching algorithm is then applied to match the templates with the test edge image. If all key component templates match the examined object, the object is recognized as the target. Besides recognition accuracy, we also investigate whether the method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
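The matching step can be illustrated with a standard phase-correlation routine: the normalised cross-power spectrum of a component template and a test image peaks at the template's displacement. The synthetic block below stands in for a real component edge template.

```python
import numpy as np

def phase_correlation(template, image):
    """Locate the shift of `template` inside same-sized `image` via the
    normalised cross-power spectrum; the correlation surface peaks at the
    displacement, with peak height as a match-strength score."""
    Ft, Fi = np.fft.fft2(template), np.fft.fft2(image)
    cross = Fi * np.conj(Ft)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    peak = tuple(int(v) for v in np.unravel_index(corr.argmax(), corr.shape))
    return peak, corr.max()

# a synthetic "component" template and the same component shifted in the scene
template = np.zeros((32, 32)); template[5:9, 12:20] = 1.0
scene = np.roll(np.roll(template, 3, axis=0), 5, axis=1)
peak, strength = phase_correlation(template, scene)
print(peak)   # -> (3, 5)
```

In the paper's setting the inputs would be DOG edge maps, and an object is accepted only when all key component templates report a strong peak.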
GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa
2004-01-01
The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.
An Inquiry-Based Vision Science Activity for Graduate Students and Postdoctoral Research Scientists
NASA Astrophysics Data System (ADS)
Putnam, N. M.; Maness, H. L.; Rossi, E. A.; Hunter, J. J.
2010-12-01
The vision science activity was originally designed for the 2007 Center for Adaptive Optics (CfAO) Summer School. Participants were graduate students, postdoctoral researchers, and professionals studying the basics of adaptive optics. The majority were working in fields outside vision science, mainly astronomy and engineering. The primary goal of the activity was to give participants first-hand experience with the use of a wavefront sensor designed for clinical measurement of the aberrations of the human eye and to demonstrate how the resulting wavefront data generated from these measurements can be used to assess optical quality. A secondary goal was to examine the role wavefront measurements play in the investigation of vision-related scientific questions. In 2008, the activity was expanded to include a new section emphasizing defocus and astigmatism and vision testing/correction in a broad sense. As many of the participants were future post-secondary educators, a final goal of the activity was to highlight the inquiry-based approach as a distinct and effective alternative to traditional laboratory exercises. Participants worked in groups throughout the activity and formative assessment by a facilitator (instructor) was used to ensure that participants made progress toward the content goals. At the close of the activity, participants gave short presentations about their work to the whole group, the major points of which were referenced in a facilitator-led synthesis lecture. We discuss highlights and limitations of the vision science activity in its current format (2008 and 2009 summer schools) and make recommendations for its improvement and adaptation to different audiences.
A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Mcfadyen, Aaron; Mejias, Luis
2016-01-01
This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.
ERIC Educational Resources Information Center
Carnie, Fiona, Ed.
Human Scale Education's 1998 conference addressed the creation of schools and learning experiences to foster in young people the attitudes and skills to shape a fairer and more sustainable world. "Values and Vision in Business and Education" (Anita Roddick) argues that educational curricula must contain the language and action of social…
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2012-01-01
To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.
Can Humans Fly? Action Understanding with Multiple Classes of Actors
2015-06-08
Bozzani, Fiammetta Maria; Griffiths, Ulla Kou; Blanchet, Karl; Schmidt, Elena
2014-02-28
VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress.
Humanoids in Support of Lunar and Planetary Surface Operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier
2006-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair and operation of lunar/planetary habitats, bases and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and the development spirals of Project Constellation. These milestones correspond to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using the small-scale Fujitsu HOAP-2 humanoid is outlined.
Report: Unsupervised identification of malaria parasites using computer vision.
Khan, Najeed Ahmed; Pervaz, Hassan; Latif, Arsalan; Musharaff, Ayesha
2017-01-01
Malaria is a serious and often fatal tropical disease in humans. It is caused by Plasmodium species transmitted by Anopheles mosquitoes. A clinical diagnosis of malaria based on history, symptoms and clinical findings must always be confirmed by laboratory diagnosis, which involves identifying the malaria parasite or its antigens/products in the patient's blood. Manual identification of malaria parasites by pathologists has proven cumbersome, so there is a need for automatic, efficient and accurate identification. In this paper, we propose a computer vision approach to identifying malaria parasites in light microscopy images, and address the challenges involved in the automatic detection of malaria parasite tissues. Our proposed method is pixel-based: we use K-means clustering (an unsupervised approach) for segmentation to identify malaria parasite tissues.
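The segmentation step can be sketched with a plain k-means clustering of pixel colours. The farthest-point seeding and the synthetic stain colours below are illustrative choices, not details from the paper.

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=20):
    """Plain k-means on pixel colours. Deterministic farthest-point
    seeding avoids a degenerate random start on uniform regions."""
    pixels = pixels.astype(float)
    centers = [pixels[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# synthetic smear: pale background pixels and dark purple "parasite" pixels
bg = np.full((200, 3), (220, 200, 200), dtype=float)
parasite = np.full((20, 3), (90, 40, 120), dtype=float)
labels, centers = kmeans_pixels(np.vstack([bg, parasite]))
print(np.bincount(labels))   # one large background cluster, one small parasite cluster
```

On real smears the pixels would come from the stained microscopy image, and the cluster with the stain-like centre colour would be kept as the candidate parasite mask.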
An Asset-Based Approach to Tribal Community Energy Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Rachael A.; Martino, Anthony; Begay, Sandra K.
Community energy planning is a vital component of successful energy resource development and project implementation. Planning can help tribes develop a shared vision and strategies to accomplish their energy goals. This paper explores the benefits of an asset-based approach to tribal community energy planning. While a framework for community energy planning and federal funding already exists, some areas of difficulty in the planning cycle have been identified. This paper focuses on developing a planning framework that offsets those challenges. The asset-based framework described here takes inventory of a tribe's capital assets, such as land capital, human capital, financial capital, and political capital. Such an analysis evaluates how being rich in a specific type of capital can offer a tribe unique advantages in implementing its energy vision. Finally, a tribal case study demonstrates the practical application of an asset-based framework.
Bio-inspired approach for intelligent unattended ground sensors
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre
2015-05-01
Improving surveillance capacity over wide zones requires a set of smart, battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center; only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis (the peripheral vision) and a high-level event-focused analysis (the foveal vision). By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast detector of relevant events. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify a huge amount of heterogeneous data in real time thanks to its natively parallel hardware structure. The UGS prototype validates our system approach in laboratory tests: the peripheral analysis module demonstrates a low false-alarm rate, the foveal vision correctly focuses on the detected events, and a parallel FPGA implementation of the recognition core fulfils the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By processing data locally and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded artificial intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
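The bi-modal pipeline can be sketched as a cheap always-on motion trigger that wakes an event-focused feature extractor. The thresholds and the extracted features below are illustrative stand-ins for the prototype's criteria.

```python
import numpy as np

def peripheral_trigger(prev, curr, diff_thresh=30, area_thresh=20):
    """Cheap always-on motion check (the 'peripheral' stage): wake the
    expensive analysis only when enough pixels changed between frames."""
    moving = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    return int(moving.sum()) >= area_thresh, moving

def foveal_features(frame, mask):
    """Event-focused stage: bounding box and mean intensity of the moving
    blob, stand-ins for the richer texture/colour/shape descriptors fed
    to the recognition core."""
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return box, float(frame[mask].mean())

prev = np.zeros((48, 48), np.uint8)
curr = prev.copy(); curr[10:20, 30:36] = 200        # an intruder appears
fired, mask = peripheral_trigger(prev, curr)
if fired:
    print(foveal_features(curr, mask))   # -> ((30, 10, 35, 19), 200.0)
```

Keeping the trigger this cheap is what makes the battery and electromagnetic-signature budget work: the costly feature extraction and classification run only on the rare frames that fire.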
Classification of Normal and Pathological Gait in Young Children Based on Foot Pressure Data.
Guo, Guodong; Guffey, Keegan; Chen, Wenbin; Pergami, Paola
2017-01-01
Human gait recognition, an active research topic in computer vision, is generally based on data obtained from images/videos. We applied computer vision technology to classify pathology-related changes in gait in young children, using a foot-pressure database collected with the GAITRite walkway system. As foot positioning changes with children's development, we also investigated the possibility of age estimation based on these data. Our results demonstrate that the data collected by the GAITRite system can be used for normal/pathological gait classification. Combining age information and normal/pathological gait classification increases the accuracy of the classifier. This novel approach could support the development of an accurate, real-time, and economic measure of gait abnormalities in children, able to provide important feedback to clinicians regarding the effect of rehabilitation interventions, and to support targeted treatment modifications.
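The idea of combining foot-pressure features with age information can be sketched with a simple nearest-centroid classifier. All feature names and data below are invented for illustration; the paper's actual GAITRite features and classifier are not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_samples(n, mean, age_mean):
    """Hypothetical foot-pressure features (e.g. peak pressure, stance
    time) plus the child's age appended as one extra feature column."""
    feats = rng.normal(mean, 1.0, size=(n, 3))
    ages = rng.normal(age_mean, 0.5, size=(n, 1))
    return np.hstack([feats, ages])

# Illustrative data: normal vs pathological gait with different feature means.
normal = make_samples(30, mean=0.0, age_mean=5.0)
path = make_samples(30, mean=3.0, age_mean=5.0)

def nearest_centroid(class_a, class_b, x):
    """Label x by distance to each class centroid: 0 = normal, 1 = pathological."""
    ca, cb = class_a.mean(axis=0), class_b.mean(axis=0)
    return int(np.linalg.norm(x - cb) < np.linalg.norm(x - ca))

label = nearest_centroid(normal, path, make_samples(1, mean=3.0, age_mean=5.0)[0])
print(label)
```

Appending age as a feature lets the classifier account for developmental changes in foot positioning, which is the point the abstract makes about accuracy gains.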
[The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].
Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei
2015-10-01
Under mesopic conditions, the spectral luminous efficiency function is a family of curves whose peak wavelength and intensity are affected by the light spectrum, background luminance and other factors, so the visibility provided by a light source cannot be characterized by a single optical parameter. In this experiment, the reaction time of visual cognition was used as the evaluation index, and visual cognition was tested by the visual-function method under different speeds and luminous environments. The light sources included high-pressure sodium, an electrodeless fluorescent lamp, and white LEDs at three color temperatures (ranging from 1 958 to 5 537 K). The background luminance values, between 1 and 5 cd·m⁻², are typical of the basic section of highway-tunnel lighting and of general outdoor lighting, and all lie within the mesopic range. The results show that, under the same speed and luminance, the reaction time for a high-color-temperature source is shorter than for a low-color-temperature source, and the reaction time for targets at high speed is shorter than at low speed; at the final moment, however, the visual angle subtended by the target in the observer's visual field was larger at low speed than at high speed. Based on the MOVE model, the mesopic equivalent luminance was calculated for the different emission spectra and background luminances produced by the test sources. Compared with the photopic result, the variation coefficient (CV) of the reaction-time curve plotted against mesopic equivalent luminance is smaller.
Under mesopic conditions, the discrepancy between the equivalent luminance of the different light sources and their photopic luminance is one of the main causes of the differences in visual recognition. The emission-spectrum peak of the GaN chip lies close to the peak wavelength of the photopic luminous efficiency function, so the visual lighting effect of white LEDs at high color temperature is better than at low color temperature or with the electrodeless fluorescent lamp. The visual lighting effect of high-pressure sodium is weak because its spectral peak lies near the characteristic Na lines.
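Mesopic models of this family compute an equivalent luminance as a blend of photopic and scotopic luminance, where the blending coefficient depends on the mesopic level itself and is therefore found by iteration. The sketch below shows only this generic fixed-point structure; the constants a and b are illustrative placeholders, not the published MOVE coefficients:

```python
import math

def mesopic_luminance(L_p, L_s, a=0.77, b=0.33, iters=20):
    """Blend photopic (L_p) and scotopic (L_s) luminance with an
    adaptation coefficient m that depends on the mesopic level, so
    the fixed point is found by iteration. a and b are illustrative
    placeholders, not the published MOVE coefficients."""
    L_mes = L_p  # initial guess
    for _ in range(iters):
        m = min(1.0, max(0.0, a + b * math.log10(max(L_mes, 1e-6))))
        L_mes = m * L_p + (1 - m) * L_s
    return L_mes, m

# Blue-rich source at a low level: scotopic luminance exceeds photopic,
# so the mesopic equivalent luminance lands between the two.
L_mes, m = mesopic_luminance(2.0, 3.0)
print(round(L_mes, 2), round(m, 2))
```

At high luminance the coefficient saturates at m = 1 and the model reduces to ordinary photopic luminance, which matches the intuition that mesopic corrections matter only at the low tunnel-lighting levels studied here.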
Evidence suggests that the estuarine dinoflagellate, Pfiesteria piscicida, and/or morphologically related organisms (Pf-MRO) may release a toxin(s) which kills fish and adversely affects human health. The North Carolina study investigated the potential for persistent health effec...
The challenge of promoting professionalism through medical ethics and humanities education.
Doukas, David J; McCullough, Laurence B; Wear, Stephen; Lehmann, Lisa S; Nixon, Lois LaCivita; Carrese, Joseph A; Shapiro, Johanna F; Green, Michael J; Kirch, Darrell G
2013-11-01
Given recent emphasis on professionalism training in medical schools by accrediting organizations, medical ethics and humanities educators need to develop a comprehensive understanding of this emphasis. To achieve this, the Project to Rebalance and Integrate Medical Education (PRIME) II Workshop (May 2011) enlisted representatives of the three major accreditation organizations to join with a national expert panel of medical educators in ethics, history, literature, and the visual arts. PRIME II faculty engaged in a dialogue on the future of professionalism in medical education. The authors present three overarching themes that resulted from the PRIME II discussions: transformation, question everything, and unity of vision and purpose. The first theme highlights that education toward professionalism requires transformational change, whereby medical ethics and humanities educators would make explicit the centrality of professionalism to the formation of physicians. The second theme emphasizes that the flourishing of professionalism must be based on first addressing the dysfunctional aspects of the current system of health care delivery and financing that undermine the goals of medical education. The third theme focuses on how ethics and humanities educators must have unity of vision and purpose in order to collaborate and identify how their disciplines advance professionalism. These themes should help shape discussions of the future of medical ethics and humanities teaching. The authors argue that improvement of the ethics and humanities-based knowledge, skills, and conduct that fosters professionalism should enhance patient care and be evaluated for its distinctive contributions to educational processes aimed at producing this outcome.
Owls see in stereo much like humans do.
van der Willigen, Robert F
2011-06-10
While 3D experiences through binocular disparity sensitivity have acquired special status in the understanding of human stereo vision, much remains to be learned about how binocularity is put to use in animals. The owl provides an exceptional model to study stereo vision as it displays one of the highest degrees of binocular specialization throughout the animal kingdom. In a series of six behavioral experiments, equivalent to hallmark human psychophysical studies, I compiled an extensive body of stereo performance data from two trained owls. Computer-generated, binocular random-dot patterns were used to ensure pure stereo performance measurements. In all cases, I found that owls perform much like humans do, viz.: (1) disparity alone can evoke figure-ground segmentation; (2) selective use of "relative" rather than "absolute" disparity; (3) hyperacute sensitivity; (4) disparity processing allows for the avoidance of monocular feature detection prior to object recognition; (5) large binocular disparities are not tolerated; (6) disparity guides the perceptual organization of 2D shape. The robustness and very nature of these binocular disparity-based perceptual phenomena bear out that owls, like humans, exploit the third dimension to facilitate early figure-ground segmentation of tangible objects.
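The random-dot stimuli used in such experiments hide a figure that is invisible to either eye alone and defined purely by binocular disparity. A minimal sketch of generating one such left/right pair (sizes and disparity are arbitrary choices, not the study's parameters):

```python
import numpy as np

def random_dot_pair(size=64, disparity=4, seed=1):
    """Left/right random-dot images: a central square patch is shifted
    horizontally by `disparity` pixels in the right eye's image. Each
    image alone is pure noise; binocularly, the square stands out in depth."""
    rng = np.random.default_rng(seed)
    left = rng.integers(0, 2, size=(size, size))
    right = left.copy()
    q = size // 4
    patch = left[q:3*q, q:3*q]
    # Shift the patch and refill the uncovered strip with fresh random dots.
    right[q:3*q, q+disparity:3*q+disparity] = patch
    right[q:3*q, q:q+disparity] = rng.integers(0, 2, size=(2*q, disparity))
    return left, right

left, right = random_dot_pair()
# Monocularly each image is noise; the interocular difference encodes the shift.
print(left.shape, bool((left != right).any()))
```

This is why the abstract stresses that such patterns "ensure pure stereo performance measurements": there is no monocular contour for the owl (or human) to exploit.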
The Road to Certainty and Back.
Westheimer, Gerald
2016-10-14
The author relates his intellectual journey from eye-testing clinician to experimental vision scientist. Starting with the quest for underpinning in physics and physiology of vague clinical propositions and of psychology's acceptance of thresholds as "fuzzy-edged," and a long career pursuing a reductionist agenda in empirical vision science, his journey led to the realization that the full understanding of human vision cannot proceed without factoring in an observer's awareness, with its attendant uncertainty and open-endedness. He finds support in the loss of completeness, finality, and certainty revealed in fundamental twentieth-century formulations of mathematics and physics. Just as biology prospered with the introduction of the emergent, nonreductionist concepts of evolution, vision science has to become comfortable accepting data and receiving guidance from human observers' conscious visual experience.
Online Graph Completion: Multivariate Signal Recovery in Computer Vision.
Kim, Won Hwa; Jalal, Mona; Hwang, Seongjae; Johnson, Sterling C; Singh, Vikas
2017-07-01
The adoption of "human-in-the-loop" paradigms in computer vision and machine learning is leading to various applications where the actual data acquisition (e.g., human supervision) and the underlying inference algorithms are closely intertwined. While classical work in active learning provides effective solutions when the learning module involves classification and regression tasks, many practical issues such as partially observed measurements, financial constraints and even additional distributional or structural aspects of the data typically fall outside the scope of this treatment. For instance, with sequential acquisition of partial measurements of data that manifest as a matrix (or tensor), novel strategies for completion (or collaborative filtering) of the remaining entries have only been studied recently. Motivated by vision problems where we seek to annotate a large dataset of images via a crowdsourced platform or, alternatively, to complement results from a state-of-the-art object detector using human feedback, we study the "completion" problem defined on graphs, where requests for additional measurements must be made sequentially. We design the optimization model in the Fourier domain of the graph, describing how ideas based on adaptive submodularity provide algorithms that work well in practice. On a large set of images collected from Imgur, we see promising results on images that are otherwise difficult to categorize. We also show applications to an experimental design problem in neuroimaging.
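The "Fourier domain of the graph" referred to here is the eigenbasis of the graph Laplacian: eigenvalues play the role of frequencies, and signals that vary smoothly over the graph concentrate their energy in the low-frequency components, which is what makes recovery from partial measurements feasible. A minimal sketch on a 5-node path graph (not the paper's model or data):

```python
import numpy as np

# Path graph on 5 nodes: adjacency, degree, combinatorial Laplacian.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis = Laplacian eigenvectors; small eigenvalues
# correspond to smooth variation over the graph.
evals, evecs = np.linalg.eigh(L)

def gft(x):
    return evecs.T @ x   # forward graph Fourier transform

def igft(xh):
    return evecs @ xh    # inverse graph Fourier transform

x = np.array([1.0, 1.1, 0.9, 1.0, 1.05])   # smooth signal on the graph
xh = gft(x)
# A smooth signal concentrates its energy in the low "frequencies".
low_energy = float(np.sum(xh[:2]**2) / np.sum(xh**2))
print(round(low_energy, 3))
```

The adaptive-submodularity part of the paper then decides which node to query next; the sketch above only shows the spectral representation that the recovery model is built on.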
ERIC Educational Resources Information Center
Marshak, David
This book describes the needs and potentials of children and youth from birth to age 21, based on a holistic understanding of what human beings are and can become. The description is based on the insights of three early twentieth-century spiritual teachers--Rudolf Steiner, Aurobindo Ghose, and Inayat Khan--whose works, the book claims, articulate…
Jacobson, Samuel G.; Cideciyan, Artur V.; Peshenko, Igor V.; Sumaroka, Alexander; Olshevskaya, Elena V.; Cao, Lihui; Schwartz, Sharon B.; Roman, Alejandro J.; Olivares, Melani B.; Sadigh, Sam; Yau, King-Wai; Heon, Elise; Stone, Edwin M.; Dizhoor, Alexander M.
2013-01-01
The GUCY2D gene encodes retinal membrane guanylyl cyclase (RetGC1), a key component of the phototransduction machinery in photoreceptors. Mutations in GUCY2D cause Leber congenital amaurosis type 1 (LCA1), an autosomal recessive human retinal blinding disease. The effects of RetGC1 deficiency on human rod and cone photoreceptor structure and function are currently unknown. To move LCA1 closer to clinical trials, we characterized a cohort of patients (ages 6 months—37 years) with GUCY2D mutations. In vivo analyses of retinal architecture indicated intact rod photoreceptors in all patients but abnormalities in foveal cones. By functional phenotype, there were patients with and those without detectable cone vision. Rod vision could be retained and did not correlate with the extent of cone vision or age. In patients without cone vision, rod vision functioned unsaturated under bright ambient illumination. In vitro analyses of the mutant alleles showed that in addition to the major truncation of the essential catalytic domain in RetGC1, some missense mutations in LCA1 patients result in a severe loss of function by inactivating its catalytic activity and/or ability to interact with the activator proteins, GCAPs. The differences in rod sensitivities among patients were not explained by the biochemical properties of the mutants. However, the RetGC1 mutant alleles with remaining biochemical activity in vitro were associated with retained cone vision in vivo. We postulate a relationship between the level of RetGC1 activity and the degree of cone vision abnormality, and argue for cone function being the efficacy outcome in clinical trials of gene augmentation therapy in LCA1. PMID:23035049
2014-01-01
Background VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. Methods All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. Results During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Conclusion Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress. PMID:24575919
Perception-based synthetic cueing for night vision device rotorcraft hover operations
NASA Astrophysics Data System (ADS)
Bachelder, Edward N.; McRuer, Duane
2002-08-01
Helicopter flight using night-vision devices (NVDs) is difficult to perform, as evidenced by the high accident rate associated with NVD flight compared to day operation. The approach proposed in this paper is to augment the NVD image with synthetic cueing, whereby the cues would emulate position and motion and appear to be actually occurring in physical space on which they are overlaid. Synthetic cues allow for selective enhancement of perceptual state gains to match the task requirements. A hover cue set was developed based on an analogue of a physical target used in a flight handling qualities tracking task, a perceptual task analysis for hover, and fundamentals of human spatial perception. The display was implemented on a simulation environment, constructed using a virtual reality device, an ultrasound head-tracker, and a fixed-base helicopter simulator. Seven highly trained helicopter pilots were used as experimental subjects and tasked to maintain hover in the presence of aircraft positional disturbances while viewing a synthesized NVD environment and the experimental hover cues. Significant performance improvements were observed when using synthetic cue augmentation. This paper demonstrates that artificial magnification of perceptual states through synthetic cueing can be an effective method of improving night-vision helicopter hover operations.
Oguntosin, Victoria W; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J; Kawamura, Sadao; Hayashi, Yoshikatsu
2017-01-01
We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by the Soft Modules (EAsoftM). Integrating the 3D-printed exoskeleton with passive joints to compensate gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion by using a vision-based control law. The EAsoftM can support the reaching motion with compliance realized by the soft materials and pneumatic actuation. In addition, the vision-based control law has been proposed for precise control over the target reaching motion at the millimeter scale. Aiming at rehabilitation exercise for individuals, soft actuators have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM presents one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. The proposed integrated system will be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems are required that can be customized for individuals with specific motor impairments.
Newell, Ben R
2005-01-01
The appeal of simple algorithms that take account of both the constraints of human cognitive capacity and the structure of environments has been an enduring theme in cognitive science. A novel version of such a boundedly rational perspective views the mind as containing an 'adaptive toolbox' of specialized cognitive heuristics suited to different problems. Although intuitively appealing, when this version was proposed, empirical evidence for the use of such heuristics was scant. I argue that in the light of empirical studies carried out since then, it is time this 'vision of rationality' was revised. An alternative view based on integrative models rather than collections of heuristics is proposed.
NASA Technical Reports Server (NTRS)
Taylor, J. H.
1973-01-01
Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.
Spatial vision in older adults: perceptual changes and neural bases.
McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N
2018-05-17
The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments.
Semiautonomous teleoperation system with vision guidance
NASA Astrophysics Data System (ADS)
Yu, Wai; Pretlove, John R. G.
1998-12-01
This paper describes the ongoing research work on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots always suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. Thus, this system has been developed for this purpose. It also serves as an experimental platform to test the idea of combining human and computer intelligence in teleoperation and finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system and a graphical user interface that connects the operator to the remote robot. The system description is given in this paper, as well as preliminary experimental results of the system evaluation.
Adaptive design lessons from professional architects
NASA Astrophysics Data System (ADS)
Geiger, Ray W.; Snell, J. T.
1993-09-01
Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. In pursuing better ways to design cellular automata and qualification classifiers in a design process, we have found surprising success in exploring certain more esoteric approaches, e.g., the vision of interdisciplinary artistic discovery in and around creative problem solving. Our evaluation in research into vision and hybrid sensory systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those very qualitative and quantitative parameters of engineering design, particularly as it relates to multi-faceted and future-oriented design practice. Discussion covers areas of case-based techniques of cognitive ergonomics, natural modeling sources, and an open architectural process of means/goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Algorithms and architectures for robot vision
NASA Technical Reports Server (NTRS)
Schenker, Paul S.
1990-01-01
The scope of the current work is to develop practical sensing implementations for robots operating in complex, partially unstructured environments. A focus in this work is to develop object models and estimation techniques which are specific to the requirements of robot locomotion, approach and avoidance, and grasp and manipulation. Such problems have to date received limited attention in either computer or human vision - in essence, asking not only how perception is in general modeled, but also what is the functional purpose of its underlying representations. As in the past, researchers are drawing on ideas from both the psychological and machine vision literature. Of particular interest is the development of 3-D shape and motion estimates for complex objects when given only partial and uncertain information and when such information is incrementally accrued over time. Current studies consider the use of surface motion, contour, and texture information, with the longer-range goal of developing a fused sensing strategy based on these sources and others.
The color-vision approach to emotional space: cortical evoked potential data.
Boucsein, W; Schaefer, F; Sokolov, E N; Schröder, C; Furedy, J J
2001-01-01
A framework for accounting for emotional phenomena proposed by Sokolov and Boucsein (2000) employs conceptual dimensions that parallel those of hue, brightness, and saturation in color vision. The approach, which employs the concepts of emotional quality, intensity, and saturation, has been supported by psychophysical emotional scaling data gathered from a few trained observers. We report cortical evoked potential data obtained during the change between different emotions expressed in schematic faces. Twenty-five subjects (13 male, 12 female) were presented with a positive, a negative, and a neutral computer-generated face with random interstimulus intervals in a within-subjects design, together with four meaningful and four meaningless control stimuli made up from the same elements. Frontal, central, parietal, and temporal ERPs were recorded from each hemisphere. Statistically significant outcomes in the P300 and N200 range support the potential fruitfulness of the proposed color-vision-model-based approach to human emotional space.
ERIC Educational Resources Information Center
Wilkinson, John
2013-01-01
Humans have always had the vision to one day live on other planets. This vision existed even before the first person was put into orbit. Since the early space missions of putting humans into orbit around Earth, many advances have been made in space technology. We have now sent many space probes deep into the Solar system to explore the planets and…
Using "Competing Visions of Human Rights" in an International IB World School
ERIC Educational Resources Information Center
Tolley, William J.
2013-01-01
William Tolley, a teaching fellow with the Choices Program, is the Learning and Innovation Coach and head of history at the International School of Curitiba, Brazil (IB). He writes in this article that he has found that the "Competing Visions of Human Rights" teaching unit, developed by Brown University's Choices Program, provides a…
'What' and 'where' in the human brain.
Ungerleider, L G; Haxby, J V
1994-04-01
Multiple visual areas in the cortex of nonhuman primates are organized into two hierarchically organized and functionally specialized processing pathways, a 'ventral stream' for object vision and a 'dorsal stream' for spatial vision. Recent findings from positron emission tomography activation studies have localized these pathways within the human brain, yielding insights into cortical hierarchies, specialization of function, and attentional mechanisms.
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
Abstract--The purpose of this paper is to re-address the vision of human-computer symbiosis as originally expressed by J.C.R. Licklider nearly a half-century ago. We describe this vision, place it in some historical context relating to the evolution of human factors research, and we observe that the field is now in the process of re-invigorating Licklider’s vision. We briefly assess the state of the technology within the context of contemporary theory and practice, and we describe what we regard as this emerging field of neo-symbiosis. We offer some initial thoughts on requirements to define functionality of neo-symbiotic systems and discuss research challenges associated with their development and evaluation.
Benchmarking neuromorphic vision: lessons learnt from computer vision
Tan, Cheston; Lallee, Stephane; Orchard, Garrick
2015-01-01
Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120
Food color is in the eye of the beholder: the role of human trichromatic vision in food evaluation.
Foroni, Francesco; Pergola, Giulio; Rumiati, Raffaella Ida
2016-11-14
Non-human primates evaluate food quality based on the brightness of red and green shades of color, with red signaling higher energy or greater protein content in fruits and leaves. Despite the strong association between food and other sensory modalities, humans, too, estimate critical food features, such as calorie content, from vision. Previous research primarily focused on the effects of color on taste/flavor identification and intensity judgments. However, whether evaluations of perceived calorie content and arousal in humans are biased by color has received comparatively less attention. In this study we showed that the color content of food images predicts the arousal and perceived calorie content reported when viewing food, even when confounding variables were controlled for. Specifically, arousal positively co-varied with red-brightness, while green-brightness was negatively associated with arousal and perceived calorie content. This result holds for a large array of foods comprising natural food, where color likely predicts calorie content, and transformed food, where color is instead poorly diagnostic of energy content. Importantly, this pattern did not emerge with nonfood items. We conclude that in humans visual inspection of food is central to its evaluation and seems to partially engage the same basic system as in non-human primates.
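The reported pattern (arousal rising with red-brightness and falling with green-brightness) is the kind of relationship one would test with a multiple regression of ratings on per-channel brightness. The sketch below uses synthetic data with invented coefficients to show the analysis shape, not the study's actual data or effect sizes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the reported pattern: arousal rises with
# red-brightness and falls with green-brightness (coefficients invented).
n = 200
red = rng.uniform(0, 1, n)
green = rng.uniform(0, 1, n)
arousal = 0.8 * red - 0.5 * green + rng.normal(0, 0.05, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), red, green])
beta, *_ = np.linalg.lstsq(X, arousal, rcond=None)
print([round(float(b), 2) for b in beta])
```

Recovering a positive red coefficient and a negative green coefficient from such a fit is what "color content of food images predicts arousal" amounts to operationally, once confounds are controlled.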
Cherry, M I; Bennett, A T
2001-01-01
Despite major differences between human and avian colour vision, previous studies of cuckoo egg mimicry have used human colour vision (or standards based thereon) to assess colour matching. Using ultraviolet-visible reflectance spectrophotometry (300-700 nm), we measured museum collections of eggs of the red-chested cuckoo and its hosts. The first three principal components explained more than 99% of the variance in spectra, and measures of cuckoo-host egg similarity derived from these transformations were compared with measures of cuckoo-host egg similarity estimated by human observers unaware of the hypotheses we were testing. Monte Carlo methods were used to simulate laying of cuckoo eggs at random in nests. Results showed that host and cuckoo eggs were very highly matched for an ultraviolet versus greenness component, which was not detected by humans. Furthermore, whereas cuckoo and host were dissimilar in achromatic brightness, humans did not detect this difference. Our study thus reveals aspects of cuckoo-host egg colour matching which have hitherto not been described. These results suggest subtleties and complexities in the evolution of host-cuckoo egg mimicry that were not previously suspected. Our results also have the potential to explain the longstanding paradox that some host species accept cuckoo eggs that are non-mimetic to the human eye. PMID:11297172
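The spectral analysis described above, principal components summarizing reflectance spectra measured from 300-700 nm, can be sketched as follows. This is a minimal illustration only: the synthetic spectra, basis shapes, sample count, and noise level are all assumptions standing in for the museum measurements.

```python
import numpy as np

# PCA of reflectance spectra (300-700 nm) via SVD. Synthetic stand-ins
# for measured egg spectra: each "egg" mixes three spectral components.
rng = np.random.default_rng(0)
wavelengths = np.linspace(300, 700, 81)          # 5 nm steps

# Three hypothetical spectral building blocks (UV peak, green peak, flat
# brightness term) -- illustrative, not fitted to real eggshell pigments.
basis = np.stack([
    np.exp(-((wavelengths - 350) / 40) ** 2),    # UV reflectance peak
    np.exp(-((wavelengths - 550) / 60) ** 2),    # green reflectance peak
    np.ones_like(wavelengths),                   # achromatic brightness
])

weights = rng.uniform(0.2, 1.0, size=(120, 3))   # 120 simulated eggs
spectra = weights @ basis + rng.normal(0, 0.005, (120, 81))

# PCA: SVD of the mean-centred spectra gives components in decreasing
# order of explained variance.
centred = spectra - spectra.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
var_top3 = explained[:3].sum()   # fraction explained by the first 3 PCs
```

Because the simulated spectra are low-dimensional mixtures, the first three components capture essentially all the variance, mirroring the ">99%" finding reported in the abstract.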
The Ontology of Vision. The Invisible, Consciousness of Living Matter
Fiorio, Giorgia
2016-01-01
If I close my eyes, the absence of light activates the peripheral cells devoted to the perception of darkness. The awareness of “seeing oneself seeing” is in its essence a thought, one that is internal to the vision and previous to any object of sight. To this amphibious faculty, the “diaphanous color of darkness,” Aristotle assigns the principle of knowledge. “Vision is a whole perceptual system, not a channel of sense.” Functions of vision are interwoven with the texture of human interaction within a terrestrial environment that is in turn contained in the cosmic order. A transitive host within the resonance of an inner-outer environment, the human being is the contact-term between two orders of scale, both bigger and smaller than the individual unity. In the perceptual integrative system of human vision, the convergence-divergence of the corporeal presence and the diffraction of its own appearance is the margin. The sensation of being no longer coincides with the breath of life; it does not seem “real” without the trace of some visible evidence and its simultaneous “sharing”. Without a shadow, without an imprint, the numeric copia of the physical presence inhabits the transient memory of our electronic prostheses. A rudimentary “visuality” replaces tangible experience, dissipating its meaning and the awareness of being alive. Transversal to the civilizations of the ancient world, through different orders of function and status, the anthropomorphic “figuration” of archaic sculpture addresses the margin between Being and Non-Being. Statuary human archetypes are not meant to be visible, but to exist as vehicles of transcendence to outlive the definition of human space-time. The awareness of individual finiteness seals the compulsion to “give body” to an invisible apparition shaping the figuration of an ontogenetic expression of human consciousness.
Subject and object, the term “humanum” fathoms the relationship between matter and its living dimension, “this de facto vision and the ‘there is’ which it contains.” The project reconsiders the dialectic between the terms vision–presence in the contemporary perception of archaic human statuary according to the transcendent meaning of its immaterial legacy. PMID:27014106
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
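The Retinex split of an observed image into reflectance and illumination can be illustrated with a minimal single-scale Retinex in the log domain. This sketch estimates illumination by Gaussian smoothing and is not the sparse-source-separation algorithm of the paper; the sigma value and test image are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=10.0, eps=1e-6):
    """Estimate illumination by Gaussian smoothing, then remove it in
    the log domain, returning log-reflectance."""
    log_img = np.log(image + eps)
    log_illum = np.log(gaussian_filter(image, sigma) + eps)
    return log_img - log_illum

# A constant reflectance lit by a strong left-to-right gradient:
h, w = 64, 64
illumination = np.tile(np.linspace(0.2, 1.0, w), (h, 1))
observed = 0.5 * illumination            # reflectance 0.5 everywhere

recovered = single_scale_retinex(observed)
# Away from the image borders, the illumination gradient is largely
# removed and the recovered log-reflectance is nearly flat.
```

The over-enhancement and restoration issues the abstract mentions arise precisely because simple illumination estimates like this one are either too global or too local; the paper's contribution is a physics-based estimate that keeps the locality of the original theory.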
Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.
Deng, Fucheng; Zhu, Xiaorui; He, Chao
2017-09-13
Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is that of high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting traversable region in the outdoors. In the proposed method, an appearance model based on multivariate Gaussian is quickly constructed from a sample region in the left image adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its ability in the real-time navigation applications.
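The appearance-model step described above can be sketched with a multivariate Gaussian fitted to pixels from a sample region, with new pixels classified by Mahalanobis distance. The colors, sample sizes, and threshold below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Fit a multivariate Gaussian to RGB pixels from a sample "road" region,
# then label pixels as traversable when their squared Mahalanobis
# distance to the model falls under a threshold.
rng = np.random.default_rng(1)

road_mean = np.array([110.0, 105.0, 100.0])         # greyish road colour
road_pixels = rng.normal(road_mean, 8.0, (500, 3))  # sample-region pixels

mu = road_pixels.mean(axis=0)
cov = np.cov(road_pixels, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis_sq(pixels):
    """Squared Mahalanobis distance of each pixel to the road model."""
    d = pixels - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# Classify new pixels: road-like vs grass-like (green, non-traversable).
candidates = np.vstack([
    rng.normal(road_mean, 8.0, (50, 3)),            # more road pixels
    rng.normal([60.0, 160.0, 60.0], 8.0, (50, 3)),  # grass pixels
])
traversable = mahalanobis_sq(candidates) < 16.0     # ~4-sigma gate
```

In the paper the sample region is chosen adaptively from the vanishing point and dominant borders; here it is simply given, which is the main simplification of this sketch.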
Lifelong Learning and Vision 2020 in Malaysia.
ERIC Educational Resources Information Center
Leong, Yip Kai
The Malaysian government has adopted the creation of a fully developed economy by the year 2020 as a principal goal, emphasizing that the development should be economic, political, social, spiritual, psychological, and cultural. In order to develop the necessary human resource base to reach this goal, the country must strengthen the teaching of…
Wikipedia and Education: Anarchist Perspectives and Virtual Practices
ERIC Educational Resources Information Center
Jandric, Petar
2010-01-01
This paper explores the philosophical background of one of the most widespread Web-based sources used in contemporary education--Wikipedia. The theoretical part consists of basic notions of anarchist philosophy of education, such as human nature, work, and society. Through Chomsky's prism of visions and goals, it provides the frame for further…
Comparing visual representations across human fMRI and computational vision
Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.
2013-01-01
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
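The representational dissimilarity analysis used above can be sketched as follows: build a dissimilarity matrix over stimuli for the neural patterns and for a model's features, then rank-correlate the two matrices' upper triangles. The data here are random stand-ins for the 60-object set, with the simulated voxel patterns constructed to partly reflect the model features.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stimuli = 60

model_features = rng.normal(size=(n_stimuli, 128))   # e.g. SIFT-like codes
# Simulated voxel patterns: a random projection of the model features
# plus measurement noise, so the two representations partly agree.
neural = (model_features @ rng.normal(size=(128, 40))
          + 5.0 * rng.normal(size=(n_stimuli, 40)))

def rdm(patterns):
    """RDM: 1 - Pearson correlation between every pair of stimulus rows."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(a, b):
    """Spearman rank correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(a.shape[0], k=1)
    ra = np.argsort(np.argsort(a[iu]))   # ranks of pairwise dissimilarities
    rb = np.argsort(np.argsort(b[iu]))
    return np.corrcoef(ra, rb)[0, 1]

score = rdm_similarity(rdm(neural), rdm(model_features))
```

A searchlight analysis repeats this comparison inside every local sphere of voxels; the sketch above is a single such comparison.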
Posture recognition based on fuzzy logic for home monitoring of the elderly.
Brulin, Damien; Benezeth, Yannick; Courtial, Estelle
2012-09-01
We propose in this paper a computer vision-based posture recognition method for home monitoring of the elderly. The proposed system performs human detection prior to the posture analysis; posture recognition is performed only on a human silhouette. The human detection approach has been designed to be robust to different environmental stimuli. Thus, posture is analyzed with simple and efficient features that are not designed to manage constraints related to the environment but only designed to describe human silhouettes. The posture recognition method, based on fuzzy logic, identifies four static postures and is robust to variation in the distance between the camera and the person, and to the person's morphology. With a satisfactory posture recognition accuracy of 74.29%, this approach can detect emergency situations, such as a fall, within a health smart home.
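The fuzzy-logic classification idea can be sketched with trapezoidal membership functions over a simple silhouette feature. The four posture labels, the height/width-ratio feature, and the breakpoints below are all illustrative assumptions, not the paper's calibrated rules.

```python
# Trapezoidal fuzzy membership over a silhouette's height/width ratio.

def trapezoid(x, a, b, c, d):
    """Membership rises from a to b, holds 1 on [b, c], falls c to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

POSTURES = {                 # height/width ratio ranges (assumed values)
    "lying":    (0.0, 0.0, 0.5, 0.8),
    "sitting":  (0.5, 0.8, 1.2, 1.6),
    "bending":  (1.2, 1.6, 2.0, 2.4),
    "standing": (2.0, 2.4, 6.0, 6.0),
}

def classify(height, width):
    """Return the posture with maximal membership, plus all memberships."""
    ratio = height / width
    memberships = {p: trapezoid(ratio, *r) for p, r in POSTURES.items()}
    return max(memberships, key=memberships.get), memberships

label, degrees = classify(height=170, width=60)   # tall, thin silhouette
```

The overlapping trapezoids are what give the method its tolerance to camera distance and body morphology: a borderline silhouette has graded membership in two postures rather than flipping abruptly between them.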
Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder.
Kheradpisheh, Saeed R; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée
2016-01-01
View-invariant object recognition is a challenging problem that has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g., 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best models for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs at a view-invariant object recognition task using the same set of images and controlling the kinds of transformation (position, scale, rotation in plane, and rotation in depth) as well as their magnitude, which we call "variation level." We used four object categories: car, ship, motorcycle, and animal. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs (proposed respectively by Hinton's group and Zisserman's group) on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position (much easier). This suggests that DCNNs would be reasonable models of human feed-forward vision. In addition, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research.
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
Human microbiome science: vision for the future, Bethesda, MD, July 24 to 26, 2013
2014-01-01
A conference entitled ‘Human microbiome science: Vision for the future’ was organized in Bethesda, MD from July 24 to 26, 2013. The event brought together experts in the field of human microbiome research and aimed at providing a comprehensive overview of the state of microbiome research, but more importantly to identify and discuss gaps, challenges and opportunities in this nascent field. This report summarizes the presentations but also describes what is needed for human microbiome research to move forward and deliver medical translational applications.
High-performance lighting evaluated by photobiological parameters.
Rebec, Katja Malovrh; Gunde, Marta Klanjšek
2014-08-10
The human reception of light includes image-forming and non-image-forming effects, which are triggered by the spectral distribution and intensity of light. Ideal lighting is similar to daylight, to which a source can be matched spectrally or by chromaticity. LED-based and CFL-based lighting were analyzed here, selected according to spectral and chromaticity match, respectively. The photobiological effects were expressed as effectiveness for blue light hazard, cirtopic activity, and photopic vision. A good spectral match provides light whose effects are more similar to daylight's than those obtained with a chromaticity match. The new parameters are useful for better evaluation of the complex human responses caused by lighting.
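An action-spectrum-weighted "effectiveness" figure of this kind can be sketched by weighting a lamp's spectral power distribution by a weighting function and normalizing by total power. The Gaussian curves below only roughly approximate the shapes of the photopic and blue-light-hazard weighting functions, and the two LED spectra are invented; real evaluations use the tabulated CIE data.

```python
import numpy as np

wl = np.arange(380, 781, 5.0)                 # wavelength grid, nm

def gaussian(center, width):
    return np.exp(-((wl - center) / width) ** 2)

v_lambda = gaussian(555, 50)                  # ~photopic sensitivity shape
blue_hazard = gaussian(440, 25)               # ~blue-light-hazard shape

def effectiveness(spd, action):
    """Action-weighted output per unit radiant power. On a uniform
    wavelength grid the step size cancels out of the ratio."""
    return np.sum(spd * action) / np.sum(spd)

# Two invented lamp spectra: a blue-pumped cool-white LED and a warmer
# LED with a weaker blue pump.
cool_led = 1.2 * gaussian(450, 15) + gaussian(560, 60)
warm_led = 0.4 * gaussian(450, 15) + gaussian(600, 60)

blue_cool = effectiveness(cool_led, blue_hazard)
blue_warm = effectiveness(warm_led, blue_hazard)
```

As expected, the spectrum with the stronger blue pump scores higher on the blue-light-hazard weighting, which is the kind of comparison the parameters in the abstract are meant to support.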
A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033
NASA Technical Reports Server (NTRS)
Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.
2012-01-01
The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas listed into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033 with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.
Simulation Based Acquisition for NASA's Office of Exploration Systems
NASA Technical Reports Server (NTRS)
Hale, Joe
2004-01-01
In January 2004, President George W. Bush unveiled his vision for NASA to advance U.S. scientific, security, and economic interests through a robust space exploration program. This vision includes the goal to extend human presence across the solar system, starting with a human return to the Moon no later than 2020, in preparation for human exploration of Mars and other destinations. In response to this vision, NASA has created the Office of Exploration Systems (OExS) to develop the innovative technologies, knowledge, and infrastructures to explore and support decisions about human exploration destinations, including the development of a new Crew Exploration Vehicle (CEV). Within the OExS organization, NASA is implementing Simulation Based Acquisition (SBA), a robust Modeling & Simulation (M&S) environment integrated across all acquisition phases and programs/teams, to make the realization of the President's vision more certain. Executed properly, SBA will foster better informed, timelier, and more defensible decisions throughout the acquisition life cycle. By doing so, SBA will improve the quality of NASA systems and speed their development, at less cost and risk than would otherwise be the case. SBA is a comprehensive, Enterprise-wide endeavor that necessitates an evolved culture, a revised spiral acquisition process, and an infrastructure of advanced Information Technology (IT) capabilities. SBA encompasses all project phases (from requirements analysis and concept formulation through design, manufacture, training, and operations), professional disciplines, and activities that can benefit from employing SBA capabilities.
SBA capabilities include: developing and assessing system concepts and designs; planning manufacturing, assembly, transport, and launch; training crews, maintainers, launch personnel, and controllers; planning and monitoring missions; responding to emergencies by evaluating effects and exploring solutions; and communicating across the OExS enterprise, within the Government, and with the general public. The SBA process features empowered collaborative teams (including industry partners) to integrate requirements, acquisition, training, operations, and sustainment. The SBA process also utilizes an increased reliance on and investment in M&S to reduce design risk. SBA originated as a joint Industry and Department of Defense (DoD) initiative to define and integrate an acquisition process that employs robust, collaborative use of M&S technology across acquisition phases and programs. The SBA process was successfully implemented in the Air Force's Joint Strike Fighter (JSF) Program.
Visions for Space Exploration: ILS Issues and Approaches
NASA Technical Reports Server (NTRS)
Watson, Kevin
2005-01-01
This viewgraph presentation reviews some of the logistic issues that the Vision for Space Exploration will entail. There is a review of the vision and the timeline for the return to the moon that will lead to the first human exploration of Mars. The lessons learned from the International Space Station (ISS) and other such missions are also reviewed.
NASA Astrophysics Data System (ADS)
Lauinger, N.
2007-09-01
A better understanding of the color constancy mechanism in human color vision [7] can be reached through analyses of photometric data of all illuminants and patches (Mondrians or other visible objects) involved in visual experiments. In Part I [3] and in [4, 5 and 6] the integration in the human eye of the geometrical-optical imaging hardware and the diffractive-optical hardware has been described and illustrated (Fig.1). This combined hardware represents the main topic of the NAMIROS research project (nano- and micro- 3D gratings for optical sensors) [8] promoted and coordinated by Corrsys 3D Sensors AG. The hardware relevant to (photopic) human color vision can be described as a diffractive or interference-optical correlator transforming incident light into diffractive-optical RGB data and relating local RGB onto global RGB data in the near-field behind the 'inverted' human retina. The relative differences at local/global RGB interference-optical contrasts are available to photoreceptors (cones and rods) only after this optical pre-processing.
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, which is defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in a real-time manner without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding, such as MPEG-4 video coding.
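The pixel-level MAP step can be sketched as follows: each pixel receives the label (background or player) that maximizes prior times Gaussian likelihood. The intensity models, priors, and frame below are illustrative assumptions, not the paper's learned models.

```python
import numpy as np

rng = np.random.default_rng(3)

bg_mean, bg_sd = 40.0, 10.0        # dark background intensity model
fg_mean, fg_sd = 150.0, 20.0       # brighter player intensity model
log_prior_bg, log_prior_fg = np.log(0.7), np.log(0.3)

def log_gauss(x, mean, sd):
    """Log of a univariate Gaussian density, evaluated elementwise."""
    return -0.5 * ((x - mean) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

def map_label(pixels):
    """1 where the player posterior beats the background posterior."""
    post_bg = log_gauss(pixels, bg_mean, bg_sd) + log_prior_bg
    post_fg = log_gauss(pixels, fg_mean, fg_sd) + log_prior_fg
    return (post_fg > post_bg).astype(int)

frame = rng.normal(bg_mean, bg_sd, (48, 64))                # background
frame[10:30, 20:40] = rng.normal(fg_mean, fg_sd, (20, 20))  # "player" blob
mask = map_label(frame)
```

The paper's object-level stage then cleans up such a pixel-wise mask using the spatial and temporal extent of the object; that refinement is omitted here.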
Focal damage to macaque photoreceptors produces persistent visual loss
Strazzeri, Jennifer M.; Hunter, Jennifer J.; Masella, Benjamin D.; Yin, Lu; Fischer, William S.; DiLoreto, David A.; Libby, Richard T.; Williams, David R.; Merigan, William H.
2014-01-01
Insertion of light-gated channels into inner retina neurons restores neural light responses, light evoked potentials, visual optomotor responses and visually-guided maze behavior in mice blinded by retinal degeneration. This method of vision restoration bypasses damaged outer retina, providing stimulation directly to retinal ganglion cells in inner retina. The approach is similar to that of electronic visual prostheses, but may offer some advantages, such as avoidance of complex surgery and direct targeting of many thousands of neurons. However, the promise of this technique for restoring human vision remains uncertain because rodent animal models, in which it has been largely developed, are not ideal for evaluating visual perception. On the other hand, psychophysical vision studies in macaque can be used to evaluate different approaches to vision restoration in humans. Furthermore, it has not been possible to test vision restoration in macaques, the optimal model for human-like vision, because there has been no macaque model of outer retina degeneration. In this study, we describe development of a macaque model of photoreceptor degeneration that can in future studies be used to test restoration of perception by visual prostheses. Our results show that perceptual deficits caused by focal light damage are restricted to locations at which photoreceptors are damaged, that optical coherence tomography (OCT) can be used to track such lesions, and that adaptive optics retinal imaging, which we recently used for in vivo recording of ganglion cell function, can be used in future studies to examine these lesions. PMID:24316158
Connectionist model-based stereo vision for telerobotics
NASA Technical Reports Server (NTRS)
Hoff, William; Mathis, Donald
1989-01-01
Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
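The parameter-network idea, one unit per discrete parameter combination, with unit activity as hypothesis confidence, can be sketched as Hough-style voting over a (h, m_x, m_y) grid for the plane z = h + m_x·x + m_y·y. The grids, tolerance, and point cloud below are illustrative assumptions; the paper's networks additionally settle via gradient descent on an energy function, which is omitted here.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

# One "unit" per (h, m_x, m_y) combination on coarse parameter grids.
h_vals  = np.linspace(-2, 2, 9)
mx_vals = np.linspace(-1, 1, 9)
my_vals = np.linspace(-1, 1, 9)
activity = np.zeros((9, 9, 9))

# Noisy 3-D points sampled from the plane z = 0.5 + 0.5x - 0.25y.
true = (0.5, 0.5, -0.25)
xy = rng.uniform(-1, 1, (200, 2))
z = true[0] + true[1] * xy[:, 0] + true[2] * xy[:, 1] \
    + rng.normal(0, 0.02, 200)

# Each point raises the activity of every unit whose plane passes near it.
for (i, h), (j, mx), (k, my) in itertools.product(
        enumerate(h_vals), enumerate(mx_vals), enumerate(my_vals)):
    residual = np.abs(h + mx * xy[:, 0] + my * xy[:, 1] - z)
    activity[i, j, k] = np.sum(residual < 0.1)   # confidence = vote count

best = np.unravel_index(np.argmax(activity), activity.shape)
estimate = (h_vals[best[0]], mx_vals[best[1]], my_vals[best[2]])
```

The most active unit corresponds to the plane hypothesis best supported by the data, which is how matching ambiguities get resolved against the surface model.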
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve reconstruction precision and to reproduce the surface colors of spectral images more faithfully. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space is superior in performance to that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human eye vision is achieved.
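The basic reconstruction setting, recovering a full reflectance spectrum from a few channel responses by constraining it to a principal-component space learnt from training samples, can be sketched as below. Everything here is synthetic (sensitivities, training "color cards", unweighted PCA, least squares instead of iterative thresholding), so it illustrates the problem setup rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
n_wl = 31                                        # e.g. 400-700 nm, 10 nm steps

# Training reflectances built from 3 smooth spectral basis functions.
wl = np.linspace(0, 1, n_wl)
smooth = np.stack([np.exp(-((wl - c) / 0.25) ** 2) for c in (0.2, 0.5, 0.8)])
train = rng.uniform(0, 1, (60, 3)) @ smooth      # "color chart" samples

# Principal-component space of the training reflectances.
mean_r = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean_r, full_matrices=False)
basis = vt[:3]                                   # top 3 components

sens = rng.uniform(0, 1, (6, n_wl))              # 6-channel sensitivities

def reconstruct(response):
    """Least-squares PC coefficients that reproduce the channel responses,
    mapped back to a full reflectance spectrum."""
    coef, *_ = np.linalg.lstsq(sens @ basis.T, response - sens @ mean_r,
                               rcond=None)
    return mean_r + coef @ basis

truth = rng.uniform(0, 1, 3) @ smooth            # unseen test reflectance
rec = reconstruct(sens @ truth)
err = np.max(np.abs(rec - truth))
```

Because the test reflectance lies in the span of the training basis, reconstruction here is essentially exact; the paper's weighting of the principal components matters precisely when real reflectances fall outside the unweighted space.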
Frontiers in Human Information Processing Conference
2008-02-25
Frontiers in Human Information Processing - Vision, Attention, Memory, and Applications: A Tribute to George Sperling, a Festschrift. AFOSR Grant # FA9550-07-1-0346 provided partial support for the conference, with focus on the formal, computational, and mathematical approaches that unify the areas of vision, attention, and memory.
Sturz, Bradley R; Green, Marshall L; Gaskin, Katherine A; Evans, Alicia C; Graves, April A; Roberts, Jonathan E
2013-02-15
View-based matching theories of orientation suggest that mobile organisms encode a visual memory consisting of a visual panorama from a target location and maneuver to reduce discrepancy between current visual perception and this stored visual memory to return to a location. Recent success of such theories to explain the orientation behavior of insects and birds raises questions regarding the extent to which such an explanation generalizes to other species. In the present study, we attempted to determine the extent to which such view-based matching theories may explain the orientation behavior of a mammalian species (in this case adult humans). We modified a traditional enclosure orientation task so that it involved only the use of the haptic sense. The use of a haptic orientation task to investigate the extent to which view-based matching theories may explain the orientation behavior of adult humans appeared ideal because it provided an opportunity for us to explicitly prohibit the use of vision. Specifically, we trained disoriented and blindfolded human participants to search by touch for a target object hidden in one of four locations marked by distinctive textural cues located on top of four discrete landmarks arranged in a rectangular array. Following training, we removed the distinctive textural cues and probed the extent to which participants learned the geometry of the landmark array. In the absence of vision and the trained textural cues, participants showed evidence that they learned the geometry of the landmark array. Such evidence cannot be explained by an appeal to view-based matching strategies and is consistent with explanations of spatial orientation related to the incidental learning of environmental geometry.
Program Review Report on the College of Dentistry, University of Florida.
ERIC Educational Resources Information Center
Christiansen, Richard
The programs offered by the College of Dentistry at the University of Florida (UF) were reviewed by an outside consultant in order to provide information on the State University System's vision of the college and its mission for Florida, the support base for the program, and current directions and anticipated fiscal and human forces that help…
ERIC Educational Resources Information Center
Zoreda, Margaret Lee; Zoreda-Lozano, Juan Jose
This paper argues that current "postmodern" conditions demand a rethinking of higher education, especially in countries like Mexico. This would involve the rapprochement of science, technology, and the humanities, based on the belief that the chasm separating them, apart from being artificial, is no longer socially viable. Through a…
Modelling Subjectivity in Visual Perception of Orientation for Image Retrieval.
ERIC Educational Resources Information Center
Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.
2003-01-01
Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…
NASA Human Health and Performance Strategy
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.
2012-01-01
In May 2007, what was then the Space Life Sciences Directorate, issued the 2007 Space Life Sciences Strategy for Human Space Exploration. In January 2012, leadership and key directorate personnel were once again brought together to assess the current and expected future environment against its 2007 Strategy and the Agency and Johnson Space Center goals and strategies. The result was a refined vision and mission, and revised goals, objectives, and strategies. One of the first changes implemented was to rename the directorate from Space Life Sciences to Human Health and Performance to better reflect our vision and mission. The most significant change in the directorate from 2007 to the present is the integration of the Human Research Program and Crew Health and Safety activities. Subsequently, the Human Health and Performance Directorate underwent a reorganization to achieve enhanced integration of research and development with operations to better support human spaceflight and International Space Station utilization. These changes also enable a more effective and efficient approach to human system risk mitigation. Since 2007, we have also made significant advances in external collaboration and implementation of new business models within the directorate and the Agency, and through two newly established virtual centers, the NASA Human Health and Performance Center and the Center of Excellence for Collaborative Innovation. Our 2012 Strategy builds upon these successes to address the Agency's increased emphasis on societal relevance and being a leader in research and development and innovative business and communications practices. The 2012 Human Health and Performance Vision is to lead the world in human health and performance innovations for life in space and on Earth. Our mission is to enable optimization of human health and performance throughout all phases of spaceflight. All HHPD functions are ultimately aimed at achieving this mission.
Our activities enable mission success, optimizing human health and productivity in space before, during, and after the actual spaceflight experience of our crews, and include support for ground-based functions. Many of our spaceflight innovations also provide solutions for terrestrial challenges, thereby enhancing life on Earth. Our strategic goals are aimed at leading human exploration and ISS utilization, leading human health and performance internationally, excelling in management and advancement of innovations in health and human system integration, and expanding relevance to life on Earth and creating enduring support and enthusiasm for space exploration.
NASA Human Health and Performance Strategy
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.
2012-01-01
In May 2007, what was then the Space Life Sciences Directorate, issued the 2007 Space Life Sciences Strategy for Human Space Exploration. In January 2012, leadership and key directorate personnel were once again brought together to assess the current and expected future environment against its 2007 Strategy and the Agency and Johnson Space Center goals and strategies. The result was a refined vision and mission, and revised goals, objectives, and strategies. One of the first changes implemented was to rename the directorate from Space Life Sciences to Human Health and Performance to better reflect our vision and mission. The most significant change in the directorate from 2007 to the present is the integration of the Human Research Program and Crew Health and Safety activities. Subsequently, the Human Health and Performance Directorate underwent a reorganization to achieve enhanced integration of research and development with operations to better support human spaceflight and International Space Station utilization. These changes also enable a more effective and efficient approach to human system risk mitigation. Since 2007, we have also made significant advances in external collaboration and implementation of new business models within the directorate and the Agency, and through two newly established virtual centers, the NASA Human Health and Performance Center and the Center of Excellence for Collaborative Innovation. Our 2012 Strategy builds upon these successes to address the Agency's increased emphasis on societal relevance and being a leader in research and development and innovative business and communications practices. The 2012 Human Health and Performance Vision is to lead the world in human health and performance innovations for life in space and on Earth. Our mission is to enable optimization of human health and performance throughout all phases of spaceflight. All HH&P functions are ultimately aimed at achieving this mission. 
Our activities enable mission success, optimizing human health and productivity in space before, during, and after the actual spaceflight experience of our crews, and include support for ground-based functions. Many of our spaceflight innovations also provide solutions for terrestrial challenges, thereby enhancing life on Earth. Our strategic goals are aimed at leading human exploration and ISS utilization, leading human health and performance internationally, excelling in management and advancement of innovations in health and human system integration, and expanding relevance to life on Earth and creating enduring support and enthusiasm for space exploration.
Freedman, Lynn P
2002-01-01
"Rights-based" approaches fold human rights principles into the ongoing work of health policy making and programming. The example of delegation of anesthesia provision for emergency obstetric care is used to demonstrate how a rights-based approach, applied to this problem in the context of high-mortality countries, requires decision makers to shift from an individual, ethics-based, clinical perspective to a structural, rights-based, public health perspective. This fluid and context-sensitive approach to human rights also applies at the international level, where the direction of overall maternal mortality reduction strategy is set. By contrasting family planning programs and maternal mortality programs, this commentary argues for choosing the human rights approach that speaks most effectively to the power dynamics underlying the particular health problem being addressed. In the case of maternal death in high-mortality countries, this means a strategic focus on the health care system itself.
Safe traffic : Vision Zero on the move
DOT National Transportation Integrated Search
2006-03-01
Vision Zero is composed of several basic elements, each of which affects safety in road traffic. These concern ethics, human capability and tolerance, responsibility, scientific facts and a realisation that the different components in the ...
A methodology for coupling a visual enhancement device to human visual attention
NASA Astrophysics Data System (ADS)
Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman
2009-02-01
The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.
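The control loop behind question (1) can be sketched abstractly: detect a fixation from recent gaze samples, then map dwell time to magnification. The control law, thresholds, and function names below are illustrative assumptions for the sake of the sketch, not the published behavior of ReVEx or InVEx:

```python
import math

def zoom_from_dwell(dwell_s, base=1.0, max_zoom=8.0, rate=1.5):
    """One plausible attention-to-zoom control law (an assumption, not the
    paper's): magnification grows exponentially with how long the gaze
    dwells inside a small region, saturating at the device's optical limit."""
    return min(max_zoom, base * math.exp(rate * dwell_s))

def is_fixating(gaze_points, radius=15.0):
    """Crude fixation test: all recent gaze samples lie within `radius`
    pixels of their mean position."""
    n = len(gaze_points)
    mx = sum(x for x, _ in gaze_points) / n
    my = sum(y for _, y in gaze_points) / n
    return all(math.hypot(x - mx, y - my) <= radius for x, y in gaze_points)

# Steady gaze: fixation is detected, and zoom ramps up then saturates.
steady = [(400 + i % 3, 300 - i % 2) for i in range(10)]
assert is_fixating(steady)
assert zoom_from_dwell(0.0) == 1.0
assert zoom_from_dwell(10.0) == 8.0
```

A real device would low-pass filter the gaze signal and add hysteresis so the zoom does not oscillate when attention shifts.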
An operator interface design for a telerobotic inspection system
NASA Technical Reports Server (NTRS)
Kim, Won S.; Tso, Kam S.; Hayati, Samad
1993-01-01
The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotic systems. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability, supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
Color vision test for dichromatic and trichromatic macaque monkeys.
Koida, Kowa; Yokoi, Isao; Okazawa, Gouki; Mikami, Akichika; Widayati, Kanthi Arum; Miyachi, Shigehiro; Komatsu, Hidehiko
2013-11-01
Dichromacy is a color vision defect in which one of the three cone photoreceptors is absent. Individuals with dichromacy are called dichromats (or sometimes "color-blind"), and their color discrimination performance has contributed significantly to our understanding of color vision. Macaque monkeys, which normally have trichromatic color vision that is nearly identical to that of humans, have been used extensively in neurophysiological studies of color vision. In the present study we employed two tests, a pseudoisochromatic color discrimination test and a monochromatic light detection test, to compare the color vision of genetically identified dichromatic macaques (Macaca fascicularis) with that of normal trichromatic macaques. In the color discrimination test, dichromats could not discriminate colors along the protanopic confusion line, though trichromats could. In the light detection test, the relative thresholds for longer wavelength light were higher in the dichromats than the trichromats, indicating that dichromats are less sensitive to longer wavelength light. Because the dichromatic macaque is very rare, the present study provides valuable new information on the color vision behavior of dichromatic macaques, which may be a useful animal model of human dichromacy. The behavioral tests used in the present study have previously been used to characterize the color behaviors of trichromatic as well as dichromatic New World monkeys. The present results show that comparative studies of color vision employing similar tests may be feasible to examine the difference in color behaviors between trichromatic and dichromatic individuals, although the genetic mechanisms of trichromacy/dichromacy are quite different between New World monkeys and macaques.
Recognizing 3-D Objects from 2-D Images Using Structural Knowledge Base of Generic Views
1988-08-31
technical report. [BIE85] I. Biederman, "Human image understanding: Recent research and a theory", Computer Vision, Graphics, and Image Processing, vol... model bases", Technical Report 87-85, COINS Dept., University of Massachusetts, Amherst, MA 01003, August 1987. [BUR87b] Burns, J. B. and L. J. Kitchen, "Recognition in 2D images of 3D objects from large model bases using prediction hierarchies", Proc. IJCAI-10, 1987. [BUR89] J. B. Burns, forthcoming
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
NASA Astrophysics Data System (ADS)
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed with marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, show some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a Principal Component Analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction of the computational costs, with no significant reduction of the tracking accuracy.
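The PCA step described above can be illustrated with a generic sketch: applying PCA to the foreground pixel coordinates of a body silhouette yields a centroid, principal axes, and half-extents that circumscribe the region of interest. This is a minimal NumPy sketch of the general idea, not the authors' exact procedure:

```python
import numpy as np

def pca_region(mask):
    """Circumscribe a silhouette with an oriented region via PCA.

    `mask` is a 2-D boolean array (True = foreground). PCA of the foreground
    pixel coordinates yields the centroid and the principal axes; projecting
    the pixels onto those axes gives the half-extents of an oriented
    bounding region around the body."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axes = eigvecs[:, ::-1].T                # rows: major axis, minor axis
    proj = centered @ axes.T                 # coordinates in the PCA frame
    half_extents = np.abs(proj).max(axis=0)
    return centroid, axes, half_extents

# A simple elongated horizontal blob: the major axis should align with x.
mask = np.zeros((40, 100), dtype=bool)
mask[15:25, 10:90] = True
c, axes, ext = pca_region(mask)
assert ext[0] > ext[1]           # major extent exceeds minor extent
assert abs(axes[0, 0]) > 0.99    # major axis is nearly horizontal
```

In a full pipeline the region found this way would then be handed to the Gauss-Laguerre point tracker, restricting the expensive decomposition to the area where the body actually is.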
An interactive framework for acquiring vision models of 3-D objects from 2-D images.
Motai, Yuichi; Kak, Avinash
2004-02-01
This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. As another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and objects containing curved features.
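The epipolar-line assistance described in the first principle rests on a standard relation: for a stereo pair with fundamental matrix F, a point x in the left image constrains its match in the right image to the line l' = Fx, which can then be superimposed on the display. A minimal sketch, using an illustrative F for a rectified, pure-horizontal-translation rig (the paper's actual calibration is not given):

```python
import numpy as np

def epipolar_line(F, x_left):
    """Epipolar line l' = F @ x in the right image for a left-image point.

    F is the 3x3 fundamental matrix; x_left is a pixel (u, v). The returned
    (a, b, c) satisfies a*u' + b*v' + c = 0 for every right-image point that
    can correspond to x_left; drawing this line is the visual aid the paper
    describes. Normalizing so (a, b) is a unit vector makes distances easy."""
    x = np.array([x_left[0], x_left[1], 1.0])
    line = F @ x
    return line / np.hypot(line[0], line[1])

def point_line_distance(line, pt):
    """Signed distance from a right-image point to a normalized line."""
    return line[0] * pt[0] + line[1] * pt[1] + line[2]

# For a rectified rig translated along x, F = [t]_x with t = (1, 0, 0),
# and epipolar lines are horizontal scanlines.
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
l = epipolar_line(F, (120.0, 80.0))
assert abs(point_line_distance(l, (300.0, 80.0))) < 1e-9  # same scanline
```

In an interactive tool, the distance function doubles as the consistency check of principle 2: a human-marked match far from the epipolar line can be flagged before it corrupts the model.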
NASA Technical Reports Server (NTRS)
Prinzel, L.J.; Kramer, L.J.
2009-01-01
A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.
Establishing a shared vision in your organization. Winning strategies to empower your team members.
Rinke, W J
1989-01-01
Today's health-care climate demands that you manage your human resources more effectively. Meeting the dual challenges of providing more with less requires that you tap the vast hidden resources that reside in every one of your team members. Harnessing these untapped energies requires that all of your employees clearly understand the purpose, direction, and the desired future state of your laboratory. Once this image is widely shared, your team members will know their roles in the organization and the contributions they can make to attaining the organization's vision. This shared vision empowers people and enhances their self-esteem as they recognize they are accomplishing a worthy goal. You can create and install a shared vision in your laboratory by adhering to a five-step process. The result will be a unity of purpose that will release the untapped human resources in your organization so that you can do more with less.
B. F. Skinner's Utopian Vision: Behind and Beyond Walden Two
Altus, Deborah E; Morris, Edward K
2009-01-01
This paper addresses B. F. Skinner's utopian vision for enhancing social justice and human well-being in his 1948 novel, Walden Two. In the first part, we situate the book in its historical, intellectual, and social context of the utopian genre, address critiques of the book's premises and practices, and discuss the fate of intentional communities patterned on the book. The central point here is that Skinner's utopian vision was not any of Walden Two's practices, except one: the use of empirical methods to search for and discover practices that worked. In the second part, we describe practices in Skinner's book that advance social justice and human well-being under the themes of health, wealth, and wisdom, and then show how the subsequent literature in applied behavior analysis supports Skinner's prescience. Applied behavior analysis is a measure of the success of Skinner's utopian vision: to experiment. PMID:22478531
Intelligent Vision On The SM90 Mini-Computer Basis And Applications
NASA Astrophysics Data System (ADS)
Hawryszkiw, J.
1985-02-01
A distinction has to be made between image processing and vision. Image processing finds its roots in the strong tradition of linear signal processing and promotes geometrical transform techniques such as filtering, compression, and restoration. Its purpose is to transform an image so that a human observer can easily extract information significant to him, for example edges after a gradient operator, or a specific direction after a directional filtering operation. Image processing consists, in fact, of a set of local or global space-time transforms; the interpretation of the final image is done by the human observer. The purpose of vision, by contrast, is to extract the semantic content of the image. The machine can then understand that content and run a process of decision, which turns into an action. Thus, intelligent vision depends on image processing, pattern recognition, and artificial intelligence.
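The abstract's example of "edges after a gradient operator" can be made concrete with a small sketch: convolve the image with Sobel kernels and take the gradient magnitude, a transform whose output is still left to the human observer to interpret. A minimal NumPy version, unrelated to the SM90 implementation itself:

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient-magnitude image via 3x3 Sobel kernels (the classic
    'edges after a gradient operator' transform). `img` is a 2-D float
    array; borders are handled by zero padding."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)   # horizontal gradient
            gy[i, j] = np.sum(window * ky)   # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge: the strongest response sits on the step columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_gradient_magnitude(img)
assert mag[4, 3] > 0 and mag[4, 4] > 0   # strong response at the edge
assert mag[4, 0] == 0                    # flat region far from the edge
```

The point of the distinction stands out in the code: nothing here decides what the edges mean; that semantic step is what the abstract reserves for vision.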
Kriegeskorte, Nikolaus
2015-11-24
Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
Improvement in vision: a new goal for treatment of hereditary retinal degenerations
Jacobson, Samuel G; Cideciyan, Artur V; Aguirre, Gustavo D; Roman, Alejandro J; Sumaroka, Alexander; Hauswirth, William W; Palczewski, Krzysztof
2015-01-01
Introduction: Inherited retinal degenerations (IRDs) have long been considered untreatable and incurable. Recently, one form of early-onset autosomal recessive IRD, Leber congenital amaurosis (LCA) caused by mutations in RPE65 (retinal pigment epithelium-specific protein 65 kDa) gene, has responded with some improvement of vision to gene augmentation therapy and oral retinoid administration. This early success now requires refinement of such therapeutics to fully realize the impact of these major scientific and clinical advances. Areas covered: Progress toward human therapy for RPE65-LCA is detailed from the understanding of molecular mechanisms to preclinical proof-of-concept research to clinical trials. Unexpected positive and complicating results in the patients receiving treatment are explained. Logical next steps to advance the clinical value of the therapeutics are suggested. Expert opinion: The first molecularly based early-phase therapies for an IRD are remarkably successful in that vision has improved and adverse events are mainly associated with surgical delivery to the subretinal space. Yet, there are features of the gene augmentation therapeutic response, such as slowed kinetics of night vision, lack of foveal cone function improvement and relentlessly progressive retinal degeneration despite therapy, that still require research attention. PMID:26246977
On Optimizing H.264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics
NASA Astrophysics Data System (ADS)
Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi
2010-12-01
The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved from two aspects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on the existing physiological and psychological research findings of human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS) such as contrast sensitivity, multichannel theory, and masking effect. Experiments are conducted, and experimental results show that the improved algorithm can simultaneously enhance the overall subjective visual quality and improve the rate control precision effectively.
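The quadratic R-D model that the algorithm modifies relates texture bits to the frame's mean absolute difference (MAD) and quantization step Q as R = c1·MAD/Q + c2·MAD/Q². A rate controller inverts this relation to pick Q for a frame's bit budget. The sketch below shows that inversion with illustrative coefficient values; it is the baseline JVT-G012-style model, not the paper's modified version:

```python
import math

def qstep_from_bits(target_bits, mad, c1=1.0, c2=0.5, header_bits=0.0):
    """Solve the quadratic R-D model for the quantization step.

    Model: target_bits - header_bits = c1*MAD/Q + c2*MAD/Q^2.
    Multiplying by Q^2 gives T*Q^2 - c1*MAD*Q - c2*MAD = 0, solved with
    the quadratic formula (positive root). In a real encoder c1 and c2
    are re-estimated frame to frame; the defaults here are illustrative."""
    t = target_bits - header_bits
    if t <= 0:
        raise ValueError("no texture bit budget left")
    disc = (c1 * mad) ** 2 + 4.0 * t * c2 * mad
    return (c1 * mad + math.sqrt(disc)) / (2.0 * t)

def bits_from_qstep(qstep, mad, c1=1.0, c2=0.5):
    """Forward model: predicted texture bits for a given quantization step."""
    return c1 * mad / qstep + c2 * mad / qstep ** 2

# Round-trip check: the step solved for a budget reproduces that budget.
q = qstep_from_bits(2000.0, mad=8.0)
assert q > 0
assert abs(bits_from_qstep(q, mad=8.0) - 2000.0) < 1e-6
```

The HVS-based optimization the paper adds would then perturb the bit budget per region (more bits where contrast sensitivity is high, fewer where masking hides distortion) before this inversion is applied.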
NASA Astrophysics Data System (ADS)
Presti, Giovambattista; Messina, Concetta; Mongelli, Francesca; Sireci, Maria Josè; Collotta, Mario
2017-11-01
Relational Frame Theory (RFT) is a post-Skinnerian theory of language and cognition based on more than thirty years of basic and applied research. It defines language and cognitive skills as an operant repertoire of responses to arbitrarily related stimuli that is specific, as far as is now known, to the human species. RFT has proved useful in addressing cognitive barriers to human action in psychotherapy and in improving children's skills in reading, IQ testing, and metaphoric and categorical repertoires. We present a frame of action in which RFT can be used in programming software to help autistic children develop cognitive skills within a developmental vision.
Utilization of the Space Vision System as an Augmented Reality System For Mission Operations
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles
2003-01-01
Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. 
It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.
Leduc, Nicolas; Atallah, Vincent; Escarmant, Patrick; Vinh-Hung, Vincent
2016-09-08
Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in-house-made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real-time submillimetric image resolution at 4 m. Using OpenCV libraries, we optically tracked high-contrast markers set on patients' breasts. Validation of spatial and temporal accuracy was performed on a mechanical phantom and on human breasts. Madibreast was able to track marker motion at speeds up to 5 cm/s, at a frame rate of 30 fps, with submillimetric accuracy on the mechanical phantom and on human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion of up to 4 mm for deep-breathing patterns. This low-cost, computer-vision system for real-time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion.
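The core of such a tracker can be sketched without any library-specific calls: threshold each grayscale frame and take the intensity-weighted centroid of the marker blob, which yields sub-pixel positions frame by frame. A single-marker NumPy sketch (the actual device tracks several markers with OpenCV and handles timing and latency, which this omits):

```python
import numpy as np

def marker_centroid(frame, thresh=0.8):
    """Locate one high-contrast marker in a grayscale frame.

    Pixels above `thresh` (as a fraction of the frame maximum) are treated
    as the marker; the intensity-weighted centroid gives a sub-pixel
    position, which is how an optical tracker can reach submillimetric
    accuracy from pixel-level data."""
    mask = frame >= thresh * frame.max()
    ys, xs = np.nonzero(mask)
    weights = frame[ys, xs]
    cx = float(np.average(xs, weights=weights))
    cy = float(np.average(ys, weights=weights))
    return cx, cy

def track(frames, thresh=0.8):
    """Per-frame marker trajectory: a list of (x, y) centroids."""
    return [marker_centroid(f, thresh) for f in frames]

# Synthetic motion: a bright 3x3 marker moving one pixel right per frame.
frames = []
for t in range(3):
    f = np.zeros((20, 20))
    f[9:12, 5 + t:8 + t] = 1.0
    frames.append(f)
traj = track(frames)
assert [round(x) for x, _ in traj] == [6, 7, 8]
assert all(round(y) == 10 for _, y in traj)
```

With a calibrated pixel-to-millimeter scale, the centroid trajectory converts directly into the surface-motion curves the authors compare across breast locations.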
Ghias, Kulsoom; Khan, Kausar S; Ali, Rukhsana; Azfar, Shireen; Ahmed, Rashida
2016-01-01
Objective: Aga Khan University, a private medical college, had a vision of producing physicians who are not only scientifically competent, but also socially sensitive, the latter by exposure of medical students to a broad-based curriculum. The objective of this study was to identify the genesis of broad-based education and its integration into the undergraduate medical education program as the Humanities and Social Sciences (HASS) course. Methods: A qualitative methodology was used for this study. Sources of data included document review and in-depth key informant interviews. Nvivo software was utilized to extract themes. Results: The study revealed the process of operationalization of the institutional vision to produce competent and culturally sensitive physicians. The delay in the establishment of the Faculty of Arts and Sciences, which was expected to take a lead role in the delivery of a broad-based education, led to the development of an innovative HASS course in the medical curriculum. The study also identified availability of faculty and resistance from students as challenges faced in the implementation and evolution of HASS. Conclusions: The description of the journey and viability of integration of HASS into the medical curriculum offers a model to medical colleges seeking ways to produce socially sensitive physicians. PMID:27648038
NASA Astrophysics Data System (ADS)
Thakore, B.; Frierson, T.; Coderre, K.; Lukaszczyk, A.; Karl, A.
2009-04-01
This paper outlines the response of students and young space professionals on the occasion of the 50th anniversary of the first artificial satellite and the 40th anniversary of the Outer Space Treaty. The contribution has been coordinated by the Space Generation Advisory Council (SGAC) in support of the United Nations Programme on Space Applications. It follows consultation of the SGAC community through a series of meetings, online discussions and online surveys. The first two online surveys collected over 750 different visions from the international community, contributed by approximately 276 youth from over 28 countries, and build on previous SGAC policy contributions. A summary of these results was presented as the top 10 visions of today's youth as an invited input to world space leaders gathered at the Symposium on "The future of space exploration: Solutions to earthly problems" held in Boston, USA, from April 12-14, 2007, and at the United Nations Committee on the Peaceful Uses of Outer Space in May 2007. These key visions suggested extending humanity's reach beyond this planet, both physically and intellectually. They were themed into three main categories: • Improvement of Human Survival Probability - sustained exploration to become a multi-planet species, humans to Mars, new treaty structures to ensure a secure space environment, etc. • Improvement of Human Quality of Life and the Environment - new political systems or astrocracy, benefits of tele-medicine, tele-education, and commercialization of space, new energy and resources such as space solar power, etc. • Improvement of Human Knowledge and Understanding - complete survey of extinct and extant life forms, use of space data for advanced environmental monitoring, etc.
This paper will summarize the outcomes of a further online survey and present key recommendations given by international youth advocates on further steps that could be taken by space agencies and organizations to make the top 10 visions a reality. The online discussions used to engage the youth audience will be recorded and will help to reflect the confidence of the younger generation in these visions. The categories listed above will also be investigated further from the technology, policy and ethical aspects. Recent activities under development to further disseminate the necessary connections between the use of space technology and the solution of global challenges are discussed.
Stephens, Martin L.; Barrow, Craig; Andersen, Melvin E.; Boekelheide, Kim; Carmichael, Paul L.; Holsapple, Michael P.; Lafranconi, Mark
2012-01-01
The U.S. National Research Council (NRC) report on “Toxicity Testing in the 21st Century” calls for a fundamental shift in the way that chemicals are tested for human health effects and evaluated in risk assessments. The new approach would move toward in vitro methods, typically using human cells in a high-throughput context. The in vitro methods would be designed to detect significant perturbations to “toxicity pathways,” i.e., key biological pathways that, when sufficiently perturbed, lead to adverse health outcomes. To explore progress on the report’s implementation, the Human Toxicology Project Consortium hosted a workshop on 9–10 November 2010 in Washington, DC. The Consortium is a coalition of several corporations, a research institute, and a non-governmental organization dedicated to accelerating the implementation of 21st-century toxicology as aligned with the NRC vision. The goal of the workshop was to identify practical and scientific ways to accelerate implementation of the NRC vision. The workshop format consisted of plenary presentations, breakout group discussions, and concluding commentaries. The program faculty was drawn from industry, academia, government, and public interest organizations. Most presentations summarized ongoing efforts to modernize toxicology testing and approaches, each with some overlap with the NRC vision. In light of these efforts, the workshop identified recommendations for accelerating implementation of the NRC vision, including greater strategic coordination and planning across projects (facilitated by a steering group), the development of projects that test the proof of concept for implementation of the NRC vision, and greater outreach and communication across stakeholder communities. PMID:21948868
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
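The "familiar Laplacian pyramid" mentioned in this abstract decomposes an image into band-pass residuals between successive scales. The following is only an illustrative sketch of that idea, using a crude box-filter reduce step rather than the Gaussian filtering of the original formulation; function names are invented here:

```python
import numpy as np

def blur_downsample(img):
    """2x2 box average followed by factor-2 decimation (a crude
    stand-in for the Gaussian 'reduce' step of a pyramid)."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img, shape):
    """Nearest-neighbour expansion back to `shape`."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[: shape[0], : shape[1]]

def laplacian_pyramid(img, levels=3):
    """Each entry holds the band-pass residual between two scales;
    the final entry is the remaining low-pass image."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = blur_downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    """Invert the decomposition by expanding and adding residuals."""
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = upsample(cur, band.shape) + band
    return cur
```

Because each residual is computed against the same upsampled low-pass image used in reconstruction, the decomposition is exactly invertible for even-sized inputs.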
Advanced integrated enhanced vision systems
NASA Astrophysics Data System (ADS)
Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha
2003-09-01
In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
Attention During Natural Vision Warps Semantic Representation Across the Human Brain
Çukur, Tolga; Nishimoto, Shinji; Huth, Alexander G.; Gallant, Jack L.
2013-01-01
Little is known about how attention changes the cortical representation of sensory information in humans. Based on neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue we used functional MRI (fMRI) to measure how semantic representation changes when searching for different object categories in natural movies. We find that many voxels across occipito-temporal and fronto-parietal cortex shift their tuning toward the attended category. These tuning shifts expand the representation of the attended category and of semantically-related but unattended categories, and compress the representation of categories semantically-dissimilar to the target. Attentional warping of semantic representation occurs even when the attended category is not present in the movie, thus the effect is not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision. PMID:23603707
Visual wetness perception based on image color statistics.
Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya
2017-05-01
Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
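As a rough sketch, the "wetness enhancing transformation" described in this abstract (an overall saturation boost combined with a darkening luminance tone change) might look like the following. The parameter values and function name are illustrative assumptions, not those used in the paper:

```python
import colorsys

def wet_transform(rgb_pixels, sat_gain=1.5, tone_gamma=1.6):
    """Hypothetical sketch of a wetness-enhancing transformation:
    boost chromatic saturation and darken midtones via a gamma
    tone curve. Channels are assumed to lie in [0, 1]."""
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        s = min(1.0, s * sat_gain)   # overall saturation enhancement
        v = v ** tone_gamma          # gamma > 1 darkens the image
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out
```

On a mid-tone pixel this both lowers the brightest channel and widens the spread between channels, i.e., the image becomes darker and more saturated, in the direction the abstract associates with perceived wetness.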
Contrast, size, and orientation-invariant target detection in infrared imagery
NASA Astrophysics Data System (ADS)
Zhou, Yi-Tong; Crawshaw, Richard D.
1991-08-01
Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast-, size-, and orientation-invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets by using low-resolution Gabor functions, which resist noise and background clutter effects. Second, it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but are different from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images containing multiple examples of military vehicles with different sizes and brightness in various background scenes and orientations.
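A minimal sketch of the first step, filtering with a low-resolution Gabor function at several orientations and thresholding the response, could look like this. All parameters are illustrative; the paper's full multi-step pipeline (false-target removal, high-resolution verification) is not reproduced:

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
    """Real-valued Gabor filter: a sinusoid windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * xr / wavelength)

def filter_response(img, kernel):
    """Valid-mode 2-D correlation, written out to avoid dependencies."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def detect(img, kernels, thresh):
    """Take the maximum absolute response across orientations for
    rotation tolerance, then threshold to flag candidate targets."""
    resp = np.max([np.abs(filter_response(img, k)) for k in kernels], axis=0)
    return resp > thresh
```

A bright compact blob produces a strong response at some orientation, while flat background produces none, which is the intuition behind using low-resolution Gabor functions for the initial screening.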
Becker, Brian C.; Yang, Sungwook; MacLachlan, Robert A.; Riviere, Cameron N.
2012-01-01
Injecting clot-busting drugs such as t-PA into tiny vessels thinner than a human hair in the eye is a challenging procedure, especially since the vessels lie directly on top of the delicate and easily damaged retina. Various robotic aids have been proposed with the goal of increasing safety by removing tremor and increasing precision with motion scaling. We have developed a fully handheld micromanipulator, Micron, that has demonstrated reduced tremor when cannulating porcine retinal veins in an “open sky” scenario. In this paper, we present work towards handheld robotic cannulation with the goal of vision-based virtual fixtures guiding the tip of the cannula to the vessel. Using a realistic eyeball phantom, we address sclerotomy constraints, eye movement, and non-planar retina. Preliminary results indicate a handheld micromanipulator aided by visual control is a promising solution to retinal vessel occlusion. PMID:24649479
ERIC Educational Resources Information Center
Savelava, Sofia; Savelau, Dmitry; Cary, Marina Bakhnova
2010-01-01
The Earth Charter represents the philosophy and ethics necessary to create a new period of human civilization. Understanding and adoption of this new vision is the most important mission of education for sustainable development (ESD). This article argues that for successful implementation of ESD principles at school, the school education system…
Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-01-01
This book presents the papers given at a conference on artificial intelligence, expert systems and knowledge bases. Topics considered at the conference included automating expert system development, modeling expert systems, causal maps, data covariances, robot vision, image processing, multiprocessors, parallel processing, VLSI structures, man-machine systems, human factors engineering, cognitive decision analysis, natural language, computerized control systems, and cybernetics.
Re-Conceptualization of Scientific Literacy in South Korea for the 21st Century
ERIC Educational Resources Information Center
Choi, Kyunghee; Lee, Hyunju; Shin, Namsoo; Kim, Sung-Won; Krajcik, Joseph
2011-01-01
As the context of human life expands from personal to global, a new vision of scientific literacy is needed. Based on a synthesis of the literature and the findings of an online survey of South Korean and US secondary science teachers, we developed a framework for scientific literacy for South Korea that includes five dimensions: content…
Schools of Hope: Developing Mind and Character in Today's Youth. The Jossey-Bass Education Series.
ERIC Educational Resources Information Center
Heath, Douglas H.
"Schools of Hope" seeks to resurrect the historic, liberally educating vision of arete, or all-around human excellence, as the only proper and realistic goal for preparing today's students for their future. It is based on the assertion that only students who can think creatively will grow, mature mentally, and adapt. From this perspective, schools…
Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation
Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O.; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M.; Vitiello, Nicola
2016-01-01
Vision-based Pose Estimation (VPE) represents a non-invasive solution to allow a smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used from untrained personnel (e.g., a generic caregiver) even in delicate tasks as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator’s hand joints in real-time using only a RGB-D camera, and commands remotely the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, that are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performances. The results demonstrate that, leveraging on our system, the operator was able to directly control volunteers’ hands movements. PMID:26861333
NASA Astrophysics Data System (ADS)
Putnam, Nicole Marie
In order to study the limits of spatial vision in normal human subjects, it is important to look at and near the fovea. The fovea is the specialized part of the retina, the light-sensitive multi-layered neural tissue that lines the inner surface of the human eye, where the cone photoreceptors are smallest (approximately 2.5 microns or 0.5 arcmin) and cone density reaches a peak. In addition, there is a 1:1 mapping from the photoreceptors to the brain in this central region of the retina. As a result, the best spatial sampling is achieved in the fovea and it is the retinal location used for acuity and spatial vision tasks. However, vision is typically limited by the blur induced by the normal optics of the eye and clinical tests of foveal vision and foveal imaging are both limited due to the blur. As a result, it is unclear what the perceptual benefit of extremely high cone density is. Cutting-edge imaging technology, specifically Adaptive Optics Scanning Laser Ophthalmoscopy (AOSLO), can be utilized to remove this blur, zoom in, and as a result visualize individual cone photoreceptors throughout the central fovea. This imaging combined with simultaneous image stabilization and targeted stimulus delivery expands our understanding of both the anatomical structure of the fovea on a microscopic scale and the placement of stimuli within this retinal area during visual tasks. The final step is to investigate the role of temporal variables in spatial vision tasks since the eye is in constant motion even during steady fixation. In order to learn more about the fovea, it becomes important to study the effect of this motion on spatial vision tasks. This dissertation steps through many of these considerations, starting with a model of the foveal cone mosaic imaged with AOSLO. We then use this high resolution imaging to compare anatomical and functional markers of the center of the normal human fovea. 
Finally, we investigate the role of natural and manipulated fixational eye movements in foveal vision, specifically looking at a motion detection task, contrast sensitivity, and image fading.
Alternatives for Future U.S. Space-Launch Capabilities
2006-10-01
A directive issued on January 14, 2004—called the new Vision for Space Exploration (VSE)—set out goals for future exploration of the solar system using manned spacecraft. Among those goals was a proposal to return humans to the moon no later than 2020. The ultimate goal... U.S. launch capacity exclude the Sea Launch system operated by Boeing in partnership with RSC-Energia (based in Moscow), Kvaerner ASA (based in Oslo...
Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.
Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer
2015-11-01
Counting objects is a fundamental image processing primitive with many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even then fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions of humans as possible (under certain intuitively reasonable assumptions that we justify experimentally in our paper); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
Perceptual organization in computer vision - A review and a proposal for a classificatory structure
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1993-01-01
The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high-order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored from four vantage points. First, a brief history of perceptual organization research in both human and computer vision is offered. Second, a classificatory structure in which to cast perceptual organization research is proposed, to clarify both the nomenclature and the relationships among the many contributions. Third, the perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
EnVision+, a new dextran polymer-based signal enhancement technique for in situ hybridization (ISH).
Wiedorn, K H; Goldmann, T; Henne, C; Kühl, H; Vollmer, E
2001-09-01
Seventy paraffin-embedded cervical biopsy specimens and condylomata were tested for the presence of human papillomavirus (HPV) by conventional in situ hybridization (ISH) and ISH with subsequent signal amplification. Signal amplification was performed either by a commercial biotinyl-tyramide-based detection system [GenPoint (GP)] or by the novel two-layer dextran polymer visualization system EnVision+ (EV), in which both EV-horseradish peroxidase (EV-HRP) and EV-alkaline phosphatase (EV-AP) were applied. We could demonstrate for the first time, that EV in combination with preceding ISH results in a considerable increase in signal intensity and sensitivity without loss of specificity compared to conventional ISH. Compared to GP, EV revealed a somewhat lower sensitivity, as measured by determination of the integrated optical density (IOD) of the positively stained cells. However, EV is easier to perform, requires a shorter assay time, and does not raise the background problems that may be encountered with biotinyl-tyramide-based amplification systems. (J Histochem Cytochem 49:1067-1071, 2001)
An analog VLSI chip emulating polarization vision of Octopus retina.
Momeni, Massoud; Titus, Albert H
2006-01-01
Biological systems provide a wealth of information which form the basis for human-made artificial systems. In this work, the visual system of Octopus is investigated and its polarization sensitivity mimicked. While in actual Octopus retina, polarization vision is mainly based on the orthogonal arrangement of its photoreceptors, our implementation uses a birefringent micropolarizer made of YVO4 and mounted on a CMOS chip with neuromorphic circuitry to process linearly polarized light. Arranged in an 8 x 5 array with two photodiodes per pixel, each consuming typically 10 microW, this circuitry mimics both the functionality of individual Octopus retina cells by computing the state of polarization and the interconnection of these cells through a bias-controllable resistive network.
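The per-pixel computation, contrasting two orthogonally oriented detector responses, can be sketched in a few lines. This is a simplified digital model of the quantity the chip computes, not a description of its analog circuitry:

```python
def polarization_from_orthogonal(i_0, i_90):
    """Degree-of-linear-polarization estimate from two orthogonally
    oriented photodetector intensities (the arrangement the Octopus
    retina and the chip's two photodiodes per pixel both use).
    With only two analyzer angles the polarization angle itself is
    ambiguous, so only a contrast-style degree is returned."""
    total = i_0 + i_90
    if total == 0:
        return 0.0
    return abs(i_0 - i_90) / total
```

Fully polarized light aligned with one analyzer gives 1.0; unpolarized light, which excites both detectors equally, gives 0.0.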
NASA Technical Reports Server (NTRS)
Lewandowski, Leon; Struckman, Keith
1994-01-01
Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.
Development and evaluation of vision rehabilitation devices.
Luo, Gang; Peli, Eli
2011-01-01
We have developed a range of vision rehabilitation devices and techniques for people with impaired vision due to either central vision loss or severely restricted peripheral visual field. We have conducted evaluation studies with patients to test the utilities of these techniques in an effort to document their advantages as well as their limitations. Here we describe our work on a visual field expander based on a head mounted display (HMD) for tunnel vision, a vision enhancement device for central vision loss, and a frequency domain JPEG/MPEG based image enhancement technique. All the evaluation studies included visual search paradigms that are suitable for conducting indoor controllable experiments.
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
An Operationally Based Vision Assessment Simulator for Domes
NASA Technical Reports Server (NTRS)
Archdeacon, John; Gaska, James; Timoner, Samson
2012-01-01
The Operational Based Vision Assessment (OBVA) simulator was designed and built by NASA and the United States Air Force (USAF) to provide the Air Force School of Aerospace Medicine (USAFSAM) with a scientific testing laboratory to study human vision and testing standards in an operationally relevant environment. This paper describes the general design objectives and implementation characteristics of the simulator visual system being created to meet these requirements. A key design objective for the OBVA research simulator is to develop a real-time computer image generator (IG) and display subsystem that can display and update at 120 frames per second (design target), or at a minimum, 60 frames per second, with minimal transport delay using commercial off-the-shelf (COTS) technology. There are three key parts of the OBVA simulator that are described in this paper: i) the real-time computer image generator, ii) the various COTS technology used to construct the simulator, and iii) the spherical dome display and real-time distortion correction subsystem. We describe the various issues, possible COTS solutions, and remaining problem areas identified by NASA and the USAF while designing and building the simulator for future vision research. We also describe the critically important relationship of the physical display components, including distortion correction for the dome, consistent with an objective of minimizing latency in the system. The performance of the automatic calibration system used in the dome is also described, along with various recommendations for possible future implementations.
Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.
2017-01-01
Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management. PMID:28338047
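The automated step described above, discriminating warm signatures by temperature, size, and shape, can be caricatured as a threshold followed by a connected-components pass that keeps plausibly sized blobs. The threshold values here are invented for illustration, and the shape test is omitted for brevity:

```python
from collections import deque

def detect_warm_blobs(grid, temp_thresh, min_px, max_px):
    """Sketch of thermal detection: threshold pixels by temperature,
    group them into 4-connected components, and keep components whose
    pixel count is plausible for a single animal."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] >= temp_thresh and not seen[y][x]:
                # flood-fill one connected warm region
                q, comp = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w and
                                not seen[ny][nx] and
                                grid[ny][nx] >= temp_thresh):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if min_px <= len(comp) <= max_px:
                    blobs.append(comp)
    return blobs
```

Size filtering is what lets an approach like this separate individual animals from noise specks, which is part of why it copes with aggregations better than naive thresholding alone.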
Ruiz de Chávez-Guerrero, Manuel Hugo
2014-01-01
Bioethics in Mexico has a history that reveals the vision and ethical commitment of iconic figures in the fields of health sciences and the humanities, leading to the creation of the National Bioethics Commission, responsible for promoting a bioethics culture in Mexico. Its development and consolidation, from the higher perspective of humanism, has aimed to preserve health, life, and the environment, while the bases of ethics and professional practice, seen from different perspectives, have been the building blocks of medical practice.
Recognizing Materials using Perceptually Inspired Features
Sharan, Lavanya; Liu, Ce; Rosenholtz, Ruth; Adelson, Edward H.
2013-01-01
Our world consists not only of objects and scenes but also of materials of various kinds. Being able to recognize the materials that surround us (e.g., plastic, glass, concrete) is important for humans as well as for computer vision systems. Unfortunately, materials have received little attention in the visual recognition literature, and very few computer vision systems have been designed specifically to recognize materials. In this paper, we present a system for recognizing material categories from single images. We propose a set of low and mid-level image features that are based on studies of human material recognition, and we combine these features using an SVM classifier. Our system outperforms a state-of-the-art system [Varma and Zisserman, 2009] on a challenging database of real-world material categories [Sharan et al., 2009]. When the performance of our system is compared directly to that of human observers, humans outperform our system quite easily. However, when we account for the local nature of our image features and the surface properties they measure (e.g., color, texture, local shape), our system rivals human performance. We suggest that future progress in material recognition will come from: (1) a deeper understanding of the role of non-local surface properties (e.g., extended highlights, object identity); and (2) efforts to model such non-local surface properties in images. PMID:23914070
Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hively, Lee M; Sheldon, Frederick T
The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems, which require a long-term commitment to solve the well-known challenges.
Skeleton-based human action recognition using multiple sequence alignment
NASA Astrophysics Data System (ADS)
Ding, Wenwen; Liu, Kai; Cheng, Fei; Zhang, Jin; Li, YunSong
2015-05-01
Human action recognition and analysis has been an active research topic in computer vision for many years. This paper presents a method to represent human actions based on trajectories consisting of 3D joint positions. The method first decomposes an action into a sequence of meaningful atomic actions (actionlets), and then labels the actionlets with letters of the English alphabet according to their Davies-Bouldin index values. An action can therefore be represented as a sequence of actionlet symbols, which preserves the temporal order in which the actionlets occur. Finally, we classify actions by sequence comparison, using a string-matching algorithm (Needleman-Wunsch). The effectiveness of the proposed method is evaluated on datasets captured by commodity depth cameras. Experiments on three challenging 3D action datasets show promising results.
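The final classification step relies on global sequence alignment. A minimal Needleman-Wunsch scorer over actionlet symbol strings might look like the following sketch (the match, mismatch, and gap scores are illustrative assumptions, not values from the paper):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two symbol sequences."""
    n, m = len(a), len(b)
    # score[i][j] = best alignment score of a[:i] against b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag,
                              score[i-1][j] + gap,   # gap in b
                              score[i][j-1] + gap)   # gap in a
    return score[n][m]

# Identical actionlet strings align perfectly; a dropped actionlet costs one gap.
print(needleman_wunsch("ABCD", "ABCD"))  # -> 4
print(needleman_wunsch("ABCD", "ABD"))   # -> 2
```

Classifying an observed action then amounts to aligning its actionlet string against labeled exemplars and picking the best-scoring class.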
Body-Based Gender Recognition Using Images from Visible and Thermal Cameras
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-01
Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487
Exploiting spatio-temporal characteristics of human vision for mobile video applications
NASA Astrophysics Data System (ADS)
Jillani, Rashad; Kalva, Hari
2008-08-01
Video applications on handheld devices such as smart phones pose a significant challenge to achieving a high-quality user experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia applications (e.g., video streaming) for mobile handheld devices. These devices are lightweight and modestly sized, and therefore have very limited resources: lower processing power, smaller display resolution, less memory, and limited battery life compared to desktop and laptop systems. Multimedia applications, on the other hand, have extensive processing requirements, which make them extremely resource-hungry on mobile devices. In addition, device-specific properties (e.g., the display screen) significantly influence human perception of multimedia quality. In this paper we propose a saliency-based framework that exploits the structure in content creation as well as the human vision system to find the salient points in the incoming bitstream and adapt it to the target device, thus improving the quality of the adapted regions around salient points. Our experimental results indicate that an adaptation process that is cognizant of video content and user preferences can produce video of better perceptual quality for mobile devices. Furthermore, we demonstrate how such a framework can affect user experience on a handheld device.
Characterization and Operation of Liquid Crystal Adaptive Optics Phoropter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awwal, A; Bauman, B; Gavel, D
2003-02-05
Adaptive optics (AO), a mature technology developed for astronomy to compensate for the effects of atmospheric turbulence, can also be used to correct the aberrations of the eye. The classic phoropter is used by ophthalmologists and optometrists to estimate and correct the lower-order aberrations of the eye, defocus and astigmatism, in order to derive a vision correction prescription for their patients. An adaptive optics phoropter measures and corrects the aberrations in the human eye using adaptive optics techniques, which are capable of dealing with both the standard low-order aberrations and higher-order aberrations, including coma and spherical aberration. High-order aberrations have been shown to degrade visual performance for clinical subjects in initial investigations. An adaptive optics phoropter has been designed and constructed based on a Shack-Hartmann sensor to measure the aberrations of the eye, and a liquid crystal spatial light modulator to compensate for them. This system should produce near diffraction-limited optical image quality at the retina, which will enable investigation of the psychophysical limits of human vision. This paper describes the characterization and operation of the AO phoropter with results from human subject testing.
2011-11-01
This report (…RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.
Sensorimotor Interactions in the Haptic Perception of Virtual Objects
1997-01-01
Compared to our understanding of vision and audition, our knowledge of human haptic perception is very limited. Many basic … modalities such as vision and audition on haptic perception of viscosity or mass, for example. Some preliminary work has already been done in this area …
NASA Technical Reports Server (NTRS)
Juday, Richard D.; Loshin, David S.
1989-01-01
Image coordinate transformations are investigated for possible use in a low vision aid for human patients. These patients typically have field defects with localized retinal dysfunction, predominantly central (age-related maculopathy) or peripheral (retinitis pigmentosa). Previously, simple eccentricity-only remappings, which do not maintain conformality, were shown. Initial attempts at developing images that hold quasi-conformality after remapping are presented. Although the quasi-conformal images may have less local distortion, there are discontinuities in the image which may contraindicate this type of transformation for the low vision application.
Are dogs red-green colour blind?
Siniscalchi, Marcello; d'Ingeo, Serenella; Fornelli, Serena; Quaranta, Angelo
2017-11-01
Neurobiological and molecular studies suggest a dichromatic colour vision in canine species, which appears to be similar to that of human red-green colour blindness. Here, we show that dogs exhibit a behavioural response similar to that of red-green blind human subjects when tested with a modified version of a test commonly used for the diagnosis of human deuteranopia (i.e. the Ishihara's test). Besides contributing to increasing the knowledge about the perceptual ability of dogs, the present work describes for the first time, to our knowledge, a method that can be used to assess colour vision in the animal kingdom.
Visions of human futures in space and SETI
NASA Astrophysics Data System (ADS)
Wright, Jason T.; Oman-Reagan, Michael P.
2018-04-01
We discuss how visions for the futures of humanity in space and SETI are intertwined, and are shaped by prior work in the fields and by science fiction. This appears in the language used in the fields, and in the sometimes implicit assumptions made in discussions of them. We give examples from articulations of the so-called Fermi Paradox, discussions of the settlement of the Solar System (in the near future) and the Galaxy (in the far future), and METI. We argue that science fiction, especially the campy variety, is a significant contributor to the 'giggle factor' that hinders serious discussion and funding for SETI and Solar System settlement projects. We argue that humanity's long-term future in space will be shaped by our short-term visions for who goes there and how. Because of the way they entered the fields, we recommend avoiding the term 'colony' and its cognates when discussing the settlement of space, as well as other terms with similar pedigrees. We offer examples of science fiction and other writing that broaden and challenge our visions of human futures in space and SETI. In an appendix, we use an analogy with the well-funded and relatively uncontroversial searches for the dark matter particle to argue that SETI's lack of funding in the national science portfolio is primarily a problem of perception, not inherent merit.
Limits of colour vision in dim light.
Kelber, Almut; Lind, Olle
2010-09-01
Humans and most vertebrates have duplex retinae with multiple cone types for colour vision in bright light, and one single rod type for achromatic vision in dim light. Instead of comparing signals from multiple spectral types of photoreceptors, such species use one highly sensitive receptor type thus improving the signal-to-noise ratio at night. However, the nocturnal hawkmoth Deilephila elpenor, the nocturnal bee Xylocopa tranquebarica and the nocturnal gecko Tarentola chazaliae can discriminate colours at extremely dim light intensities. To be able to do so, they sacrifice spatial and temporal resolution in favour of colour vision. We review what is known about colour vision in dim light, and compare colour vision thresholds with the optical sensitivity of the photoreceptors in selected animal species with lens and compound eyes. © 2010 The Authors, Ophthalmic and Physiological Optics © 2010 The College of Optometrists.
Helicopter flights with night-vision goggles: Human factors aspects
NASA Technical Reports Server (NTRS)
Brickner, Michael S.
1989-01-01
Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviators Night-Vision-Imaging System (ANVIS) allows helicopter pilots to perform low-level flight at night. It consists of light intensifier tubes which amplify low-intensity ambient illumination (star and moon light) and an optical system which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena which are described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.
Semantic shape similarity-based contour tracking evaluation
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqin; Luo, Wenhan; Zhao, Li; Li, Wei; Hu, Weiming
2011-10-01
One major problem of contour-based tracking is how to evaluate the accuracy of tracking results, given the nonrigid and deformable nature of contours. We propose a shape context-based evaluation measure that considers the semantic shape similarity between the tracked contour and the ground-truth contour. In addition, a pyramid match kernel is introduced for shape histogram matching, which can effectively deal with contours at different scales. Experimental results demonstrate that, compared to two state-of-the-art evaluation measures, our measure effectively captures local shape information and thus is more consistent with human vision.
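The pyramid match idea, intersecting histograms at progressively coarser resolutions and crediting matches at the finest level where they first appear, can be illustrated with a simplified one-dimensional version. The actual measure operates on shape context histograms, so the following is only a toy sketch with invented data:

```python
def pyramid_match(x, y, levels=4):
    """Toy 1-D pyramid match: histogram-intersect two point sets at
    increasingly coarse bin widths (1, 2, 4, ...), weighting matches
    that first appear at level l by 1 / 2**l."""
    def intersection(width):
        bins_x, bins_y = {}, {}
        for p in x:
            bins_x[p // width] = bins_x.get(p // width, 0) + 1
        for p in y:
            bins_y[p // width] = bins_y.get(p // width, 0) + 1
        return sum(min(c, bins_y.get(b, 0)) for b, c in bins_x.items())

    score, prev = 0.0, 0
    for l in range(levels):
        inter = intersection(2 ** l)
        score += (inter - prev) / 2 ** l  # credit only the new matches
        prev = inter
    return score

# A point set matched against itself scores its own size;
# nearby-but-not-identical points earn a discounted match.
print(pyramid_match([1, 3, 5, 9], [1, 3, 5, 9]))  # -> 4.0
print(pyramid_match([0], [1]))                    # -> 0.5
```

The coarse-level discounting is what lets the measure tolerate scale differences between contours, as the abstract claims for the histogram case.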
Vision 2040: Evolving the Successful International Space University
NASA Technical Reports Server (NTRS)
Martin, Gary; Marti, Izan Peris; Tlustos, Reinhard; Lorente, Arnau Pons; Panerati, Jocopo; Mensink, Wendy; Sorkhabi, Elbruz; Garcia, Oriol Gasquez; Musilova, Michaela; Pearson, Thomas
2015-01-01
Space exploration has always been full of inspiration, innovation, and creativity, with the promise of expanding human civilization beyond Earth. The space sector is currently experiencing rapid change as disruptive technologies, grassroots programs, and new commercial initiatives have reshaped long-standing methods of operation. Throughout the last 28 years, the International Space University (ISU) has been a leading institution for space education, forming international partnerships, and encouraging entrepreneurship in its over 4,000 alumni. In this report, our Vision 2040 team projected the next 25 years of space exploration and analyzed how ISU could remain a leading institution in the rapidly changing industry. Vision 2040 considered five important future scenarios for the space sector: real-time Earth applications, orbital stations, lunar bases, lunar and asteroid mining, and a human presence on Mars. We identified the signals of disruptive change within these scenarios, including underlying driving forces and potential challenges, and derived a set of skills that will be required in the future space industry. Using these skills as a starting point, we proposed strategies in five areas of focus for ISU: the future of the Space Studies Program (SSP), analog missions, outreach, alumni, and startups. We concluded that ISU could become not just an increasingly innovative educational institution, but one that acts as an international organization that drives space commercialization, exploration, innovation, and cooperation.
Early vision and focal attention
NASA Astrophysics Data System (ADS)
Julesz, Bela
1991-07-01
At the thirty-year anniversary of the introduction of the technique of computer-generated random-dot stereograms and random-dot cinematograms into psychology, the impact of the technique on brain research and on the study of artificial intelligence is reviewed. The main finding, that stereoscopic depth perception (stereopsis), motion perception, and preattentive texture discrimination are basically bottom-up processes which occur without the help of the top-down processes of cognition and semantic memory, greatly simplifies the study of these processes of early vision and permits the linking of human perception with monkey neurophysiology. Particularly interesting are the unexpected findings that stereopsis (assumed to be local) is a global process, while texture discrimination (assumed to be a global process, governed by statistics) is local, based on some conspicuous local features (textons). It is shown that the top-down process of "shape (depth) from shading" does not affect stereopsis, and some of the models of machine vision are evaluated. The asymmetry effect of human texture discrimination is discussed, together with recent nonlinear spatial filter models and a novel extension of the texton theory that can cope with the asymmetry problem. This didactic review attempts to introduce the physicist to the field of psychobiology and its problems, including metascientific problems of brain research, problems of scientific creativity, the state of artificial intelligence research (including connectionist neural networks) aimed at modeling brain activity, and the fundamental role of focal attention in mental events.
Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI
Serrano, Miguel Ángel; Gómez-Romero, Juan; Patricio, Miguel Ángel; García, Jesús; Molina, José Manuel
2012-01-01
Recent advances in technologies for capturing video data have opened a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This gap in representation prevents systems from taking advantage of the semantic quality of the information provided by new sensors. This paper advocates the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the novel sensors' knowledge. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body, along with transitive part-based representation and inference, are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for the elaboration of live market research.
Contextualising and Analysing Planetary Rover Image Products through the Web-Based PRoGIS
NASA Astrophysics Data System (ADS)
Morley, Jeremy; Sprinks, James; Muller, Jan-Peter; Tao, Yu; Paar, Gerhard; Huber, Ben; Bauer, Arnold; Willner, Konrad; Traxler, Christoph; Garov, Andrey; Karachevtseva, Irina
2014-05-01
The international planetary science community has launched, landed and operated dozens of human and robotic missions to the planets and the Moon. They have collected various surface imagery that has only been partially utilized for further scientific purposes. The FP7 project PRoViDE (Planetary Robotics Vision Data Exploitation) is assembling a major portion of the imaging data gathered so far from planetary surface missions into a unique database, bringing them into a spatial context and providing access to a complete set of 3D vision products. Processing is complemented by a multi-resolution visualization engine that combines various levels of detail for a seamless and immersive real-time access to dynamically rendered 3D scenes. PRoViDE aims to (1) complete relevant 3D vision processing of planetary surface missions, such as Surveyor, Viking, Pathfinder, MER, MSL, Phoenix, Huygens, and Lunar ground-level imagery from Apollo, Russian Lunokhod and selected Luna missions, (2) provide highest resolution & accuracy remote sensing (orbital) vision data processing results for these sites to embed the robotic imagery and its products into spatial planetary context, (3) collect 3D vision processing and remote sensing products within a single coherent spatial database, (4) realise seamless fusion between orbital and ground vision data, (5) demonstrate the potential of planetary surface vision data by maximising image quality visualisation in a 3D publishing platform, (6) collect and formulate use cases for novel scientific application scenarios exploiting the newly introduced spatial relationships and presentation, (7) demonstrate the concepts for MSL, (8) realize on-line dissemination of key data & its presentation by a web-based GIS and rendering tool named PRoGIS (Planetary Robotics GIS).
PRoGIS is designed to give access to rover image archives in geographical context, using projected image view cones, obtained from existing meta-data and updated according to processing results, as a means to interact with and explore the archive. However, PRoGIS is more than a source data explorer. It is linked to the PRoVIP (Planetary Robotics Vision Image Processing) system, which includes photogrammetric processing tools to extract terrain models, compose panoramas, and explore and exploit multi-view stereo (where features on the surface have been imaged from different rover stops). We have started with the Opportunity MER rover as our test mission, but the system is being designed to be multi-mission, taking advantage in particular of UCL MSSL's PDS mirror, and we intend to at least deal with both MER rovers and MSL. Through the remainder of PRoViDE, until the end of 2015, the further intent is to handle lunar and additional Martian rover and descent-camera data. The presentation discusses the challenges of integrating rover- and orbital-derived data into a single geographical framework, especially reconstructing view cones; our human-computer interaction intentions in creating an interface to the rover data that is accessible to planetary scientists; how we handle multi-mission data in the database; and a demonstration of the resulting system and its processing capabilities. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE.
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
Proceedings of the 1986 IEEE international conference on systems, man and cybernetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-01-01
This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.
A deblocking algorithm based on color psychology for display quality enhancement
NASA Astrophysics Data System (ADS)
Yeh, Chia-Hung; Tseng, Wen-Yu; Huang, Kai-Lin
2012-12-01
This article proposes a post-processing deblocking filter to reduce blocking effects. The proposed algorithm detects blocking effects by fusing the results of Sobel edge detector and wavelet-based edge detector. The filtering stage provides four filter modes to eliminate blocking effects at different color regions according to human color vision and color psychology analysis. Experimental results show that the proposed algorithm has better subjective and objective qualities for H.264/AVC reconstructed videos when compared to several existing methods.
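The Sobel branch of the proposed edge-fusion detection is a standard gradient operator. A minimal gradient-magnitude pass (the wavelet branch is omitted) could be sketched as follows; the test image is invented:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with 3x3 Sobel
    kernels; border pixels are left at zero."""
    gx_k = ((-1, 0, 1), (-2, 0, 2), (-1, 0, 1))
    gy_k = ((-1, -2, -1), (0, 0, 0), (1, 2, 1))
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(gx_k[u][v] * img[i-1+u][j-1+v]
                     for u in range(3) for v in range(3))
            gy = sum(gy_k[u][v] * img[i-1+u][j-1+v]
                     for u in range(3) for v in range(3))
            out[i][j] = abs(gx) + abs(gy)
    return out

# A hard vertical edge between columns 1 and 2 produces a strong
# response on both sides of the boundary and zero in flat regions.
img = [[0, 0, 100, 100, 100]] * 5
mag = sobel_magnitude(img)
print(mag[2][1], mag[2][2], mag[2][3])  # -> 400 400 0
```

In the deblocking setting, responses like these would be cross-checked against a wavelet-based detector so that only edges aligned with block boundaries are treated as blocking artifacts.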
Natural Tasking of Robots Based on Human Interaction Cues
2005-06-01
Visually induced self-motion sensation adapts rapidly to left-right reversal of vision
NASA Technical Reports Server (NTRS)
Oman, C. M.; Bock, O. L.
1981-01-01
Three experiments were conducted using 15 adult volunteers with no overt oculomotor or vestibular disorders. In all experiments, left-right vision reversal was achieved using prism goggles, which permitted a binocular field of vision subtending approximately 45 deg horizontally and 28 deg vertically. In all experiments, circularvection (CV) was tested before and immediately after a period of exposure to reversed vision. After one to three hours of active movement while wearing vision-reversing goggles, 10 of 15 (stationary) human subjects viewing a moving stripe display experienced a self-rotation illusion in the same direction as seen stripe motion, rather than in the opposite (normal) direction, demonstrating that the central neural pathways that process visual self-rotation cues can undergo rapid adaptive modification.
Pilot response to peripheral vision cues during instrument flying tasks.
DOT National Transportation Integrated Search
1968-02-01
In an attempt to more closely associate the visual aspects of instrument flying with that of contact flight, a study was made of human response to peripheral vision cues relating to aircraft roll attitude. Pilots, ranging from 52 to 12,000 flying hou...
A vision and strategy for the virtual physiological human in 2010 and beyond.
Hunter, Peter; Coveney, Peter V; de Bono, Bernard; Diaz, Vanessa; Fenner, John; Frangi, Alejandro F; Harris, Peter; Hose, Rod; Kohl, Peter; Lawford, Pat; McCormack, Keith; Mendes, Miriam; Omholt, Stig; Quarteroni, Alfio; Skår, John; Tegner, Jesper; Randall Thomas, S; Tollis, Ioannis; Tsamardinos, Ioannis; van Beek, Johannes H G M; Viceconti, Marco
2010-06-13
European funding under framework 7 (FP7) for the virtual physiological human (VPH) project has been in place now for nearly 2 years. The VPH network of excellence (NoE) is helping in the development of common standards, open-source software, freely accessible data and model repositories, and various training and dissemination activities for the project. It is also helping to coordinate the many clinically targeted projects that have been funded under the FP7 calls. An initial vision for the VPH was defined by framework 6 strategy for a European physiome (STEP) project in 2006. It is now time to assess the accomplishments of the last 2 years and update the STEP vision for the VPH. We consider the biomedical science, healthcare and information and communications technology challenges facing the project and we propose the VPH Institute as a means of sustaining the vision of VPH beyond the time frame of the NoE.
Operator-coached machine vision for space telerobotics
NASA Technical Reports Server (NTRS)
Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.
1991-01-01
A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system that demonstrates the feasibility of highly interactive, operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
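The two measurement steps this abstract names, Sobel edge detection and a least-squares fit of operator-designated edges to the detected edge pixels, can be sketched in miniature. This is not the NTRS system's code; the function names and the SVD-based total-least-squares fit are illustrative assumptions.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def fit_line_to_edges(points):
    """Total-least-squares line fit (via SVD) to edge pixel coordinates;
    returns the centroid and the unit direction of the fitted edge."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]
```

In the full system the fitted 3-D edges would then be assembled into the wire-frame model; here the fit is shown in 2-D for brevity.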
A model of the hierarchy of behaviour, cognition, and consciousness.
Toates, Frederick
2006-03-01
Processes comparable in important respects to those underlying human conscious and non-conscious processing can be identified in a range of species and it is argued that these reflect evolutionary precursors of the human processes. A distinction is drawn between two types of processing: (1) stimulus-based and (2) higher-order. For 'higher-order,' in humans the operations of processing are themselves associated with conscious awareness. Conscious awareness sets the context for stimulus-based processing and its end-point is accessible to conscious awareness. However, the mechanics of the translation between stimulus and response proceeds without conscious control. The paper argues that higher-order processing is an evolutionary addition to stimulus-based processing. The model's value is shown for gaining insight into a range of phenomena and their link with consciousness. These include brain damage, learning, memory, development, vision, emotion, motor control, reasoning, the voluntary versus involuntary debate, and mental disorder.
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
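The "Bezier approximation networks" mentioned above rest on ordinary Bezier curve evaluation: a parameter (here, a normalized laser-line position) is mapped through a set of fitted control points. A minimal sketch of that building block via De Casteljau's algorithm; the control values are hypothetical, not the paper's fitted parameters.

```python
import numpy as np

def bezier(control_points, t):
    """Evaluate a Bezier curve at t in [0, 1] with De Casteljau's
    algorithm (numerically stable repeated linear interpolation)."""
    pts = np.asarray(control_points, float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# hypothetical mapping: normalized laser-line position u -> depth value
depth_controls = [0.0, 12.5, 30.0, 55.0]
```

In the self-calibration described above, a genetic algorithm would search for control points like `depth_controls` so that the curve reproduces the observed laser-line positions.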
NASA Technical Reports Server (NTRS)
Goldin, Daniel S.
1994-01-01
Under Dan Goldin's leadership, NASA is reinventing itself. In the process, the agency is also searching for a vision to define its role, both as a US Government agency and as a leading force in humanity's exploration of space. An adaptation of Goldin's speech to the American Geophysical Union on 26 May 1994, in which he proposes one possible unifying vision, is presented.
Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.
Sanchez, Yerly; Pinzon, David; Zheng, Bin
2017-10-01
To examine the reaction time of human subjects processing information presented in the visual channel under both direct vision and a virtual rehabilitation environment while walking. Visual stimuli consisted of eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment (CAREN)) and a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment, and their reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: eye tracking is a reliable tool that can be employed in rehabilitation virtual environments, and reaction time differs between direct vision and virtual environments.
Physiological and ecological implications of ocean deoxygenation for vision in marine organisms
NASA Astrophysics Data System (ADS)
McCormick, Lillian R.; Levin, Lisa A.
2017-08-01
Climate change has induced ocean deoxygenation and exacerbated eutrophication-driven hypoxia in recent decades, affecting the physiology, behaviour and ecology of marine organisms. The high oxygen demand of visual tissues and the known inhibitory effects of hypoxia on human vision raise the questions if and how ocean deoxygenation alters vision in marine organisms. This is particularly important given the rapid loss of oxygen and strong vertical gradients in oxygen concentration in many areas of the ocean. This review evaluates the potential effects of low oxygen (hypoxia) on visual function in marine animals and their implications for marine biota under current and future ocean deoxygenation based on evidence from terrestrial and a few marine organisms. Evolutionary history shows radiation of eye designs during a period of increasing ocean oxygenation. Physiological effects of hypoxia on photoreceptor function and light sensitivity, in combination with morphological changes that may occur throughout ontogeny, have the potential to alter visual behaviour and, subsequently, the ecology of marine organisms, particularly for fish, cephalopods and arthropods with `fast' vision. Visual responses to hypoxia, including greater light requirements, offer an alternative hypothesis for observed habitat compression and shoaling vertical distributions in visual marine species subject to ocean deoxygenation, which merits further investigation. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.
Effects of body lean and visual information on the equilibrium maintenance during stance.
Duarte, Marcos; Zatsiorsky, Vladimir M
2002-09-01
Maintenance of equilibrium was tested in conditions when humans assume different leaning postures during upright standing. Subjects ( n=11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of visual target, no vision after target presentation, and with simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean of the COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed in relation to when vision was present. Without vision, drifts in the COP data were observed which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that the visual information is used by the postural control system at both short and long time scales.
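The three outcome measures named above, mean COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectrum, can be computed from a COP time series along the following lines. The 95% confidence ellipse and the FFT-based median frequency are common implementations, assumed here rather than taken from the study itself.

```python
import numpy as np

def sway_metrics(cop, fs):
    """cop: (N, 2) array of COP x/y positions; fs: sampling rate in Hz.
    Returns (mean position, 95% sway ellipse area, median frequency)."""
    cop = np.asarray(cop, float)
    mean_pos = cop.mean(axis=0)
    centered = cop - mean_pos
    # 95% confidence ellipse area from the covariance of the x/y sway
    cov = np.cov(centered.T)
    eigvals = np.linalg.eigvalsh(cov)
    area95 = np.pi * 5.991 * np.sqrt(eigvals[0] * eigvals[1])  # chi2(2, .95)
    # median frequency of the resultant sway power spectrum
    resultant = np.hypot(centered[:, 0], centered[:, 1])
    spectrum = np.abs(np.fft.rfft(resultant - resultant.mean())) ** 2
    freqs = np.fft.rfftfreq(len(resultant), d=1.0 / fs)
    cum = np.cumsum(spectrum)
    median_freq = freqs[np.searchsorted(cum, cum[-1] / 2.0)]
    return mean_pos, area95, median_freq
```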
The Effects of Target Orientation on the Dynamic Contrast Sensitivity Function
1994-01-01
Wilderness Vision Quest: A Journey of Transformation.
ERIC Educational Resources Information Center
Brown, Michael H.
The Wilderness Vision Quest is an outdoor retreat which helps participants touch, explore, and develop important latent human resources such as imagination, intuition, creativity, inspiration, and insight. Through the careful and focused use of techniques such as deep relaxation, reflective writing, visualization, guided imagery, symbolic drawing,…
Concept and design philosophy of a person-accompanying robot
NASA Astrophysics Data System (ADS)
Mizoguchi, Hiroshi; Shigehara, Takaomi; Goto, Yoshiyasu; Hidai, Ken-ichi; Mishima, Taketoshi
1999-01-01
This paper proposes a person-accompanying robot as a novel human-collaborative robot: a legged mobile robot that can follow a person using its vision. Toward a future aging society, human collaboration and human support are required as novel applications of robots. Such human-collaborative robots share the same space with humans, but conventional robots are isolated from humans and lack the capability to observe them. To collaborate with and support humans properly, a human-collaborative robot must be able to observe and recognize humans; study of this human-observing function is crucial for realizing novel robots such as service and pet robots. The authors are currently implementing a prototype of the proposed accompanying robot. As a base for the human-observing function of the prototype robot, we have realized face tracking using skin-color extraction and correlation-based tracking. We have also developed a method for the robot to pick up human voices clearly and remotely using microphone arrays. The results of these preliminary studies suggest the feasibility of the proposed robot.
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with a jitter of 32 μs.
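The core of such a synchronizer, a phase detector feeding a loop filter that steers the local frame clock toward the modulated illumination, can be sketched in a few lines. The PI gains and the 10% initial frequency offset below are illustrative assumptions, not the paper's values.

```python
import math

def pll_lock(ref_rate, n_steps, kp=0.3, ki=0.05):
    """Discrete-time PLL: lock a local clock to a reference phase rate.

    ref_rate is the reference phase increment per step (rad); in the
    sensor this would come from the intensity-modulated illumination.
    Returns (final local rate, final phase error in rad).
    """
    ref_phase = local_phase = 0.0
    base_rate = ref_rate * 0.9        # local clock starts 10% slow
    integrator = 0.0
    err = 0.0
    for _ in range(n_steps):
        ref_phase += ref_rate
        local_rate = base_rate + kp * err + integrator
        local_phase += local_rate
        # phase detector: wrapped phase difference between the clocks
        err = math.atan2(math.sin(ref_phase - local_phase),
                         math.cos(ref_phase - local_phase))
        integrator += ki * err        # PI loop filter (type-II PLL)
    return local_rate, err
```

Once locked, the integrator holds the frequency offset and the residual phase error decays toward zero, which is exactly the behaviour needed to keep frame timing aligned across sensors.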
Software Simulates Sight: Flat Panel Mura Detection
NASA Technical Reports Server (NTRS)
2008-01-01
In the increasingly sophisticated world of high-definition flat screen monitors and television screens, image clarity and the elimination of distortion are paramount concerns. As the devices that reproduce images become more and more sophisticated, so do the technologies that verify their accuracy. By simulating the manner in which a human eye perceives and interprets a visual stimulus, NASA scientists have found ways to automatically and accurately test new monitors and displays. The Spatial Standard Observer (SSO) software metric, developed by Dr. Andrew B. Watson at Ames Research Center, measures visibility and defects in screens, displays, and interfaces. In the design of such a software tool, a central challenge is determining which aspects of visual function to include: while accuracy and generality are important, relative simplicity of the software module is also a key virtue. Based on data collected in ModelFest, a large cooperative multi-lab project hosted by the Optical Society of America, the SSO simulates a simplified model of human spatial vision, operating on a pair of images that are viewed at a specific viewing distance with pixels having a known relation to luminance. The SSO measures the visibility of foveal spatial patterns, or the discriminability of two patterns, by incorporating only a few essential components of vision. These components include local contrast transformation, a contrast sensitivity function, local masking, and local pooling. By this construction, the SSO provides output in units of "just noticeable differences" (JND), a unit of measure based on the assumed smallest difference of sensory input detectable by a human being. Herein lies the truly remarkable ability of the SSO: while conventional methods merely manipulate images, the SSO models human perception. This set of equations defines a mathematical way of working with an image that accurately reflects the way in which the human eye and mind behold a stimulus.
The SSO is intended for a wide variety of applications, such as evaluating vision from unmanned aerial vehicles, measuring visibility of damage to aircraft and to the space shuttles, predicting outcomes of corrective laser eye surgery, inspecting displays during the manufacturing process, estimating the quality of compressed digital video, evaluating legibility of text, and predicting discriminability of icons or symbols in a graphical user interface.
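The components listed above suggest the overall shape of such a visible-difference metric. The sketch below keeps only two of them, a toy band-pass contrast sensitivity function and Minkowski pooling, and omits contrast transformation and masking; its constants are illustrative, not the fitted ModelFest values used by the real SSO.

```python
import numpy as np

def csf(f, peak=4.0):
    """Toy band-pass contrast sensitivity function peaking near `peak`
    cycles/degree (the SSO uses a CSF fitted to ModelFest data)."""
    return (f / peak) * np.exp(1.0 - f / peak)

def jnd(img_a, img_b, deg_per_image=1.0, beta=3.5):
    """Pooled visible-difference score: CSF-weight the difference image
    in the Fourier domain, then Minkowski-pool over space."""
    diff = np.asarray(img_a, float) - np.asarray(img_b, float)
    h, w = diff.shape
    fy = np.fft.fftfreq(h) * h / deg_per_image   # cycles per degree
    fx = np.fft.fftfreq(w) * w / deg_per_image
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    filtered = np.fft.ifft2(np.fft.fft2(diff) * csf(f)).real
    return (np.abs(filtered) ** beta).mean() ** (1.0 / beta)
```

Identical images score zero; because this stripped-down version is linear in the difference, doubling a defect's amplitude doubles its score, whereas the full SSO's masking stage makes the response compressive.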
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2016-06-01
Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
Arkenbout, Ewout A.; de Winter, Joost C. F.; Breedveld, Paul
2015-01-01
Vision based interfaces for human computer interaction have gained increasing attention over the past decade. This study presents a data fusion approach of the Nimble VR vision based system, using the Kinect camera, with the contact based 5DT Data Glove. Data fusion was achieved through a Kalman filter. The Nimble VR and filter output were compared using measurements performed on (1) a wooden hand model placed in various static postures and orientations; and (2) three differently sized human hands during active finger flexions. Precision and accuracy of joint angle estimates as a function of hand posture and orientation were determined. Moreover, in light of possible self-occlusions of the fingers in the Kinect camera images, data completeness was assessed. Results showed that the integration of the Data Glove through the Kalman filter provided for the proximal interphalangeal (PIP) joints of the fingers a substantial improvement of 79% in precision, from 2.2 deg to 0.9 deg. Moreover, a moderate improvement of 31% in accuracy (being the mean angular deviation from the true joint angle) was established, from 24 deg to 17 deg. The metacarpophalangeal (MCP) joint was relatively unaffected by the Kalman filter. Moreover, the Data Glove increased data completeness, thus providing a substantial advantage over the sole use of the Nimble VR system. PMID:26694395
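A scalar version of the Kalman-filter fusion described above, one joint angle, a random-walk state model, and sequential updates from the camera and the glove, can be sketched as follows. The noise variances (camera noisier than glove, mirroring the reported precision gap) are illustrative assumptions.

```python
def kalman_fuse(camera_meas, glove_meas, r_cam=4.0, r_glove=0.8, q=0.01):
    """Fuse two measurement streams of one joint angle (degrees) with a
    scalar Kalman filter; returns the fused estimate at each step."""
    x = camera_meas[0]   # initial state: first camera reading
    p = 1.0              # initial state variance
    fused = []
    for z_cam, z_glove in zip(camera_meas, glove_meas):
        p += q                       # predict: random-walk process noise
        for z, r in ((z_cam, r_cam), (z_glove, r_glove)):
            k = p / (p + r)          # Kalman gain for this sensor
            x += k * (z - x)         # correct state with the measurement
            p *= (1.0 - k)           # shrink the state variance
        fused.append(x)
    return fused
```

Because the glove's measurement variance is smaller, its updates dominate the fused estimate, which is the mechanism behind the precision improvement reported for the PIP joints.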
A Call for Nominations of Quantitative High-Throughput ...
The National Research Council of the United States National Academies of Science has recently released a document outlining a long-range vision and strategy for transforming toxicity testing from largely whole-animal-based testing to one based on in vitro assays. “Toxicity Testing in the 21st Century: A Vision and a Strategy” advises a focus on relevant human toxicity pathway assays. Toxicity pathways are defined in the document as “cellular response pathways that, when sufficiently perturbed, are expected to result in adverse health effects”. Results of such pathway screens would serve as a filter to drive selection of more specific, targeted testing that will complement and validate the pathway assays. In response to this report, the US EPA has partnered with two NIH organizations, the National Toxicology Program and the NIH Chemical Genomics Center (NCGC), in a program named Tox21. A major goal of this collaboration is to screen chemical libraries consisting of known toxicants, chemicals of environmental and occupational exposure concern, and human pharmaceuticals in cell-based pathway assays. Currently, approximately 3000 compounds (increasing to 9000 by the end of 2009) are being validated and screened in quantitative high-throughput (qHTS) format at the NCGC, producing extensive concentration-response data for a diverse set of potential toxicity pathways. The Tox21 collaboration is extremely interested in accessing additional toxicity pathway assays.
A computer vision based candidate for functional balance test.
Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath
2015-08-01
Balance in humans is a motor skill based on complex multimodal sensing, processing and control. Ability to maintain balance in activities of daily living (ADL) is compromised due to aging, diseases, injuries and environmental factors. Center for Disease Control and Prevention (CDC) estimate of the costs of falls among older adults was $34 billion in 2013 and is expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments followed by subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for functional balance test. The test will take less than a minute to administer and expected to be objective, repeatable and highly discriminative in quantifying ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called BTrackS Balance Assessment Board. Our results show high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigations to assess validity in clinical settings, including acute care, long term care and assisted living care facilities. Our long term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.
Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Specifically, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles
Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro
2016-01-01
The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional–integral–derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the RMS errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the root mean square (RMS) errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle. PMID:27110793
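The distance-keeping loop described above, a PID controller driving the follower's speed from the error in the leader-follower gap, can be sketched as a toy closed-loop simulation. The gains, set-point, and the assumption that the follower knows the leader's speed are illustrative, not the paper's tuned values.

```python
class PID:
    """Textbook PID controller (proportional-integral-derivative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(target_gap=5.0, leader_speed=0.3, steps=1000, dt=0.1):
    """Toy closed loop: follower speed = leader speed (assumed known)
    plus a PID correction on the gap error; returns the final gap (m)."""
    pid = PID(kp=1.2, ki=0.1, kd=0.3, dt=dt)
    leader_pos, follower_pos = 20.0, 0.0
    for _ in range(steps):
        gap = leader_pos - follower_pos
        follower_speed = leader_speed + pid.step(gap - target_gap)
        leader_pos += leader_speed * dt
        follower_pos += follower_speed * dt
    return leader_pos - follower_pos
```

In the real system the gap would come from the monocular marker-tracking pipeline rather than ground-truth positions, so measurement noise (smoothed here by the least-squares step the authors describe) feeds directly into this loop.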
An improved silhouette for human pose estimation
NASA Astrophysics Data System (ADS)
Hawes, Anthony H.; Iftekharuddin, Khan M.
2017-08-01
We propose a novel method for analyzing images that exploits the natural lines of a human pose to find areas where self-occlusion could be present. Errors caused by self-occlusion lead several modern human pose estimation methods to misidentify body parts, which reduces the performance of most action recognition algorithms. Our method is motivated by the observation that, in several cases, occlusion can be reasoned about using only the boundary lines of limbs. An intelligent edge detection algorithm based on this principle could be used to augment the silhouette with information useful for pose estimation algorithms and push forward progress on occlusion handling for human action recognition. The algorithm described is applicable to computer vision scenarios involving 2D images and (appropriately flattened) 3D images.
Maclean, Carla L; Brimacombe, C A Elizabeth; Lindsay, D Stephen
2013-12-01
The current study addressed tunnel vision in industrial incident investigation by experimentally testing how a priori information and a human bias (generated via the fundamental attribution error or correspondence bias) affected participants' investigative behavior as well as the effectiveness of a debiasing intervention. Undergraduates and professional investigators engaged in a simulated industrial investigation exercise. We found that participants' judgments were biased by knowledge about the safety history of either a worker or piece of equipment and that a human bias was evident in participants' decision making. However, bias was successfully reduced with "tunnel vision education." Professional investigators demonstrated a greater sophistication in their investigative decision making compared to undergraduates. The similarities and differences between these two populations are discussed. (c) 2013 APA, all rights reserved
The contribution of stereo vision to the control of braking.
Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu
2008-03-01
In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10 m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking stereo vision. A lack of stereo vision was associated with more prudent braking behaviour, in which the driver allowed a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of the remaining distance due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human team mates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1990-01-01
The visual perception of form information is considered to be based on the functioning of simple and complex neurons in the primate striate cortex. However, a review of the physiological data on these brain cells cannot be harmonized with either the perceptual spatial frequency performance of primates or the performance which is necessary for form perception in humans. This discrepancy together with recent interest in cortical-like and perceptual-like processing in image coding and machine vision prompted a series of image processing experiments intended to provide some definition of the selection of image operators. The experiments were aimed at determining operators which could be used to detect edges in a computational manner consistent with the visual perception of structure in images. Fundamental issues were the selection of size (peak spatial frequency) and circular versus oriented operators (or some combination). In a previous study, circular difference-of-Gaussian (DOG) operators, with peak spatial frequency responses at about 11 and 33 cyc/deg were found to capture the primary structural information in images. Here larger scale circular DOG operators were explored and led to severe loss of image structure and introduced spatial dislocations (due to blur) in structure which is not consistent with visual perception. Orientation sensitive operators (akin to one class of simple cortical neurons) introduced ambiguities of edge extent regardless of the scale of the operator. For machine vision schemes which are functionally similar to natural vision form perception, two circularly symmetric very high spatial frequency channels appear to be necessary and sufficient for a wide range of natural images. Such a machine vision scheme is most similar to the physiological performance of the primate lateral geniculate nucleus rather than the striate cortex.
Retinex at 50: color theory and spatial algorithms, a review
NASA Astrophysics Data System (ADS)
McCann, John J.
2017-05-01
Retinex Imaging shares two distinct elements: first, a model of human color vision; second, a spatial-imaging algorithm for making better reproductions. Edwin Land's 1964 Retinex Color Theory began as a model of human color vision of real complex scenes. He designed many experiments, such as Color Mondrians, to understand why retinal cone quanta catch fails to predict color constancy. Land's Retinex model used three spatial channels (L, M, S) that calculated three independent sets of monochromatic lightnesses. Land and McCann's lightness model used spatial comparisons followed by spatial integration across the scene. The parameters of their model were derived from extensive observer data. This work was the beginning of the second Retinex element, namely, using models of spatial vision to guide image reproduction algorithms. Today, there are many different Retinex algorithms. This special section, "Retinex at 50," describes a wide variety of them, along with their different goals, and the ground truths used to measure their success. This paper reviews (and provides links to) the original Retinex experiments and image-processing implementations. Observer matches (measuring appearances) have extended our understanding of how human spatial vision works. This paper describes a collection of very challenging datasets, accumulated by Land and McCann, for testing algorithms that predict appearance.
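The spatial-comparison-and-integration idea in the Land-McCann lightness model can be illustrated in one dimension: accumulate the log ratios across edges along a path, discard small ratios attributable to a slow illumination gradient, and anchor the maximum to white. This is a didactic sketch, not the published implementation; the reflectance and illumination values are hypothetical.

```python
import math

# 1-D "Mondrian": step-wise reflectances under a smooth illumination
# gradient (all values are hypothetical illustration data).
reflectances = [0.2, 0.2, 0.6, 0.6, 0.9, 0.9]
illumination = [1.00, 0.95, 0.90, 0.85, 0.80, 0.75]
signal = [r * i for r, i in zip(reflectances, illumination)]

# Spatial comparison: chain log edge ratios along the path, zeroing
# small steps that look like gradual illumination change.
THRESHOLD = 0.1
log_lightness = [0.0]
for a, b in zip(signal, signal[1:]):
    step = math.log(b / a)
    if abs(step) < THRESHOLD:
        step = 0.0
    log_lightness.append(log_lightness[-1] + step)

# Spatial integration / normalization: anchor the maximum to white.
peak = max(log_lightness)
lightness = [round(math.exp(v - peak), 2) for v in log_lightness]
print(lightness)  # → [0.25, 0.25, 0.71, 0.71, 1.0, 1.0]
```

Despite the illumination gradient, the recovered lightnesses cluster by reflectance patch, which is the constancy behaviour the Mondrian experiments probed.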
Emotion-Induced Trade-Offs in Spatiotemporal Vision
ERIC Educational Resources Information Center
Bocanegra, Bruno R.; Zeelenberg, Rene
2011-01-01
It is generally assumed that emotion facilitates human vision in order to promote adaptive responses to a potential threat in the environment. Surprisingly, we recently found that emotion in some cases impairs the perception of elementary visual features (Bocanegra & Zeelenberg, 2009b). Here, we demonstrate that emotion improves fast temporal…
Classification Objects, Ideal Observers & Generative Models
ERIC Educational Resources Information Center
Olman, Cheryl; Kersten, Daniel
2004-01-01
A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual…
Institutional Vision and Academic Advising
ERIC Educational Resources Information Center
Abelman, Robert; Molina, Anthony D.
2006-01-01
Quality academic advising in higher education is the product of a multitude of elements not the least of which is institutional vision. By recognizing and embracing an institution's concept of its capabilities and the kinds of educated human beings it is attempting to cultivate, advisors gain an invaluable apparatus to guide the provision of…
75 FR 10808 - CIBA Vision Corp.; Withdrawal of Color Additive Petitions
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-09
...) CIBA Vision Corp.; Withdrawal of Color Additive Petitions AGENCY: Food and Drug Administration, HHS... Director, Office of Food Additive Safety, Center for Food Safety and Applied Nutrition. [FR Doc. 2010-4972... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket Nos. FDA-2005-C-0245...
Teaching for Humanity in a Neoliberal World: Visions of Education in Serbia
ERIC Educational Resources Information Center
Dull, Laura J.
2012-01-01
In Serbia, teachers and policy makers express different and sometimes competing visions of education. Teachers express their desire to "awaken" students by using progressive pedagogies, while European Union and World Bank reformers appropriate progressive education in the service of neoliberal goals. The research findings presented here…
NASA Astrophysics Data System (ADS)
Juday, Richard D.; Loshin, David S.
1989-06-01
We are investigating image coordinate transformations for possible use in a low vision aid for human patients. These patients typically have field defects with localized retinal dysfunction, predominantly central (age-related maculopathy) or peripheral (retinitis pigmentosa). Previously we presented simple eccentricity-only remappings that do not maintain conformality. In this report we present our initial attempts at developing images that hold quasi-conformality after remapping. Although the quasi-conformal images may have less local distortion, they contain discontinuities that may contraindicate this type of transformation for the low vision application.
Liquid lens: advances in adaptive optics
NASA Astrophysics Data System (ADS)
Casey, Shawn Patrick
2010-12-01
'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.
Deep Learning for Computer Vision: A Brief Review
Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios
2018-01-01
Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619
Machine vision system: a tool for quality inspection of food and agricultural products.
Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A
2012-04-01
Quality inspection of food and agricultural produce is difficult and labor intensive. At the same time, with increased expectations for food products of high quality and safety standards, the need for accurate, fast and objective determination of these characteristics in food products continues to grow. In India, however, these operations are generally manual, which is costly as well as unreliable, because human judgment in identifying quality factors such as appearance, flavor, nutrients and texture is inconsistent, subjective and slow. Machine vision provides an alternative: an automated, non-destructive and cost-effective technique to accomplish these requirements. This inspection approach, based on image analysis and processing, has found a variety of different applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, grain quality and characteristic examination, and quality evaluation of other food products such as bakery products, pizza, cheese, and noodles. The objective of this paper is to provide an in-depth introduction to machine vision systems, their components, and recent work reported on food and agricultural produce.
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-01-01
Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
Assessment of OLED displays for vision research.
Cooper, Emily A; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E; Norcia, Anthony M
2013-10-23
Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function ("gamma correction"). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications.
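The "gamma correction" check mentioned in this abstract, fitting a power function to measured luminance versus drive level, can be sketched as a linear regression in log-log space. The measurement values below are synthetic illustration data for an ideal display, not figures from the paper.

```python
import math

# Synthetic "measurements": luminance at several normalized drive levels
# for an ideal display with gamma 2.2 and a 250 cd/m^2 peak (hypothetical).
levels = [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]
luminance = [250.0 * v ** 2.2 for v in levels]

# Fit L = peak * v**gamma via linear regression in log-log space:
# log L = log(peak) + gamma * log(v).
xs = [math.log(v) for v in levels]
ys = [math.log(L) for L in luminance]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
gamma = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
peak = math.exp(my - gamma * mx)

print(round(gamma, 3), round(peak, 1))  # → 2.2 250.0
```

For real display characterization the luminances would come from photometer readings at each drive level; a fitted exponent and peak that reproduce the measurements closely indicate the well-behaved power-law nonlinearity the authors report.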
Rasmussen, Rune Skovgaard; Schaarup, Anne Marie Heltoft; Overgaard, Karsten
2018-02-27
Serious and often lasting vision impairments affect 30% to 35% of people following stroke. Vision may be considered the most important sense in humans, and even small permanent injuries can drastically reduce quality of life. Restoration of visual field impairments occurs only to a small extent during the first month after brain damage, and therefore the time window for spontaneous improvements is limited. One month after brain injury causing visual impairment, patients usually will experience chronically impaired vision, and the need for compensatory vision rehabilitation is substantial. The purpose of this study is to investigate whether rehabilitation with Neuro Vision Technology will result in a significant and lasting improvement in functional capacity in persons with chronic visual impairments after brain injury. Improving eyesight is expected to increase both physical and mental functioning, thus improving quality of life. This is a prospective open label trial in which participants with chronic visual field impairments are examined before and after the intervention. Participants typically suffer from stroke or traumatic brain injury and will be recruited from hospitals and The Institute for the Blind and Partially Sighted. Treatment is based on Neuro Vision Technology, a supervised training course in which participants are trained in compensatory techniques using specially designed equipment. Through the Neuro Vision Technology procedure, the vision problems of each individual are carefully investigated, and personal data is used to organize individual training sessions. Cognitive face-to-face assessments and self-assessed questionnaires about both life and vision quality are also applied before and after the training. Funding was provided in June 2017. Results are expected to be available in 2020. Sample size is calculated at 23 participants.
Due to age, difficulty in transport, and the time-consuming intervention, up to 25% dropouts are expected; thus, we aim to include at least 29 participants. This investigation will evaluate the effects of Neuro Vision Technology therapy on compensatory vision rehabilitation. Additionally, quality of life and cognitive improvements associated with increased quality of life will be explored. ClinicalTrials.gov NCT03160131; https://clinicaltrials.gov/ct2/show/NCT03160131 (Archived by WebCite at http://www.webcitation.org/6x3f5HnCv). ©Rune Skovgaard Rasmussen, Anne Marie Heltoft Schaarup, Karsten Overgaard. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 27.02.2018.
Evidence against global attention filters selective for absolute bar-orientation in human vision.
Inverso, Matthew; Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George
2016-01-01
The finding that an item of type A pops out from an array of distractors of type B typically is taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering bars of the opposite orientation in a centroid task.
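The centroid task in this abstract can be made concrete: an ideal top-down attention filter amounts to weighting items that match the attended feature 1 and distractors 0 before averaging positions. The item coordinates and feature labels below are hypothetical illustration data, not stimuli from the study.

```python
# Ideal-observer centroid with a perfect attention filter: keep only the
# items matching the attended feature, then average their positions.
# (x, y, feature) triples are hypothetical; "H"/"V" stand for
# horizontal/vertical bar orientation.
items = [
    (1.0, 2.0, "H"),   # target
    (3.0, 4.0, "V"),   # distractor
    (5.0, 0.0, "H"),   # target
    (2.0, 2.0, "V"),   # distractor
]

def filtered_centroid(items, attended):
    """Centroid of the items passing an ideal feature filter."""
    pts = [(x, y) for x, y, feat in items if feat == attended]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

print(filtered_centroid(items, "H"))  # → (3.0, 1.0)
```

The paper's finding is that human performance approaches this ideal for brightness- or color-defined targets but degrades sharply with display size for absolute bar-orientation, implying no such global orientation filter is available to the centroid computation.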
Visual Motion Perception and Visual Attentive Processes.
1988-04-01
88-0551. Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364. Cited works include: Sperling, "HIPS: a Unix-based image processing system," Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347 (HIPS is the Human Information Processing Laboratory's Image Processing System); and van Santen, Jan P. H., and George Sperling, "Elaborated Reichardt detectors," Journal of the Optical Society of America, 1985.
Operational Based Vision Assessment Cone Contrast Test: Description and Operation
2016-06-01
The test is designed to detect abnormalities and characterize the contrast sensitivity of the color mechanisms of the human visual system. If the score is less than 1, the individual is determined to have an abnormal L-M mechanism; the L-M sensitivity of mildly abnormal individuals (anomalous trichromats) ... Observers answer using response pads. This hardware is integrated with custom software that generates the stimuli, collects responses, and analyzes the results as outlined in ...
ERIC Educational Resources Information Center
Ross, Ann; Olsen, Karen
This book proposes a new view of middle school curriculum and the specifics of how to create such a curriculum. It introduces the notion that the approach to developing a curriculum appropriate for adolescents should be based on similarities of human learning at different ages, as well as the uniqueness of the adolescent. Chapter 1 highlights six…
Image Understanding Architecture
1991-09-01
The goal is an architecture to support real-time, knowledge-based image understanding, and to develop the software support environment that will be needed to utilize it. Keywords: Image Understanding Architecture, knowledge-based vision, real-time AI computer vision, software simulator, parallel processor. In addition to sensory and knowledge-based processing, it is useful to introduce a level of symbolic processing; thus, vision researchers ...
Dimensional coordinate measurements: application in characterizing cervical spine motion
NASA Astrophysics Data System (ADS)
Zheng, Weilong; Li, Linan; Wang, Shibin; Wang, Zhiyong; Shi, Nianke; Xue, Yuan
2014-06-01
The cervical spine is a complicated part of the human body, and its movement takes diverse forms. The motion of each vertebral segment is three-dimensional, reflected in changes of the angle between joints and in displacements along different directions. Under normal conditions the cervical spine can flex, extend, laterally flex, and rotate. Because there is no relative motion between measuring marks fixed on one segment of a cervical vertebra, a vertebra carrying three marked points can be treated as a rigid body, and a body's motion in space can be decomposed into a translation and a rotation about a base point. This study concerns the calculation of the three-dimensional coordinates of marked points attached to the cervical spine by an optical method; these measurements then allow the calculation of motion parameters for every spine segment. We chose a three-dimensional measurement method based on binocular stereo vision. The object with marked points is placed in front of two CCD cameras mounted in parallel; each shot yields a pair of parallax images from the two cameras, from which three-dimensional coordinates can be recovered according to the principles of binocular vision. This paper describes the layout of the experimental system and the mathematical model used to obtain the coordinates.
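For the parallel (rectified) camera arrangement the abstract describes, the three-dimensional coordinates of a marked point follow from the standard disparity relations Z = fB/d, X = xZ/f, Y = yZ/f. This is a generic textbook sketch, not the authors' mathematical model; the focal length, baseline, and pixel coordinates below are hypothetical.

```python
# Depth from disparity for a rectified (parallel-camera) stereo pair.
def triangulate(xl, xr, y, f, baseline):
    """3D point from matched pixel coordinates in a rectified stereo pair.

    xl, xr: x-coordinates of the marker in the left/right images
            (pixels, relative to each principal point)
    y:      common row coordinate (pixels)
    f:      focal length in pixels
    baseline: camera separation in metres
    Returns (X, Y, Z) in the left-camera frame, metres.
    """
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front")
    Z = f * baseline / d             # depth
    X = xl * Z / f                   # lateral position
    Y = y * Z / f                    # vertical position
    return X, Y, Z

X, Y, Z = triangulate(xl=120.0, xr=100.0, y=40.0, f=800.0, baseline=0.25)
print(X, Y, Z)  # → 1.5 0.5 10.0
```

Applying this to the three marked points on a vertebral segment in two successive frames gives the rigid-body translation and rotation that the study decomposes cervical motion into.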
Expanding Dimensionality in Cinema Color: Impacting Observer Metamerism through Multiprimary Display
NASA Astrophysics Data System (ADS)
Long, David L.
Television and cinema display are both trending towards greater ranges and saturation of reproduced colors made possible by near-monochromatic RGB illumination technologies. Through current broadcast and digital cinema standards work, system designs employing laser light sources, narrow-band LED, quantum dots and others are being actively endorsed in promotion of Wide Color Gamut (WCG). Despite artistic benefits brought to creative content producers, spectrally selective excitations of naturally different human color response functions exacerbate variability of observer experience. An exaggerated variation in color-sensing is explicitly counter to the exhaustive controls and calibrations employed in modern motion picture pipelines. Further, singular standard observer summaries of human color vision, such as the CIE's 1931 and 1964 color matching functions used extensively in motion picture color management, are deficient in recognizing expected human vision variability. Many researchers have confirmed the magnitude of observer metamerism in color matching in both uniform colors and imagery, but few have shown explicit color management with an aim of minimized difference in observer perception variability. This research shows not only that observer metamerism influences can be quantitatively predicted and confirmed psychophysically, but also that intentionally engineered multiprimary displays employing more than three primaries can offer increased color gamut with drastically improved consistency of experience. To this end, a seven-channel prototype display has been constructed based on observer metamerism models and color difference indices derived from the latest color vision demographic research.
This display has been further proven in forced-choice paired comparison tests to deliver superior color matching to reference stimuli versus both contemporary standard RGB cinema projection and recently ratified standard laser projection across a large population of color-normal observers.
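The core of observer metamerism can be illustrated numerically: two spectra that integrate to identical tristimulus values under one observer's color matching functions (CMFs) can differ under another observer's. A toy sketch with invented five-sample CMFs (real work would use measured CMFs such as the CIE observers and the demographic data the study draws on):

```python
import numpy as np

# Toy color-matching functions sampled at five wavelengths.
# These numbers are illustrative only.
cmf_obs_a = np.array([[0.2, 0.8, 1.0, 0.4, 0.1],
                      [0.0, 0.3, 0.9, 0.8, 0.2],
                      [1.0, 0.6, 0.1, 0.0, 0.0]])
# A second observer whose sensitivities are slightly shifted.
cmf_obs_b = cmf_obs_a + 0.05 * np.array([[1, -1, 0, 1, -1],
                                         [0, 1, -1, 0, 1],
                                         [-1, 0, 1, -1, 0]])

def tristimulus(cmf, spd):
    """Integrate a spectral power distribution against each CMF row."""
    return cmf @ spd

broadband = np.ones(5)
# Any vector in the null space of observer A's CMF matrix can be added
# to a spectrum without changing what observer A sees.
_, _, vt = np.linalg.svd(cmf_obs_a)
metamer = broadband + 0.5 * vt[-1]

# Observer A sees identical tristimulus values for the two spectra,
# while observer B in general sees a difference: observer metamerism.
diff_a = tristimulus(cmf_obs_a, broadband) - tristimulus(cmf_obs_a, metamer)
diff_b = tristimulus(cmf_obs_b, broadband) - tristimulus(cmf_obs_b, metamer)
```

Narrow-band primaries enlarge the space of such null-space differences between observers, which is why the study optimizes primary spectra rather than only gamut size.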
A new generation of intelligent trainable tools for analyzing large scientific image databases
NASA Technical Reports Server (NTRS)
Fayyad, Usama M.; Smyth, Padhraic; Atkinson, David J.
1994-01-01
The focus of this paper is on the detection of natural, as opposed to human-made, objects. The distinction is important because, in the context of image analysis, natural objects tend to possess much greater variability in appearance than human-made objects. Hence, we shall focus primarily on the use of algorithms that 'learn by example' as the basis for image exploration. The 'learn by example' approach is potentially more generally applicable compared to model-based vision methods since domain scientists find it relatively easier to provide examples of what they are searching for versus describing a model.
A Graphical Operator Interface for a Telerobotic Inspection System
NASA Technical Reports Server (NTRS)
Kim, W. S.; Tso, K. S.; Hayati, S.
1993-01-01
The operator interface has recently emerged as an important element for efficient and safe operator interactions with a telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable the development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision
NASA Astrophysics Data System (ADS)
Gai, Qiyang
2018-01-01
Stereo matching is one of the key steps in 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper combines the epipolar constraint with an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm then optimizes the stereo matching feature search function within that reduced range. Through the establishment of a stereo matching optimization analysis model for the ant colony algorithm, a globally optimized solution of stereo matching in binocular-vision-based 3D reconstruction is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
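The epipolar constraint exploited above reduces stereo matching from a 2-D to a 1-D search: for a rectified pair, a point's match lies on the same image row. A minimal sum-of-absolute-differences sketch of that constrained search (the ant colony optimization layer is omitted; the window size and disparity range are illustrative):

```python
import numpy as np

def match_along_scanline(left, right, row, x, win=3, max_disp=16):
    """Find the disparity of pixel (row, x) of the left image by
    comparing a small window against candidate positions on the SAME
    row of the right image (the epipolar constraint for a rectified
    pair). Returns the disparity minimizing the sum of absolute
    differences (SAD)."""
    h = win // 2
    patch = left[row - h:row + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        xr = x - d
        if xr - h < 0:  # candidate window would leave the image
            break
        cand = right[row - h:row + h + 1, xr - h:xr + h + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic rectified pair: the left view is the right view shifted by
# 4 pixels, so the true disparity (away from the wrap-around) is 4.
rng = np.random.default_rng(0)
right = rng.random((20, 40))
left = np.roll(right, 4, axis=1)
d = match_along_scanline(left, right, row=10, x=20)
```

An optimizer such as an ant colony algorithm replaces this exhaustive 1-D scan with a guided search over the same constrained candidate set.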
Color universal design: analysis of color category dependency on color vision type (4)
NASA Astrophysics Data System (ADS)
Ikeda, Tomohiro; Ichihara, Yasuyo G.; Kojima, Natsuki; Tanaka, Hisaya; Ito, Kei
2013-02-01
This report is a follow-up to SPIE-IS&T / Vol. 7528 752805-1-8, SPIE-IS&T / Vol. 7866 78660J-1-8, and SPIE-IS&T / Vol. 8292 829206-1-8. Colors are used to communicate information in many situations, not just in design and apparel. However, visual information conveyed only by color may be perceived differently by individuals with different color vision types. Human color vision is non-uniform, and the variation in most cases is genetically linked to the L-cones and M-cones; therefore, color appearance is not the same for all color vision types. Color Universal Design is an easy-to-understand system created to convey color-coded information accurately to most people, taking color vision types into consideration. In the present research, we studied the trichromat (C-type), protan (P-type), and deutan (D-type) forms of color vision. We here report the results of two experiments. The first was a validation of the confusion colors using a color chart on the CIELAB uniform color space. We made an experimental color chart for this experiment (622 color cells in total, with a color difference of 2.5 between cells), and the subjects had P-type or D-type color vision. From the data we were able to determine "the limits with high probability of confusion" and "the limits with possible confusion" around various basing points. The direction of the former matched the theoretical confusion locus, but the range did not extend across the entire a* range; the latter formed a belt-like zone above and below the theoretical confusion locus. In this way we re-analyzed part of the theoretical confusion locus suggested by Pitt and Judd. The second was an experiment on color classification by subjects with C-type, P-type, or D-type color vision. The color caps of the 100 Hue Test were classified into seven categories for each color vision type. The common and differing points of color sensation were compared across color vision types, and we were able to find a group of color caps that people with C-, P-, and D-type vision could all recognize as distinguishable color categories. The result could be used as the basis of a color scheme for future Color Universal Design.
Vision Screening for Children 36 to <72 Months: Recommended Practices
Cotter, Susan A.; Cyert, Lynn A.; Miller, Joseph M.; Quinn, Graham E.
2015-01-01
ABSTRACT Purpose This article provides recommendations for screening children aged 36 to younger than 72 months for eye and visual system disorders. The recommendations were developed by the National Expert Panel to the National Center for Children’s Vision and Eye Health, sponsored by Prevent Blindness, and funded by the Maternal and Child Health Bureau of the Health Resources and Services Administration, United States Department of Health and Human Services. The recommendations describe both best and acceptable practice standards. Targeted vision disorders for screening are primarily amblyopia, strabismus, significant refractive error, and associated risk factors. The recommended screening tests are intended for use by lay screeners, nurses, and other personnel who screen children in educational, community, public health, or primary health care settings. Characteristics of children who should be examined by an optometrist or ophthalmologist rather than undergo vision screening are also described. Results There are two current best practice vision screening methods for children aged 36 to younger than 72 months: (1) monocular visual acuity testing using single HOTV letters or LEA Symbols surrounded by crowding bars at a 5-ft (1.5 m) test distance, with the child responding by either matching or naming, or (2) instrument-based testing using the Retinomax autorefractor or the SureSight Vision Screener with the Vision in Preschoolers Study data software installed (version 2.24 or 2.25 set to minus cylinder form). Using the Plusoptix Photoscreener is acceptable practice, as is adding stereoacuity testing using the PASS (Preschool Assessment of Stereopsis with a Smile) stereotest as a supplemental procedure to visual acuity testing or autorefraction. Conclusions The National Expert Panel recommends that children aged 36 to younger than 72 months be screened annually (best practice) or at least once (accepted minimum standard) using one of the best practice approaches. 
Technological updates will be maintained at http://nationalcenter.preventblindness.org. PMID:25562476
Human Factors Engineering as a System in the Vision for Exploration
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Smith, Danielle; Holden, Kritina
2006-01-01
In order to accomplish NASA's Vision for Exploration, while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering processes. That project identified the key phases of the current NASA design lifecycle, and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps).
For example, medical operations scenarios have been generated for lunar habitation in order to identify HSI requirements for the lunar communications architecture. Throughout these ConOps exercises, HFE personnel are testing various tools and methodologies that have been identified in the literature. A key part of the effort is the identification of optimal processes, methods, and tools for these early development phase activities, such as ConOps, requirements development, and early conceptual design. An overview of the activities completed thus far, as well as the tools and methods investigated will be presented.
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is a very important image preprocessing technology, especially when an image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main goals of image enhancement is to produce an image with high dynamic range and high contrast for human perception or interpretation, so it is natural to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. That knowledge holds that humans' perception of, and response to, an intensity fluctuation δu of a visual signal is weighted by the background stimulus u, rather than being uniform; three main laws, Weber's law, the Weber-Fechner law, and Stevens's law, describe this phenomenon in psychology and psychophysics. This paper integrates these three laws of human vision psychology and perception into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were performed on high-bit-depth star images captured in night scenes and on infrared images, both static images and video streams. For the jitter problem in video streams, the algorithm uses the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel-value mapping depends not only on the current pixel but also on the pixels in a window, usually 3×3, surrounding it. The results of these improved algorithms are evaluated by entropy analysis and visual perception analysis. The experimental results showed that the improved APE algorithms improved image quality: the target and the surrounding auxiliary targets could be identified easily, and noise was not amplified much. For low-quality images, the improved algorithms increase the information entropy and improve the aesthetic quality of the image and video stream, while for high-quality images they do not degrade image quality.
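Plateau equalization itself is straightforward: clip the image histogram at a plateau value before building the equalization mapping, so that dominant gray levels cannot monopolize the output range. A minimal sketch, where the adaptive plateau is taken as the mean of the non-empty histogram bins (a common heuristic, not necessarily the authors' exact rule, and without the temporal smoothing and 3×3 windowing described above):

```python
import numpy as np

def plateau_equalize(img, plateau=None):
    """Histogram equalization with the histogram clipped at a plateau
    value, limiting over-enhancement of dominant gray levels (the core
    idea of APE). The adaptive plateau here is the mean of the
    non-empty histogram bins, one common heuristic."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    if plateau is None:
        plateau = hist[hist > 0].mean()   # adaptive plateau value
    clipped = np.minimum(hist, plateau)   # clip the histogram
    cdf = np.cumsum(clipped)
    cdf /= cdf[-1]                        # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                       # remap every pixel

# Tiny 8-bit example (values chosen only for illustration).
img = np.array([[0, 0, 128, 255]], dtype=np.uint8)
out = plateau_equalize(img)
```

For video, the paper's jitter correction would amount to blending each frame's plateau value with the previous frame's before clipping.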
Action spectrum for melatonin regulation in humans: evidence for a novel circadian photoreceptor
NASA Technical Reports Server (NTRS)
Brainard, G. C.; Hanifin, J. P.; Greeson, J. M.; Byrne, B.; Glickman, G.; Gerner, E.; Rollag, M. D.
2001-01-01
The photopigment in the human eye that transduces light for circadian and neuroendocrine regulation is unknown. The aim of this study was to establish an action spectrum for light-induced melatonin suppression that could help elucidate the ocular photoreceptor system for regulating the human pineal gland. Subjects (37 females, 35 males, mean age of 24.5 +/- 0.3 years) were healthy and had normal color vision. Full-field, monochromatic light exposures took place between 2:00 and 3:30 A.M. while subjects' pupils were dilated. Blood samples collected before and after light exposures were quantified for melatonin. Each subject was tested with at least seven different irradiances of one wavelength with a minimum of 1 week between each nighttime exposure. Nighttime melatonin suppression tests (n = 627) were completed with wavelengths from 420 to 600 nm. The data were fit to eight univariant, sigmoidal fluence-response curves (R(2) = 0.81-0.95). The action spectrum constructed from these data fit an opsin template (R(2) = 0.91), which identifies 446-477 nm as the most potent wavelength region providing circadian input for regulating melatonin secretion. The results suggest that, in humans, a single photopigment may be primarily responsible for melatonin suppression, and its peak absorbance appears to be distinct from that of rod and cone cell photopigments for vision. The data also suggest that this new photopigment is retinaldehyde based. These findings suggest that there is a novel opsin photopigment in the human eye that mediates circadian photoreception.
Machine vision based teleoperation aid
NASA Technical Reports Server (NTRS)
Hoff, William A.; Gatrell, Lance B.; Spofford, John R.
1991-01-01
When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with than without this aid.
29 CFR 1630.10 - Qualification standards, tests, and other selection criteria.
Code of Federal Regulations, 2011 CFR
2011-07-01
... business necessity. (b) Qualification standards and tests related to uncorrected vision. Notwithstanding..., or other selection criteria based on an individual's uncorrected vision unless the standard, test, or... application of a qualification standard, test, or other criterion based on uncorrected vision need not be a...
Chiang, Peggy Pei-Chia; Xie, Jing; Keeffe, Jill Elizabeth
2011-04-25
To identify the critical success factors (CSF) associated with coverage of low vision services. Data were collected from a survey distributed to Vision 2020 contacts, government, and non-government organizations (NGOs) in 195 countries. The Classification and Regression Tree Analysis (CART) was used to identify the critical success factors of low vision service coverage. Independent variables were sourced from the survey: policies, epidemiology, provision of services, equipment and infrastructure, barriers to services, human resources, and monitoring and evaluation. Socioeconomic and demographic independent variables: health expenditure, population statistics, development status, and human resources in general, were sourced from the World Health Organization (WHO), World Bank, and the United Nations (UN). The findings identified that having >50% of children obtaining devices when prescribed (χ² = 44; P < 0.000), multidisciplinary care (χ² = 14.54; P = 0.002), >3 rehabilitation workers per 10 million of population (χ² = 4.50; P = 0.034), higher percentage of population urbanized (χ² = 14.54; P = 0.002), a level of private investment (χ² = 14.55; P = 0.015), and being fully funded by government (χ² = 6.02; P = 0.014), are critical success factors associated with coverage of low vision services. This study identified the most important predictors for countries with better low vision coverage. The CART is a useful and suitable methodology in survey research and is a novel way to simplify a complex global public health issue in eye care.
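The CART method used in this study grows a tree by repeatedly finding the single predictor and threshold that most reduce class impurity. A self-contained sketch of that splitting step on invented data (the feature values and labels below are purely illustrative, not from the survey):

```python
import numpy as np

def gini(labels):
    """Gini impurity of a binary 0/1 label array."""
    if len(labels) == 0:
        return 0.0
    p = labels.mean()
    return 2 * p * (1 - p)

def best_split(X, y):
    """Exhaustively search the (feature, threshold) pair giving the
    largest impurity reduction: the step CART repeats recursively
    to grow a classification tree."""
    base, n = gini(y), len(y)
    best = (None, None, 0.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            gain = base - (len(left) / n) * gini(left) \
                        - (len(right) / n) * gini(right)
            if gain > best[2]:
                best = (j, t, gain)
    return best

# Invented data: rows are countries, columns are two hypothetical
# predictors (fraction of children receiving prescribed devices,
# rehabilitation workers per 10 million population); y = 1 marks
# countries with good low vision service coverage.
X = np.array([[0.3, 1.0],
              [0.5, 2.0],
              [0.6, 5.0],
              [0.8, 7.0]])
y = np.array([0, 0, 1, 1])
j, t, gain = best_split(X, y)
```

Each reported critical success factor corresponds to one such split near the top of the fitted tree.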
From spectral information to animal colour vision: experiments and concepts.
Kelber, Almut; Osorio, Daniel
2010-06-07
Many animals use the spectral distribution of light to guide behaviour, but whether they have colour vision has been debated for over a century. Our strong subjective experience of colour and the fact that human vision is the paradigm for colour science inevitably raises the question of how we compare with other species. This article outlines four grades of 'colour vision' that can be related to the behavioural uses of spectral information, and perhaps to the underlying mechanisms. In the first, even without an (image-forming) eye, simple organisms can compare photoreceptor signals to locate a desired light environment. At the next grade, chromatic mechanisms along with spatial vision guide innate preferences for objects such as food or mates; this is sometimes described as wavelength-specific behaviour. Here, we compare the capabilities of di- and trichromatic vision, and ask why some animals have more than three spectral types of receptors. Behaviours guided by innate preferences are then distinguished from a grade that allows learning, in part because the ability to learn an arbitrary colour is evidence for a neural representation of colour. The fourth grade concerns colour appearance rather than colour difference: for instance, the distinction between hue and saturation, and colour categorization. These higher-level phenomena are essential to human colour perception but poorly known in animals, and we suggest how they can be studied. Finally, we observe that awareness of colour and colour qualia cannot be easily tested in animals.
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.
Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish
2015-01-01
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
Human vision is attuned to the diffuseness of natural light
Morgenstern, Yaniv; Geisler, Wilson S.; Murray, Richard F.
2014-01-01
All images are highly ambiguous, and to perceive 3-D scenes, the human visual system relies on assumptions about what lighting conditions are most probable. Here we show that human observers' assumptions about lighting diffuseness are well matched to the diffuseness of lighting in real-world scenes. We use a novel multidirectional photometer to measure lighting in hundreds of environments, and we find that the diffuseness of natural lighting falls in the same range as previous psychophysical estimates of the visual system's assumptions about diffuseness. We also find that natural lighting is typically directional enough to override human observers' assumption that light comes from above. Furthermore, we find that, although human performance on some tasks is worse in diffuse light, this can be largely accounted for by intrinsic task difficulty. These findings suggest that human vision is attuned to the diffuseness levels of natural lighting conditions. PMID:25139864
Grounding Our Vision: Brain Research and Strategic Vision
ERIC Educational Resources Information Center
Walker, Mike
2011-01-01
While recognizing the value of "vision," it could be argued that vision alone--at least in schools--is not enough to rally the financial and emotional support required to translate an idea into reality. A compelling vision needs to reflect substantive, research-based knowledge if it is to spark the kind of strategic thinking and insight…
Playing to Your Strengths: Appreciative Inquiry in the Visioning Process
ERIC Educational Resources Information Center
Fifolt, Matthew; Stowe, Angela M.
2011-01-01
"Appreciative Inquiry" (AI) is a structured approach to visioning focused on reflection, introspection, and collaboration. Rooted in organizational behavior theory, AI was introduced in the early 1980s as a life-centric approach to human systems (Watkins and Mohr 2001). Since then, AI has been used widely within the business community;…
Prophet of the Storm: Richard Wright and the Radical Tradition
ERIC Educational Resources Information Center
Campbell, Finley C.
1977-01-01
Asserts that Wright's importance emerges out of his literary identification with a specific sociological vision of the life and destiny of black people in particular and human kind in general. Examines this vision interpretatively as a philosophy, as a historical trend in American culture, and as a personal and ideological commitment forming the…
76 FR 37690 - CooperVision, Inc.; Filing of Color Additive Petitions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-28
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 73 [Docket Nos. FDA-2011-C-0344 and FDA-2011-C-0463] CooperVision, Inc.; Filing of Color Additive Petitions AGENCY... additive regulations be amended to provide for the safe use of 1,4-bis[4-(2-methacryloxyethyl)phenylamino...
76 FR 49707 - CooperVision, Inc.; Filing of Color Additive Petitions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-11
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 73 [Docket Nos. FDA-2011-C-0344 and FDA-2011-C-0463] CooperVision, Inc.; Filing of Color Additive Petitions Correction In proposed rule document 2011-16089 appearing on page 37690 in the issue of Tuesday, June 28, 2011...
Two-Eyed Seeing into Environmental Education: Revealing Its "Natural" Readiness to Indigenize
ERIC Educational Resources Information Center
McKeon, Margaret
2012-01-01
Recent visions for environmental education now include a foundational acknowledgement that the well-being of humans and the environment are inseparable. This vision of environmental education, with a focus on interconnectedness as well as concepts of transformation, holism, caring, and responsibility, rooted in experiences of nature, community,…
ERIC Educational Resources Information Center
Gidley, Jennifer M.
2007-01-01
Rudolf Steiner and Ken Wilber claim that human consciousness is evolving beyond the "formal", abstract, intellectual mode toward a "post-formal", integral mode. Wilber calls this "vision-logic" and Steiner calls it "consciousness/spiritual soul". Both point to the emergence of more complex, dialectical,…
78 FR 66027 - Center for Scientific Review; Amended Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-04
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review; Amended Notice of Meeting Notice is hereby given of a change in the meeting of the Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section, October 03, 2013, 08:00 a.m. to October 04...
Machine vision for digital microfluidics
NASA Astrophysics Data System (ADS)
Shin, Yong-Jun; Lee, Jeong-Bong
2010-01-01
Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.
Cocchioni, M; Scuri, S; Morichetti, L; Petrelli, F; Grappasonni, I
2006-01-01
The article underlines the fundamental importance of protecting and promoting environmental quality for human health. The evolution of fluvial monitoring techniques is traced from chemical and bacteriological analysis up to the Fluvial Functionality Index (I.F.F.). This evolution is important because it shows a methodological and cultural maturation that has moved from an anthropocentric vision to an ecocentric one. The target of this ecological vision is the re-establishment of the ecological functionality of rivers, abandoning the consumerist view of water as merely a usable resource. The importance of correct monitoring of a river is confirmed, even though the preventive approach remains the priority.
Bioelectronic retinal prosthesis
NASA Astrophysics Data System (ADS)
Weiland, James D.
2016-05-01
Retinal prostheses have been translated to clinical use over the past two decades. Currently, two devices have regulatory approval for the treatment of retinitis pigmentosa and one device is in clinical trials for treatment of age-related macular degeneration. These devices provide partial sight restoration, and patients use this improved vision in their everyday lives to navigate and to detect large objects. However, significant vision restoration will require both better technology and improved understanding of the interaction between electrical stimulation and the retina. In particular, current retinal prostheses do not provide peripheral vision due to technical and surgical limitations, thus limiting the effectiveness of the treatment. This paper reviews recent results from human implant patients and presents technical approaches for peripheral vision.
ERIC Educational Resources Information Center
Stürmer, Kathleen; Könings, Karen D.; Seidel, Tina
2015-01-01
Preservice teachers' professional vision is an important indicator of their initial acquisition of integrated knowledge structures within university-based teacher education. To date, empirical research investigating which factors contribute to explaining preservice teachers' professional vision is scarce. This study aims to determine which factors…
Christy, Beula; Keeffe, Jill E; Nirmalan, Praveen K; Rao, Gullapalli N
2010-08-01
To design a randomized controlled trial (RCT) to compare the effectiveness of four different strategies for delivering low vision rehabilitation services. The four arms of the RCT comprised center-based rehabilitation, home-based rehabilitation, a mix of center-based and home-based rehabilitation, and center-based rehabilitation with home-based non-interventional supplementary visits by rehabilitation workers. Outcomes were assessed 9 months after baseline and included changes in adaptation to age-related vision loss, quality of life, impact of vision impairment, and effectiveness of low vision rehabilitation training. The socio-demographic and vision characteristics of the sample in each of the 4 arms were compared to ensure that outcomes were not associated with differences between the groups. Four hundred and thirty-six individuals were enrolled in the study; 393 individuals completed it. One-fifth of participants were children aged 8 to 16 years. At baseline, socio-demographic and clinical characteristics were similar between individuals in the four arms of the trial. Socio-demographic and clinical characteristics did not differ significantly, except for age, between the 393 individuals who completed the trial and the 43 individuals who dropped out of the study. Twenty-six (60.46%) of the forty-three dropouts were from the center-based arm of the trial. Information from this trial has the potential to shape policy and practice pertaining to low vision rehabilitation services.
NASA Astrophysics Data System (ADS)
Koenderink, Jan
2011-03-01
The egg-rolling behavior of the graylag goose is an often-quoted example of a fixed-action pattern. The bird will even attempt to roll a brick back to its nest! Despite excellent visual acuity, it apparently "takes a brick for an egg." Evolution optimizes utility, not veridicality. Yet textbooks take it for a fact that human vision evolved so as to approach veridical perception. How do humans manage to dodge the laws of evolution? I will show that they don't, but that human vision is an idiosyncratic user interface. By way of an example I consider the case of pictorial perception. Gleaning information from still images is an important human ability and is likely to remain so for the foreseeable future. I will discuss a number of instances of extreme non-veridicality and huge inter-observer variability. Despite their importance in applications (information dissemination, personnel selection, ...), such huge effects have remained undocumented in the literature, although they can be traced to artistic conventions. The reason appears to be that conventional psychophysics, by design, fails to address the qualitative, that is, the meaningful, aspects of visual awareness, whereas this is the very target of the visual arts.
Pons, Carmen; Mazade, Reece; Jin, Jianzhong; Dul, Mitchell W; Zaidi, Qasim; Alonso, Jose-Manuel
2017-12-01
Artists and astronomers noticed centuries ago that humans perceive dark features in an image differently from light ones; however, the neuronal mechanisms underlying these dark/light asymmetries remained unknown. Based on computational modeling of neuronal responses, we have previously proposed that such perceptual dark/light asymmetries originate from a luminance/response saturation within the ON retinal pathway. Consistent with this prediction, here we show that stimulus conditions that increase ON luminance/response saturation (e.g., dark backgrounds) or its effect on light stimuli (e.g., optical blur) impair the perceptual discrimination and salience of light targets more than dark targets in human vision. We also show that, in cat visual cortex, the magnitude of the ON luminance/response saturation remains relatively constant under a wide range of luminance conditions that are common indoors, and only shifts away from the lowest luminance contrasts under low mesopic light. Finally, we show that the ON luminance/response saturation affects visual salience mostly when the high spatial frequencies of the image are reduced by poor illumination or optical blur. Because both low luminance and optical blur are risk factors in myopia, our results suggest a possible neuronal mechanism linking myopia progression with the function of the ON visual pathway.
Pons, Carmen; Mazade, Reece; Jin, Jianzhong; Dul, Mitchell W.; Zaidi, Qasim; Alonso, Jose-Manuel
2017-01-01
Artists and astronomers noticed centuries ago that humans perceive dark features in an image differently from light ones; however, the neuronal mechanisms underlying these dark/light asymmetries remained unknown. Based on computational modeling of neuronal responses, we have previously proposed that such perceptual dark/light asymmetries originate from a luminance/response saturation within the ON retinal pathway. Consistent with this prediction, here we show that stimulus conditions that increase ON luminance/response saturation (e.g., dark backgrounds) or its effect on light stimuli (e.g., optical blur) impair the perceptual discrimination and salience of light targets more than dark targets in human vision. We also show that, in cat visual cortex, the magnitude of the ON luminance/response saturation remains relatively constant under a wide range of luminance conditions that are common indoors, and only shifts away from the lowest luminance contrasts under low mesopic light. Finally, we show that the ON luminance/response saturation affects visual salience mostly when the high spatial frequencies of the image are reduced by poor illumination or optical blur. Because both low luminance and optical blur are risk factors in myopia, our results suggest a possible neuronal mechanism linking myopia progression with the function of the ON visual pathway. PMID:29196762
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm built on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system that achieves advanced recognition performance. Our system is trained end to end to map raw input images to directions in a supervised mode. The images in the data sets were collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment tracks a desired path composed of straight and curved lines, while the goal of the obstacle avoidance experiment is to avoid obstacles indoors. We obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During actual tests, the robot follows the runway centerline outdoors and avoids obstacles in the room accurately. These results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
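The noise augmentation described in the abstract is standard practice; as an illustration (not the authors' code), a minimal sketch of Gaussian and salt-and-pepper augmentation for 8-bit images, with parameter values chosen arbitrarily:

```python
import numpy as np

def augment_with_noise(image, gaussian_sigma=10.0, sp_fraction=0.02, rng=None):
    """Return two noisy copies of an 8-bit image: one with additive
    Gaussian noise, one with salt-and-pepper noise."""
    rng = rng or np.random.default_rng()
    img = image.astype(np.float32)

    # Additive Gaussian noise, clipped back to the valid 8-bit range.
    gaussian = np.clip(img + rng.normal(0.0, gaussian_sigma, img.shape), 0, 255)

    # Salt-and-pepper: flip a random fraction of pixels to 0 or 255.
    sp = image.copy()
    mask = rng.random(image.shape[:2]) < sp_fraction
    salt = rng.random(image.shape[:2]) < 0.5
    sp[mask & salt] = 255
    sp[mask & ~salt] = 0
    return gaussian.astype(np.uint8), sp
```

Each training image would then yield clean and noisy variants, enlarging the data set without new collection.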
Adaptive control for eye-gaze input system
NASA Astrophysics Data System (ADS)
Zhao, Qijie; Tu, Dawei; Yin, Hairong
2004-01-01
The characteristics of vision-based human-computer interaction systems are analyzed, and their current practical applications and limiting factors are discussed. Information processing methods are put forward. To make communication flexible and spontaneous, algorithms for adaptive control of the user's head movement were designed, and event-based methods and an object-oriented computer language were used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms can meet the needs of the HCI.
Feasibility of a clinical trial of vision therapy for treatment of amblyopia.
Lyon, Don W; Hopkins, Kristine; Chu, Raymond H; Tamkins, Susanna M; Cotter, Susan A; Melia, B Michele; Holmes, Jonathan M; Repka, Michael X; Wheeler, David T; Sala, Nicholas A; Dumas, Janette; Silbert, David I
2013-05-01
We conducted a pilot randomized clinical trial of office-based active vision therapy for the treatment of childhood amblyopia to determine the feasibility of conducting a full-scale randomized clinical trial. A training and certification program and manual of procedures were developed to certify therapists to administer a standardized vision therapy program in ophthalmology and optometry offices consisting of weekly visits for 16 weeks. Nineteen children, aged 7 to less than 13 years, with amblyopia (20/40-20/100) were randomly assigned to receive either 2 hours of daily patching with active vision therapy or 2 hours of daily patching with placebo vision therapy. Therapists in diverse practice settings were successfully trained and certified to perform standardized vision therapy in strict adherence with protocol. Subjects completed 85% of required weekly in-office vision therapy visits. Eligibility criteria based on age, visual acuity, and stereoacuity, designed to identify children able to complete a standardized vision therapy program and judged likely to benefit from this treatment, led to a high proportion of screened subjects being judged ineligible, resulting in insufficient recruitment. There were difficulties in retrieving adherence data for the computerized home therapy procedures. This study demonstrated that a 16-week treatment trial of vision therapy was feasible with respect to maintaining protocol adherence; however, recruitment under the proposed eligibility criteria, necessitated by the standardized approach to vision therapy, was not successful. A randomized clinical trial of in-office vision therapy for the treatment of amblyopia would require broadening of the eligibility criteria and improved methods to gather objective data regarding the home therapy. A more flexible approach that customizes vision therapy based on subject age, visual acuity, and stereopsis might be required to allow enrollment of a broader group of subjects.
Feasibility of a Clinical Trial of Vision Therapy for Treatment of Amblyopia
Lyon, Don W.; Hopkins, Kristine; Chu, Raymond H.; Tamkins, Susanna M.; Cotter, Susan A.; Melia, B. Michele; Holmes, Jonathan M.; Repka, Michael X.; Wheeler, David T.; Sala, Nicholas A.; Dumas, Janette; Silbert, David I.
2013-01-01
Purpose We conducted a pilot randomized clinical trial of office-based active vision therapy for the treatment of childhood amblyopia to determine the feasibility of conducting a full-scale randomized clinical trial. Methods A training and certification program and manual of procedures were developed to certify therapists to administer a standardized vision therapy program in ophthalmology and optometry offices consisting of weekly visits for 16 weeks. Nineteen children, 7 to less than 13 years of age, with amblyopia (20/40–20/100) were randomly assigned to receive either 2 hours of daily patching with active vision therapy or 2 hours of daily patching with placebo vision therapy. Results Therapists in diverse practice settings were successfully trained and certified to perform standardized vision therapy in strict adherence with protocol. Subjects completed 85% of required weekly in-office vision therapy visits. Eligibility criteria based on age, visual acuity, and stereoacuity, designed to identify children able to complete a standardized vision therapy program and judged likely to benefit from this treatment, led to a high proportion of screened subjects being judged ineligible, resulting in insufficient recruitment. There were difficulties in retrieving adherence data for the computerized home therapy procedures. Conclusions This study demonstrated that a 16-week treatment trial of vision therapy was feasible with respect to maintaining protocol adherence; however, recruitment under the proposed eligibility criteria, necessitated by the standardized approach to vision therapy, was not successful. A randomized clinical trial of in-office vision therapy for the treatment of amblyopia would require broadening of the eligibility criteria and improved methods to gather objective data regarding the home therapy. 
A more flexible approach that customizes vision therapy based on subject age, visual acuity, and stereopsis might be required to allow enrollment of a broader group of subjects. PMID:23563444
Micro-Inspector Spacecraft for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Mueller, Juergen; Alkalai, Leon; Lewis, Carol
2005-01-01
NASA is seeking to embark on a new set of human and robotic exploration missions back to the Moon, to Mars, and destinations beyond. Key strategic technical challenges will need to be addressed to realize this new vision for space exploration, including improvements in safety and reliability to improve robustness of space operations. Under sponsorship by NASA's Exploration Systems Mission, the Jet Propulsion Laboratory (JPL), together with its partners in government (NASA Johnson Space Center) and industry (Boeing, Vacco Industries, Ashwin-Ushas Inc.) is developing an ultra-low mass (<3.0 kg) free-flying micro-inspector spacecraft in an effort to enhance safety and reduce risk in future human and exploration missions. The micro-inspector will provide remote vehicle inspections to ensure safety and reliability, or to provide monitoring of in-space assembly. The micro-inspector spacecraft represents an inherently modular system addition that can improve safety and support multiple host vehicles in multiple applications. On human missions, it may help extend the reach of human explorers, decreasing human EVA time to reduce mission cost and risk. The micro-inspector development is the continuation of an effort begun under NASA's Office of Aerospace Technology Enabling Concepts and Technology (ECT) program. The micro-inspector uses miniaturized celestial sensors; relies on a combination of solar power and batteries (allowing for unlimited operation in the sun and up to 4 hours in the shade); utilizes a low-pressure, low-leakage liquid butane propellant system for added safety; and includes multi-functional structure for high system-level integration and miniaturization. Versions of this system to be designed and developed under the H&RT program will include additional capabilities for on-board, vision-based navigation, spacecraft inspection, and collision avoidance, and will be demonstrated in a ground-based, space-related environment. 
These features make the micro-inspector design unique in its ability to serve crewed as well as robotic spacecraft, well beyond Earth-orbit and into arenas such as robotic missions, where human teleoperation capability is not locally available.
Do bees like Van Gogh's Sunflowers?
NASA Astrophysics Data System (ADS)
Chittka, Lars; Walker, Julian
2006-06-01
Flower colours have evolved over 100 million years to address the colour vision of their bee pollinators. In a much more rapid process, cultural (and horticultural) evolution has produced images of flowers that stimulate aesthetic responses in human observers. Colour vision and the analysis of visual patterns differ in several respects between humans and bees. Here, a behavioural ecologist and an installation artist present bumblebees with reproductions of paintings highly appreciated in Western society, such as Van Gogh's Sunflowers. We use this unconventional approach in the hope of raising awareness of between-species differences in visual perception, and of provoking thought about the implications of biology in human aesthetics and the relationship between object representation and its biological connotations.
Vehicle-based vision sensors for intelligent highway systems
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1989-09-01
This paper describes a vision system, based on an ASIC (Application-Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
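The abstract does not give the triangulation details, but two-view stereo ranging reduces to the classic relation Z = f·B/d (depth from focal length, baseline, and disparity); trinocular systems apply the same geometry across camera pairs. A minimal sketch under that assumption (function name and example numbers are illustrative, not from the paper):

```python
def stereo_distance(disparity_px, focal_px, baseline_m):
    """Classic stereo range equation: Z = f * B / d.

    disparity_px: horizontal pixel offset of the same feature between views.
    focal_px:     focal length expressed in pixels.
    baseline_m:   separation of the two camera centres, in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (object in front of cameras)")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline = 0.5 m, disparity = 8 px -> 50 m to the lead vehicle
```

Note the hyperbolic sensitivity: at small disparities (distant vehicles), a one-pixel error changes the range estimate substantially, which is one motivation for adding a third camera.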
Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association
2011-03-24
based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of...up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to...required for location identification than that of other methods. This work can then be extended into a vision-SLAM implementation to subsequently
Code of Federal Regulations, 2010 CFR
2010-04-01
... benefits based on statutory blindness. (a) If your vision does not meet the definition of blindness. If you... statutory blindness has ended beginning with the earliest of the following months— (1) The month your vision... for a specified period of time in the past; (2) The month your vision based on current medical...
Code of Federal Regulations, 2011 CFR
2011-04-01
... benefits based on statutory blindness. (a) If your vision does not meet the definition of blindness. If you... statutory blindness has ended beginning with the earliest of the following months— (1) The month your vision... for a specified period of time in the past; (2) The month your vision based on current medical...
CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System
Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman
1991-01-01
Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could, or will, become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented for the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field-programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is achieved using a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
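The paper's filter model is a polyhedron of HSV thresholds realized in hardware; as a simplified software analogue, here is a single-pass, axis-aligned HSV box filter (a special case of the polyhedron; the threshold values below are illustrative, not those of the paper):

```python
import numpy as np

def skin_mask(rgb_image, h_range=(0.0, 0.14), s_range=(0.2, 0.7), v_range=(0.35, 1.0)):
    """Single-pass skin segmentation: convert RGB to HSV per pixel and keep
    pixels whose (h, s, v) falls inside an axis-aligned threshold box."""
    img = rgb_image.astype(np.float32) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)
    c = v - img.min(axis=-1)                       # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-6), 0.0)

    # Hue in [0, 1), computed piecewise from the dominant channel.
    h = np.zeros_like(v)
    nz = c > 0
    rmax = nz & (v == r)
    gmax = nz & (v == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    h[rmax] = ((g - b)[rmax] / c[rmax]) % 6
    h[gmax] = (b - r)[gmax] / c[gmax] + 2
    h[bmax] = (r - g)[bmax] / c[bmax] + 4
    h /= 6.0

    return ((h >= h_range[0]) & (h <= h_range[1]) &
            (s >= s_range[0]) & (s <= s_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))
```

The paper's adaptive filter management unit would correspond here to updating the three threshold ranges as scene conditions change.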
Converging technologies in higher education: paradigm for the "new" liberal arts?
Balmer, Robert T
2006-12-01
This article discusses the historic relationship between the practical arts (technology) and the mental (liberal) arts, suggesting that Converging Technologies is a new higher education paradigm that integrates the arts, humanities, and sciences with modern technology. It explains that the paradigm really includes all fields in higher education from philosophy to art to music to modern languages and beyond. To implement a transformation of this magnitude, it is necessary to understand the psychology of change in academia. Union College in Schenectady, New York, implemented a Converging Technologies Educational Paradigm in five steps: (1) create a compelling vision, (2) communicate the vision, (3) empower the faculty, (4) create short-term successes, and (5) institutionalize the results. This case study of Union College demonstrates it is possible to build a pillar of educational excellence based on Converging Technologies.
Novel Propulsion and Power Concepts for 21st Century Aviation
NASA Technical Reports Server (NTRS)
Sehra, Arun K.
2003-01-01
Air transportation in the new millennium will require revolutionary solutions to meet public demand for improved safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st Century Aircraft is to develop propulsion systems that are intelligent, virtually inaudible (outside the airport boundaries), and have near-zero harmful emissions (CO2 and NO(x)). This vision includes intelligent engines capable of adapting to changing internal and external conditions to optimally accomplish the mission with minimal human intervention. Distributed vectored propulsion will replace two to four wing-mounted or fuselage-mounted engines with a large number of small, mini, or micro engines. And electric-drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. Such a system will completely eliminate harmful emissions.
Toward detection of marine vehicles on horizon from buoy camera
NASA Astrophysics Data System (ADS)
Fefilatyev, Sergiy; Goldgof, Dmitry B.; Langebrake, Lawrence
2007-10-01
This paper presents a new technique for the automatic detection of marine vehicles in the open sea from a buoy camera system using a computer vision approach. Users of such a system include border guards, the military, port safety and flow management, and sanctuary protection personnel. The system is intended to work autonomously, taking images of the surrounding ocean surface and analyzing them for the presence of marine vehicles. The goal of the system is to detect an approximate window around the ship and prepare the small image for transmission and human evaluation. The proposed computer-vision-based algorithm combines a horizon detection method with edge detection and post-processing. A dataset of 100 images is used to evaluate the performance of the proposed technique. We discuss promising ship detection results and suggest the improvements necessary for achieving better performance.
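The abstract does not spell out the horizon-detection and post-processing steps, but the general idea can be sketched on a toy grayscale image: estimate the horizon from the sharpest row-brightness transition, then flag intensity outliers in a thin band around it as ship candidates. Everything here (function names, band width, the 3-sigma threshold) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

def find_horizon_row(gray):
    """Estimate the horizon as the row where mean brightness jumps most
    sharply from one row to the next (sky/sea transition)."""
    row_means = gray.mean(axis=1).astype(np.float64)
    diffs = np.abs(np.diff(row_means))
    return int(np.argmax(diffs)) + 1  # row index just below the jump

def ship_candidate_columns(gray, horizon_row, band=3, k=3.0):
    """Columns in a thin band around the horizon whose brightness deviates
    more than k standard deviations from the band median -> candidates."""
    lo = max(horizon_row - band, 0)
    hi = min(horizon_row + band + 1, gray.shape[0])
    band_cols = gray[lo:hi].mean(axis=0)
    med, std = np.median(band_cols), band_cols.std() + 1e-9
    return np.where(np.abs(band_cols - med) > k * std)[0]
```

A real system would replace the outlier test with edge detection and morphological post-processing, and fit a tilted horizon line rather than a single row, but the pipeline shape is the same.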
Human Neurological Development: Past, Present and Future
NASA Technical Reports Server (NTRS)
Pelligra, R. (Editor)
1978-01-01
Neurological development is considered the major human potential. Vision, vestibular function, intelligence, and nutrition are discussed, as well as the treatment of neurological dysfunctions, coma, and convulsive seizures.
Thresholds and noise limitations of colour vision in dim light
Yovanovich, Carola
2017-01-01
Colour discrimination is based on opponent photoreceptor interactions, and limited by receptor noise. In dim light, photon shot noise impairs colour vision, and in vertebrates, the absolute threshold of colour vision is set by dark noise in cones. Nocturnal insects (e.g. moths and nocturnal bees) and vertebrates lacking rods (geckos) have adaptations to reduce receptor noise and use chromatic vision even in very dim light. In contrast, vertebrates with duplex retinae use colour-blind rod vision when noisy cone signals become unreliable, and their transition from cone- to rod-based vision is marked by the Purkinje shift. Rod–cone interactions have not been shown to improve colour vision in dim light, but may contribute to colour vision in mesopic light intensities. Frogs and toads that have two types of rods use opponent signals from these rods to control phototaxis even at their visual threshold. However, for tasks such as prey or mate choice, their colour discrimination abilities fail at brighter light intensities, similar to other vertebrates, probably limited by the dark noise in cones. This article is part of the themed issue 'Vision in dim light'. PMID:28193810
Effects of light on brain and behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brainard, G.C.
1994-12-31
It is obvious that light entering the eye permits the sensory capacity of vision. The human species is highly dependent on visual perception of the environment and consequently, the scientific study of vision and visual mechanisms is a centuries old endeavor. Relatively new discoveries are now leading to an expanded understanding of the role of light entering the eye - in addition to supporting vision, light has various nonvisual biological effects. Over the past thirty years, animal studies have shown that environmental light is the primary stimulus for regulating circadian rhythms, seasonal cycles, and neuroendocrine responses. As with all photobiological phenomena, the wavelength, intensity, timing and duration of a light stimulus is important in determining its regulatory influence on the circadian and neuroendocrine systems. Initially, the effects of light on rhythms and hormones were observed only in sub-human species. Research over the past decade, however, has confirmed that light entering the eyes of humans is a potent stimulus for controlling physiological rhythms. The aim of this paper is to examine three specific nonvisual responses in humans which are mediated by light entering the eye: light-induced melatonin suppression, light therapy for winter depression, and enhancement of nighttime performance. This will serve as a brief introduction to the growing database which demonstrates how light stimuli can influence physiology, mood and behavior in humans. Such information greatly expands our understanding of the human eye and will ultimately change our use of light in the human environment.
Effects of light on brain and behavior
NASA Technical Reports Server (NTRS)
Brainard, George C.
1994-01-01
It is obvious that light entering the eye permits the sensory capacity of vision. The human species is highly dependent on visual perception of the environment and consequently, the scientific study of vision and visual mechanisms is a centuries old endeavor. Relatively new discoveries are now leading to an expanded understanding of the role of light entering the eye: in addition to supporting vision, light has various nonvisual biological effects. Over the past thirty years, animal studies have shown that environmental light is the primary stimulus for regulating circadian rhythms, seasonal cycles, and neuroendocrine responses. As with all photobiological phenomena, the wavelength, intensity, timing and duration of a light stimulus is important in determining its regulatory influence on the circadian and neuroendocrine systems. Initially, the effects of light on rhythms and hormones were observed only in sub-human species. Research over the past decade, however, has confirmed that light entering the eyes of humans is a potent stimulus for controlling physiological rhythms. The aim of this paper is to examine three specific nonvisual responses in humans which are mediated by light entering the eye: light-induced melatonin suppression, light therapy for winter depression, and enhancement of nighttime performance. This will serve as a brief introduction to the growing database which demonstrates how light stimuli can influence physiology, mood and behavior in humans. Such information greatly expands our understanding of the human eye and will ultimately change our use of light in the human environment.
Helmet-Mounted Displays: Sensation, Perception and Cognition Issues
2009-01-01
Inc., web site: http://www.metavr.com/technology/papers/syntheticvision.html Helmetag, A., Halbig, C., Kubbat, W., and Schmidt, R. (1999...system-of-systems." One integral system is a "head-borne vision enhancement" system (an HMD) that provides fused I2/IR sensor imagery (U.S. Army Natick...Using microwave, radar, I2, infrared (IR), and other technology-based imaging sensors, the "seeing" range of the human eye is extended into the
Low Earth Orbit Rendezvous Strategy for Lunar Missions
NASA Technical Reports Server (NTRS)
Cates, Grant R.; Cirillo, William M.; Stromgren, Chel
2006-01-01
On January 14, 2004 President George W. Bush announced a new Vision for Space Exploration calling for NASA to return humans to the moon. In 2005 NASA decided to use a Low Earth Orbit (LEO) rendezvous strategy for the lunar missions. A Discrete Event Simulation (DES) based model of this strategy was constructed. Results of the model were then used for subsequent analysis to explore the ramifications of the LEO rendezvous strategy.
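The DES model itself is not described in the abstract, but a discrete event simulation reduces to an event queue ordered by time: pop the earliest event, run its handler, and schedule any follow-on events. A minimal skeleton of that loop (the campaign events and delays below are hypothetical illustrations, not figures from the paper):

```python
import heapq

def run_des(initial_events):
    """Minimal discrete-event simulation loop. Events are
    (time, sequence, label, handler) tuples; each handler may return
    new (delay, label, handler) triples to schedule relative to now."""
    queue = list(initial_events)
    heapq.heapify(queue)
    seq = len(queue)          # tie-breaker so handlers are never compared
    log = []
    while queue:
        t, _, label, handler = heapq.heappop(queue)
        log.append((t, label))
        for delay, new_label, new_handler in handler(t):
            heapq.heappush(queue, (t + delay, seq, new_label, new_handler))
            seq += 1
    return log

# Hypothetical mini-campaign (delays in days): cargo launch, crew launch
# 30 days later, LEO rendezvous 3 days after that.
def crew(t):
    return [(3.0, "leo_rendezvous", lambda t: [])]

def cargo(t):
    return [(30.0, "crew_launch", crew)]

timeline = run_des([(0.0, 0, "cargo_launch", cargo)])
```

In a real campaign model, handlers would draw delays from distributions (weather holds, processing slips) and the simulation would be run many times to build statistics on launch-window closure.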
Physiological and ecological implications of ocean deoxygenation for vision in marine organisms.
McCormick, Lillian R; Levin, Lisa A
2017-09-13
Climate change has induced ocean deoxygenation and exacerbated eutrophication-driven hypoxia in recent decades, affecting the physiology, behaviour and ecology of marine organisms. The high oxygen demand of visual tissues and the known inhibitory effects of hypoxia on human vision raise the questions if and how ocean deoxygenation alters vision in marine organisms. This is particularly important given the rapid loss of oxygen and strong vertical gradients in oxygen concentration in many areas of the ocean. This review evaluates the potential effects of low oxygen (hypoxia) on visual function in marine animals and their implications for marine biota under current and future ocean deoxygenation based on evidence from terrestrial and a few marine organisms. Evolutionary history shows radiation of eye designs during a period of increasing ocean oxygenation. Physiological effects of hypoxia on photoreceptor function and light sensitivity, in combination with morphological changes that may occur throughout ontogeny, have the potential to alter visual behaviour and, subsequently, the ecology of marine organisms, particularly for fish, cephalopods and arthropods with 'fast' vision. Visual responses to hypoxia, including greater light requirements, offer an alternative hypothesis for observed habitat compression and shoaling vertical distributions in visual marine species subject to ocean deoxygenation, which merits further investigation. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'. © 2017 The Author(s).
NASA Utilization of the International Space Station and the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Thumm, Tracy L.; Thomas, Donald A.
2006-01-01
In response to the U.S. President's Vision for Space Exploration (January 14, 2004), NASA has revised its utilization plans for ISS to focus on (1) research on astronaut health and the development of countermeasures that will protect our crews from the space environment during long duration voyages, (2) ISS as a test bed for research and technology developments that will ensure vehicle systems and operational practices are ready for future exploration missions, (3) developing and validating operational practices and procedures for long-duration space missions. In addition, NASA will continue a small amount of fundamental research in life and microgravity sciences. There have been significant research accomplishments that are important for achieving the Exploration Vision. Some of these have been formal research payloads, while others have come from research based on the operation of the International Space Station (ISS). We will review a selection of these experiments and results, as well as outline some of the ongoing and upcoming research. The ISS represents the only microgravity opportunity to perform on-orbit long-duration studies of human health and performance and technologies relevant for future long-duration missions planned during the next 25 years. Even as NASA focuses on developing the Orion spacecraft and return to the moon (2015-2020), research on and operation of the ISS is fundamental to the success of NASA's Exploration Vision.
A pseudoisochromatic test of color vision for human infants.
Mercer, Michele E; Drodge, Suzanne C; Courage, Mary L; Adams, Russell J
2014-07-01
Despite the development of experimental methods capable of measuring early human color vision, we still lack a procedure comparable to those used to diagnose the well-identified congenital and acquired color vision anomalies in older children, adults, and clinical patients. In this study, we modified a pseudoisochromatic test to make it more suitable for young infants. Using a forced choice preferential looking procedure, 216 3-to-23-mo-old babies were tested with pseudoisochromatic targets that fell on either a red/green or a blue/yellow dichromatic confusion axis. For comparison, 220 color-normal adults and 22 color-deficient adults were also tested. Results showed that all babies and adults passed the blue/yellow target but many of the younger infants failed the red/green target, likely due to the interaction of lingering immaturities within the visual system and the small CIE vector distance within the red/green plate. However, older (17-23 mo) infants, color-normal adults, and color-defective adults all performed according to expectation. Interestingly, performance on the red/green plate was better among female infants, well exceeding the expected rate of genetic dimorphism between genders. Overall, with some further modification, the test serves as a promising tool for detecting color vision anomalies early in human life. Copyright © 2014 Elsevier B.V. All rights reserved.
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measured object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
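The abstract compares its refinement algorithms against upsampled cross-correlation for subpixel motion extraction. The authors' modified Taylor-approximation algorithm is not reproduced here; the sketch below shows a different but standard alternative, parabolic interpolation of a correlation peak, purely to illustrate what "subpixel refinement" of an integer peak means.

```python
def subpixel_peak(corr, i):
    """Refine an integer correlation-peak index i to subpixel precision
    by fitting a parabola through the peak and its two neighbours.
    The vertex of the fitted parabola is the refined peak location."""
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:           # flat neighbourhood: no refinement possible
        return float(i)
    return i + 0.5 * (y0 - y2) / denom
```

For a symmetric peak the offset is zero; an asymmetric neighbourhood pulls the estimate toward the larger neighbour.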
Human body motion tracking based on quantum-inspired immune cloning algorithm
NASA Astrophysics Data System (ADS)
Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing
2009-10-01
Recovering an accurate 3D human body posture from a static monocular camera remains a great challenge in computer vision. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm comprises three parts. First, prior knowledge of the human body is used to detect the key joint points automatically from the human contours and the skeletons thinned from those contours. Second, owing to the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the human body. Finally, the pose estimate is recovered by optimizing the match between the 2D projections of the 3D human key joint points and the 2D detected key joint points using QICA, which recovers the movement of the human body well because the algorithm can escape local optima and approach the global optimum.
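The optimization step above matches projected 3D joints against detected 2D joints. A minimal sketch of such a reprojection-error objective is below; the pinhole model and the focal length `f` are illustrative assumptions, and the QICA optimizer that would minimize this cost is not reproduced.

```python
def reprojection_error(joints3d, joints2d, f=500.0):
    """Sum of squared pixel distances between detected 2D joints and the
    pinhole projection of candidate 3D joints (hypothetical focal f).
    A pose optimizer would search over joints3d to minimize this value."""
    err = 0.0
    for (X, Y, Z), (u, v) in zip(joints3d, joints2d):
        pu, pv = f * X / Z, f * Y / Z   # pinhole projection
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err
```

A perfectly consistent pose yields zero error; any optimizer (QICA in the paper, gradient descent elsewhere) only needs this scalar score.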
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous navigation for planetary exploration. This paper presents our work of designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment) modules. Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
NASA Astrophysics Data System (ADS)
Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang
2012-01-01
The spatial information captured by a Markov random field model of the image can be used in image segmentation to effectively suppress noise and produce more accurate results. Based on the fuzziness and clustering of pixel grayscale information, the cluster centers of the different tissues and the background in a medical image are found with the fuzzy c-means clustering method. The threshold points for multi-threshold segmentation are then found with a two-dimensional histogram method, and the image is segmented accordingly. Finally, multivariate information is fused on the basis of Dempster-Shafer evidence theory to obtain the fused segmentation. This paper combines these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result is more in line with human vision and is of vital significance for the accurate analysis and application of tissue images.
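Fuzzy c-means is named in the abstract; the sketch below is a scalar-intensity simplification of the standard FCM iteration (alternating membership and center updates), not the paper's full 2D-histogram or evidence-fusion pipeline.

```python
def fcm_1d(values, centers, m=2.0, iters=20):
    """One-dimensional fuzzy c-means on scalar intensities: alternate the
    standard membership update u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
    with the weighted center update c_i = sum u^m x / sum u^m."""
    for _ in range(iters):
        u = []
        for x in values:
            d = [abs(x - c) + 1e-12 for c in centers]   # avoid /0 at a center
            row = [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                             for j in range(len(centers)))
                   for i in range(len(centers))]
            u.append(row)
        centers = [sum(u[k][i] ** m * values[k] for k in range(len(values))) /
                   sum(u[k][i] ** m for k in range(len(values)))
                   for i in range(len(centers))]
    return centers
```

On two well-separated intensity groups the centers converge to the group means, which is what makes FCM usable for picking tissue/background cluster centers before thresholding.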
A computational visual saliency model based on statistics and machine learning.
Lin, Ru-Je; Lin, Wei-Song
2014-08-01
Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
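The abstract says the three property maps are combined "by a simple intersection operation" without specifying it. One plausible reading, shown here purely as an illustration, is a fuzzy intersection (elementwise minimum) over per-property maps; the actual operation used by the authors may differ.

```python
def intersect_maps(maps):
    """'Intersection' of per-property saliency maps as the elementwise
    minimum: a pixel scores high only if every property votes for it."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[min(m[i][j] for m in maps) for j in range(w)]
            for i in range(h)]
```

The minimum is conservative: a location strongly supported by Feature-Prior but rejected by Position-Prior ends up non-salient.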
Assessing commercial feasibility: a practical and ethical prerequisite for human clinical testing.
Kirk, Dwayne D; Robert, Jason Scott
2005-01-01
This article proposes that an assessment of commercial feasibility should be integrated as a prerequisite for human clinical testing to improve the quality and relevance of materials being investigated, as an ethical aspect of human subject protection, and as a means of improving accountability where clinical development is funded on promises of successful translational research. A commercial feasibility analysis is not currently required to justify human clinical testing, but is assumed to have been conducted by industry participants, and use of public funds for clinical trials should be defensible in the same manner. Plant-made vaccines (PMVs) are offered in this discussion as a model for evaluating the relevance of commercial feasibility before human clinical testing. PMVs have been proposed as a potential solution for global health, based on a vision of immunizing the world against many infectious diseases. Such a vision depends on translating current knowledge in plant science and immunology into a potent vaccine that can be readily manufactured and distributed to those in need. But new biologics such as PMVs may fail to be manufactured due to financial or logistical reasons--particularly for orphan diseases without sufficient revenue incentive for industry investment--regardless of the effectiveness which might be demonstrated in human clinical testing. Moreover, all potential instruments of global health depend on translational agents well beyond the lab in order to reach those in need. A model comprising five criteria for commercial feasibility is suggested for inclusion by regulators and ethics review boards as part of the review process prior to approval of human clinical testing. Use of this model may help to facilitate safe and appropriate translational research and bring more immediate benefits to those in need.
The New Face in Leadership: Emotional Intelligence
ERIC Educational Resources Information Center
Greenockle, Karen M.
2010-01-01
In the new millennium we are witnessing shifts in the global economy, competition, and human resource needs, requiring a leader who must not only have a vision that inspires others but also be able to execute it successfully to ensure the vision becomes a reality (Dunning, 2000). This emphasis on execution requires a reliance on teamwork and…
B. F. Skinner's Utopian Vision: Behind and beyond "Walden Two"
ERIC Educational Resources Information Center
Altus, Deborah E.; Morris, Edward K.
2009-01-01
This paper addresses B. F. Skinner's utopian vision for enhancing social justice and human wellbeing in his 1948 novel, "Walden Two." In the first part, we situate the book in its historical, intellectual, and social context of the utopian genre, address critiques of the book's premises and practices, and discuss the fate of intentional…
Jordan Reforms Public Education to Compete in a Global Economy
ERIC Educational Resources Information Center
Erickson, Paul W.
2009-01-01
The King of Jordan's vision for education is resulting in innovative projects for the country. King Abdullah II wants Jordan to develop its human resources through public education to equip the workforce with skills for the future. From King Abdullah II's vision, the Education Reform for a Knowledge Economy (ERfKE) project implemented by the…
76 FR 55321 - CooperVision, Inc.; Filing of Color Additive Petitions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 73 [Docket Nos. FDA-2011-C-0344 and FDA-2011-C-0463] CooperVision, Inc.; Filing of Color Additive Petitions Correction In proposed rule document C1-2011-16089 appearing on page 49707 in the issue of Thursday, August 11...
A Shared Vision of Human Excellence: Confucian Spirituality and Arts Education
ERIC Educational Resources Information Center
Tan, Charlene; Tan, Leonard
2016-01-01
Spirituality encourages the individual to make sense of oneself within a wider framework of meaning and see oneself as part of some larger whole. This article discusses Confucian spirituality by focusing on the spiritual ideals of "dao" (Way) and "he" (harmony). It is explained that the Way represents a shared vision of human…
Study of the Peculiarities of Color Vision in the Course of "Biophysics" in a Pedagogical University
ERIC Educational Resources Information Center
Petrova, Elena Borisovna; Sabirova, Fairuza Musovna
2016-01-01
The article substantiates the necessity of studying the peculiarities of color vision of human in the course "Biophysics" that have been integrated into many types of higher education institutions. It describes the experience of teaching this discipline in a pedagogical higher education institution. The article presents a brief review of…
Krueger, Ronald R; Uy, Harvey; McDonald, Jared; Edwards, Keith
2012-12-01
To demonstrate that ultrashort-pulse laser treatment in the crystalline lens does not form a focal, progressive, or vision-threatening cataract. An Nd:vanadate picosecond laser (10 ps) with prototype delivery system was used. Primates: 11 rhesus monkey eyes were prospectively treated at the University of Wisconsin (energy 25-45 μJ/pulse and 2.0-11.3M pulses per lens). Analysis of lens clarity and fundus imaging was assessed postoperatively for up to 4½ years (5 eyes). Humans: 80 presbyopic patients were prospectively treated in one eye at the Asian Eye Institute in the Philippines (energy 10 μJ/pulse and 0.45-1.45M pulses per lens). Analysis of lens clarity, best-corrected visual acuity, and subjective symptoms was performed at 1 month, prior to elective lens extraction. Bubbles were immediately seen, with resolution within the first 24 to 48 hours. Afterwards, the laser pattern could be seen with faint, noncoalescing, pinpoint micro-opacities in both primate and human eyes. In primates, long-term follow-up at 4½ years showed no focal or progressive cataract, except in 2 eyes with preexisting cataract. In humans, <25% of patients with central sparing (0.75 and 1.0 mm radius) lost 2 or more lines of best spectacle-corrected visual acuity at 1 month, and >70% reported acceptable or better distance vision and no or mild symptoms. Meanwhile, >70% without sparing (0 and 0.5 mm radius) lost 2 or more lines, and most reported poor or severe vision and symptoms. Focal, progressive, and vision-threatening cataracts can be avoided by lowering the laser energy, avoiding prior cataract, and sparing the center of the lens.
Sharpening vision by adapting to flicker.
Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A
2016-11-01
Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously, spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision, allowing people to localize inputs with greater precision and to read finer-scaled text, and selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.
Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles
Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger
2016-01-01
Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption. PMID:26978365
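The steering controller above follows a line painted on the road. The paper does not specify the detector beyond that; the sketch below shows one minimal, hypothetical way to turn a thresholded image scanline into a lateral-offset signal a steering loop could consume.

```python
def line_offset(row_pixels, width):
    """Lateral offset of a detected line from the image centre, computed
    as the intensity-weighted centroid of one thresholded scanline.
    Positive means the line sits right of centre (steer right)."""
    total = sum(row_pixels)
    if total == 0:
        return 0.0   # no line pixels on this row: hold course
    centroid = sum(i * p for i, p in enumerate(row_pixels)) / total
    return centroid - (width - 1) / 2.0
```

Feeding this offset into, say, a proportional controller gives the simplest possible lane-keeping behaviour; a real system would smooth over many rows and frames.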
Ivanov, Iliya V; Leitritz, Martin A; Norrenberg, Lars A; Völker, Michael; Dynowski, Marek; Ueffing, Marius; Dietter, Johannes
2016-02-01
Abnormalities of blood vessel anatomy, morphology, and ratio can serve as important diagnostic markers for retinal diseases such as AMD or diabetic retinopathy. Large cohort studies demand automated and quantitative image analysis of vascular abnormalities. Therefore, we developed an analytical software tool to enable automated standardized classification of blood vessels supporting clinical reading. A dataset of 61 images was collected from a total of 33 women and 8 men with a median age of 38 years. The pupils were not dilated, and images were taken after dark adaption. In contrast to current methods in which classification is based on vessel profile intensity averages, and similar to human vision, local color contrast was chosen as a discriminator to allow artery-vein discrimination and arterial-venous ratio (AVR) calculation without vessel tracking. With 83% ± 1 standard error of the mean for our dataset, we achieved the best classification for weighted lightness information from a combination of the red, green, and blue channels. Tested on an independent dataset, our method reached 89% correct classification, which, when benchmarked against conventional ophthalmologic classification, shows significantly improved classification scores. Our study demonstrates that vessel classification based on local color contrast can cope with inter- or intraimage lightness variability and allows consistent AVR calculation. We offer an open-source implementation of this method upon request, which can be integrated into existing tool sets and applied to general diagnostic exams.
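A toy sketch of the two ideas in the abstract, local-color-contrast classification and AVR computation, is below. The channel choice, the zero contrast threshold, and the physiological rationale (arteries appearing redder against their local background) are illustrative assumptions, not the paper's calibrated weighting of R, G, and B.

```python
def classify_vessel(vessel_rgb, background_rgb):
    """Label a vessel segment 'artery' or 'vein' from local colour
    contrast: red-channel contrast against the local background minus
    green-channel contrast (hypothetical threshold at 0)."""
    contrast = ((vessel_rgb[0] - background_rgb[0])
                - (vessel_rgb[1] - background_rgb[1]))
    return "artery" if contrast > 0.0 else "vein"

def avr(artery_widths, vein_widths):
    """Arterial-venous ratio: mean artery calibre over mean vein calibre."""
    return (sum(artery_widths) / len(artery_widths)) / \
           (sum(vein_widths) / len(vein_widths))
```

Because the contrast is taken against the *local* background, a globally darker or brighter image shifts both terms together, which is the lightness-invariance property the abstract highlights.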
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
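The M-step described above relocates a tracked person at a weighted average of supporting evidence. The sketch below is the plain weighted mean-shift iteration in one dimension with a flat kernel, as a structural illustration only; the paper's ML formulation additionally folds in detection associations and appearance similarity.

```python
def mean_shift_1d(points, weights, x, bandwidth=1.0, iters=50):
    """Weighted mean-shift: repeatedly move x to the weighted average of
    the points within `bandwidth` (flat kernel) until convergence."""
    for _ in range(iters):
        near = [(p, w) for p, w in zip(points, weights)
                if abs(p - x) <= bandwidth]
        if not near:
            return x                       # no support: stay put
        new_x = sum(p * w for p, w in near) / sum(w for _, w in near)
        if abs(new_x - x) < 1e-9:
            return new_x                   # converged to a mode
        x = new_x
    return x
```

Each iteration is the "M-step" analogue: the E-step in the paper would reweight the points by their association probabilities before this average is taken.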
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds lights onto a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporation of pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments for the facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
Robust Indoor Human Activity Recognition Using Wireless Signals.
Wang, Yi; Jiang, Xinli; Cao, Rongyu; Wang, Xiyang
2015-07-15
Wireless signal-based activity detection and recognition technology may be complementary to existing vision-based methods, especially under occlusions, viewpoint changes, complex backgrounds, lighting condition changes, and so on. This paper explores the properties of the channel state information (CSI) of Wi-Fi signals and presents a robust indoor daily human activity recognition framework requiring only one pair of transmission points (TP) and access points (AP). First, some indoor human actions are selected as primitive actions forming a training set. Then, an online filtering method is designed to smooth the actions' CSI curves while preserving enough pattern information. Each primitive action pattern can be segmented from the outliers of its multi-input multi-output (MIMO) signals by a proposed segmentation method. Finally, in online activity recognition, proper feature selection and Support Vector Machine (SVM)-based multi-class classification allow activities composed of primitive actions to be recognized regardless of location, orientation, and speed.
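A minimal sketch of the pipeline (smoothing, feature extraction, classification) follows; to keep the sketch dependency-free, a nearest-centroid classifier stands in for the paper's SVM, and the synthetic "CSI amplitude" segments are invented for illustration:

```python
import numpy as np

def smooth_csi(curve, window=5):
    """Moving-average filter to smooth a noisy CSI amplitude curve
    while keeping its overall pattern information."""
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode="same")

def extract_features(curve):
    """Simple statistical features of one action segment."""
    return np.array([curve.mean(), curve.std(), curve.max() - curve.min()])

def nearest_centroid_train(segments, labels):
    """Store one feature centroid per action class (SVM stand-in)."""
    feats = np.array([extract_features(s) for s in segments])
    labels = np.asarray(labels)
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_centroid_predict(model, segment):
    """Assign the class whose centroid is nearest in feature space."""
    f = extract_features(segment)
    return min(model, key=lambda c: np.linalg.norm(model[c] - f))

# Demo on synthetic segments: a periodic 'walk' pattern vs. a flat 'sit'
rng = np.random.default_rng(0)
walk = [smooth_csi(2 * np.sin(np.linspace(0, 6, 100)) + rng.normal(0, 0.1, 100))
        for _ in range(5)]
sit = [smooth_csi(rng.normal(0, 0.1, 100)) for _ in range(5)]
model = nearest_centroid_train(walk + sit, ["walk"] * 5 + ["sit"] * 5)
```

The variance and range features separate large-amplitude periodic motion from near-static activity; the paper's SVM would replace the nearest-centroid step with a max-margin multi-class decision.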
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of the depth data. Since it is hard to discriminate the full 360° of orientation using static cues or motion cues alone, we propose a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. To verify the proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
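The fusion of complementary static and motion cues can be illustrated with one step of a discrete Bayes filter over orientation bins; the bin count, transition model, and cue likelihoods below are assumptions for the sketch, not the paper's DBNS:

```python
import numpy as np

def fuse_orientation(prior, static_lik, motion_lik, transition):
    """One step of a discrete Bayes filter over orientation bins:
    predict with the transition model, then update with the product of
    the (assumed conditionally independent) static and motion cues."""
    predicted = transition @ prior
    posterior = predicted * static_lik * motion_lik
    return posterior / posterior.sum()

# 8 orientation bins of 45 degrees; the body tends to keep its orientation
n = 8
transition = (0.8 * np.eye(n)
              + 0.1 * np.roll(np.eye(n), 1, axis=0)
              + 0.1 * np.roll(np.eye(n), -1, axis=0))
prior = np.full(n, 1 / n)
static_lik = np.full(n, 0.05); static_lik[[2, 6]] = 0.4  # front/back ambiguous
motion_lik = np.full(n, 0.1); motion_lik[2] = 0.5        # motion disambiguates
posterior = fuse_orientation(prior, static_lik, motion_lik, transition)
```

The static cue alone leaves a front/back ambiguity (two equal peaks); multiplying in the motion likelihood collapses the posterior onto the correct bin, which is the complementary behavior the abstract describes.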
NASA Astrophysics Data System (ADS)
Kleinmann, Johanna; Wueller, Dietmar
2007-01-01
Since the signal-to-noise measurement method standardized in the normative part of ISO 15739:2002(E)1 does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated which may quantify noise perception in a physiologically meaningful manner: - the visual noise measurement model proposed by Hung et al.2 (as described in the informative annex of ISO 15739:20021), which simulates the process of human vision using the opponent space and contrast sensitivity functions and uses the CIE L*u*v* 1976 colour space to determine a so-called visual noise value; - the S-CIELab model with the CIEDE2000 colour difference proposed by Fairchild et al.,3 which simulates human vision in approximately the same way as Hung et al.2 but then performs an image comparison based on CIEDE2000. With a psychophysical experiment based on the just noticeable difference (JND), threshold images were defined with which the two approaches mentioned above were tested. The assumption is that if a method is valid, the different threshold images should receive the same 'noise value'. The visual noise measurement model yields similar visual noise values for all threshold images; the method is reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can also be used on images with spatial content. The S-CIELab model likewise yields similar colour difference values for the set of threshold images, but with a limitation: for images which contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena
2013-01-01
In this paper we present the potential use of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method proved accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.
A note on image degradation, disability glare, and binocular vision
NASA Astrophysics Data System (ADS)
Rajaram, Vandana; Lakshminarayanan, Vasudevan
2013-08-01
Disability glare due to scattering of light causes a reduction in visual performance due to a luminous veil over the scene. This causes problems with tasks such as contrast detection. In this note, we report a study of the effect of this veiling luminance on human stereoscopic vision. We measured the effect of glare on the horopter using the apparent fronto-parallel plane (AFPP) criterion. The empirical longitudinal horopter measured with the AFPP criterion was analyzed using the so-called analytic plot, whose parameters provide a quantitative measure of binocular vision. Image degradation has a major effect on binocular vision as measured by the horopter. Under the conditions tested, it appears that if vision is sufficiently degraded, the addition of disability glare does not further compromise depth perception as measured by the horopter.
Oestrogen, ocular function and low-level vision: a review.
Hutchinson, Claire V; Walker, James A; Davidson, Colin
2014-11-01
Over the past 10 years, a literature has emerged concerning the sex steroid hormone oestrogen and its role in human vision. Herein, we review evidence that oestrogen (oestradiol) levels may significantly affect ocular function and low-level vision, particularly in older females. In doing so, we have examined a number of vision-related disorders including dry eye, cataract, increased intraocular pressure, glaucoma, age-related macular degeneration and Leber's hereditary optic neuropathy. In each case, we have found oestrogen, or lack thereof, to have a role. We have also included discussion of how oestrogen-related pharmacological treatments for menopause and breast cancer can impact the pathology of the eye and a number of psychophysical aspects of vision. Finally, we have reviewed oestrogen's pharmacology and suggest potential mechanisms underlying its beneficial effects, with particular emphasis on anti-apoptotic and vascular effects. © 2014 Society for Endocrinology.
Zhai, Yi; Wang, Yan; Wang, Zhaoqi; Liu, Yongji; Zhang, Lin; He, Yuanqing; Chang, Shengjiang
2014-01-01
An achromatic element eliminating only longitudinal chromatic aberration (LCA) while maintaining transverse chromatic aberration (TCA) is established for the eye model, which incorporates the angle formed by the visual and optical axes. To investigate the impact of higher-order aberrations on vision, measured higher-order aberration data from human eyes at three typical levels are introduced into the eye model along the visual axis. Moreover, three kinds of individual eye models are established to investigate the impacts of higher-order aberrations, chromatic aberration (LCA+TCA), LCA, and TCA on vision under photopic conditions. Results show that for most human eyes, the impact of chromatic aberration on vision is much stronger than that of higher-order aberrations, and within chromatic aberration the impact of LCA dominates. The impact of TCA is approximately equal to that of normal-level higher-order aberrations, and it can be ignored when LCA is present.
Combining Path Integration and Remembered Landmarks When Navigating without Vision
Kalia, Amy A.; Schrater, Paul R.; Legge, Gordon E.
2013-01-01
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. PMID:24039742
Computer vision syndrome (CVS) - Thermographic Analysis
NASA Astrophysics Data System (ADS)
Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.
2017-01-01
The use of computers has grown exponentially in recent decades; the possibility of carrying out many tasks for both professional and leisure purposes has contributed to their broad acceptance by users. The consequences and impact of uninterrupted work in front of computer screens or displays on visual health have attracted researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to considerable strain, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of these are blurred vision, visual fatigue, and Dry Eye Syndrome (DES) due to inadequate lubrication of the ocular surface as blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies of healthy human eyes during exposure to computer displays, with the main purpose of comparing the differences in temperature variation across healthy ocular surfaces.
Translating Vision into Design: A Method for Conceptual Design Development
NASA Technical Reports Server (NTRS)
Carpenter, Joyce E.
2003-01-01
One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.
Ventegodt, Søren; Hermansen, Tyge Dahl; Flensborg-Madsen, Trine; Rald, Erik; Nielsen, Maj Lyck; Merrick, Joav
2006-01-01
In this paper we look at the rational and the emotional interpretation of reality in the human brain and being, and discuss the representation of the brain-mind (ego), the body-mind (Id), and the outer world in the human wholeness (the I or “soul”). Based on this we discuss a number of factors, including the coherence between perception, attention and consciousness, and the relation between thought, fantasies, visions and dreams. We discuss and explain concepts such as intent, will, morals and ethics. The Jungian concept of the human collective conscious and unconscious is also analyzed. We also hypothesize about the nature of intuition and consider the source of the religious experience of man. These phenomena are explained based on the concept of deep quantum chemistry and infinite dancing fractal spirals making up the energetic backbone of the world. In this paper we consider man as a real wholeness and debate the concepts of subjectivity, consciousness and intent that can be deduced from such a perspective. PMID:17115085
Overload and Boredom: When the Humanities Turn to Noise.
ERIC Educational Resources Information Center
Walter, James A.
This paper argues that the current debate over humanities curricula has failed to articulate a vision for humanities education because it has turned on a liberal/conservative axis. It also contends that educators must stop debating elite versus democratic values or traditional versus contemporary problems in humanities education, and start…
ERIC Educational Resources Information Center
Rees, G.; Saw, C.; Larizza, M.; Lamoureux, E.; Keeffe, J.
2007-01-01
This qualitative study investigates the views of clients with low vision and vision rehabilitation professionals on the involvement of family and friends in group-based rehabilitation programs. Both groups outlined advantages and disadvantages to involving significant others, and it is essential that clients are given the choice. Future work is…
Vision-based coaching: optimizing resources for leader development
Passarelli, Angela M.
2015-01-01
Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the benefits of the individual’s personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803
The Potential of Human Stem Cells for the Study and Treatment of Glaucoma
Chamling, Xitiz; Sluch, Valentin M.; Zack, Donald J.
2016-01-01
Purpose Currently, the only available and approved treatments for glaucoma are various pharmacologic, laser-based, and surgical procedures that lower IOP. Although these treatments can be effective, they are not always sufficient, and they cannot restore vision that has already been lost. The goal of this review is to briefly assess current developments in the application of stem cell biology to the study and treatment of glaucoma and other forms of optic neuropathy. Methods A combined literature review and summary of the glaucoma-related discussion at the 2015 “Sight Restoration Through Stem Cell Therapy” meeting that was sponsored by the Ocular Research Symposia Foundation (ORSF). Results Ongoing advancements in basic and eye-related developmental biology have enabled researchers to direct murine and human stem cells along specific developmental paths and to differentiate them into a variety of ocular cell types of interest. The most advanced of these efforts involve the differentiation of stem cells into retinal pigment epithelial cells, work that has led to the initiation of several human trials. More related to the glaucoma field, there have been recent advances in developing protocols for differentiation of stem cells into trabecular meshwork and retinal ganglion cells. Additionally, efforts are being made to generate stem cell–derived cells that can be used to secrete neuroprotective factors. Conclusions Advancing stem cell technology provides opportunities to improve our understanding of glaucoma-related biology and develop models for drug development, and offers the possibility of cell-based therapies to restore sight to patients who have already lost vision. PMID:27116666
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003
Stanford/NASA-Ames Center of Excellence in model-based human performance
NASA Technical Reports Server (NTRS)
Wandell, Brian A.
1990-01-01
The human operator plays a critical role in many aeronautic and astronautic missions. The Stanford/NASA-Ames Center of Excellence in Model-Based Human Performance (COE) was initiated in 1985 to further our understanding of the performance capabilities and performance limits of the human component of aeronautic and astronautic projects. Support from the COE is devoted to those areas of experimental and theoretical work designed to summarize and explain human performance by developing computable performance models. The ultimate goal is to make these computable models available to other scientists for use in design and evaluation of aeronautic and astronautic instrumentation. Within vision science, two topics have received particular attention. First, researchers did extensive work analyzing the human ability to recognize object color relatively independent of the spectral power distribution of the ambient lighting (color constancy). The COE has supported a number of research papers in this area, as well as the development of a substantial data base of surface reflectance functions, ambient illumination functions, and an associated software package for rendering and analyzing image data with respect to these spectral functions. Second, the COE supported new empirical studies on the problem of selecting colors for visual display equipment to enhance human performance in discrimination and recognition tasks.
DOT National Transportation Integrated Search
2010-01-01
What is Vision Hampton Roads? : Vision Hampton Roads is... : A regionwide economic development strategy based on the collective strengths of all : localities of Hampton Roads, created with the input of business, academia, nonprofits, : government,...
Thresholds and noise limitations of colour vision in dim light.
Kelber, Almut; Yovanovich, Carola; Olsson, Peter
2017-04-05
Colour discrimination is based on opponent photoreceptor interactions, and limited by receptor noise. In dim light, photon shot noise impairs colour vision, and in vertebrates, the absolute threshold of colour vision is set by dark noise in cones. Nocturnal insects (e.g. moths and nocturnal bees) and vertebrates lacking rods (geckos) have adaptations to reduce receptor noise and use chromatic vision even in very dim light. In contrast, vertebrates with duplex retinae use colour-blind rod vision when noisy cone signals become unreliable, and their transition from cone- to rod-based vision is marked by the Purkinje shift. Rod-cone interactions have not been shown to improve colour vision in dim light, but may contribute to colour vision in mesopic light intensities. Frogs and toads that have two types of rods use opponent signals from these rods to control phototaxis even at their visual threshold. However, for tasks such as prey or mate choice, their colour discrimination abilities fail at brighter light intensities, similar to other vertebrates, probably limited by the dark noise in cones. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).
The adaptive value of primate color vision for predator detection.
Pessoa, Daniel Marques Almeida; Maia, Rafael; de Albuquerque Ajuz, Rafael Cavalcanti; De Moraes, Pedro Zurvaino Palmeira Melo Rosa; Spyrides, Maria Helena Constantino; Pessoa, Valdir Filgueiras
2014-08-01
The complex evolution of primate color vision has puzzled biologists for decades. Primates are the only eutherian mammals that evolved an enhanced capacity for discriminating colors in the green-red part of the spectrum (trichromatism). However, while Old World primates present three types of cone pigments and are routinely trichromatic, most New World primates exhibit a color vision polymorphism, characterized by the occurrence of trichromatic and dichromatic females and obligatorily dichromatic males. Even though this has stimulated a prolific line of inquiry, the selective forces and relative benefits influencing color vision evolution in primates are still under debate, with current explanations focusing almost exclusively on the advantages in finding food and detecting socio-sexual signals. Here, we evaluate a previously untested possibility, the adaptive value of primate color vision for predator detection. By combining color vision modeling data on New World and Old World primates, as well as behavioral information from human subjects, we demonstrate that primates exhibiting better color discrimination (trichromats) outperform those with poorer color vision (dichromats) at detecting carnivoran predators against the green foliage background. The distribution of color vision found in extant anthropoid primates agrees with our results, and may be explained by the advantages of trichromats and dichromats in detecting predators and insects, respectively. © 2014 Wiley Periodicals, Inc.
Temporal dynamics of figure-ground segregation in human vision.
Neri, Peter; Levi, Dennis M
2007-01-01
The segregation of figure from ground is arguably one of the most fundamental operations in human vision. Neural signals reflecting this operation appear in cortex as early as 50 ms and as late as 300 ms after presentation of a visual stimulus, but it is not known when these signals are used by the brain to construct the percepts of figure and ground. We used psychophysical reverse correlation to identify the temporal window for figure-ground signals in human perception and found it to lie within the range of 100-160 ms. Figure enhancement within this narrow temporal window was transient rather than sustained as may be expected from measurements in single neurons. These psychophysical results prompt and guide further electrophysiological studies.
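The reverse-correlation technique mentioned here can be sketched with a simulated template-matching observer (all data synthetic): the classification image is the mean noise field on "yes" trials minus the mean on "no" trials, revealing which stimulus regions drive the decision.

```python
import numpy as np

def classification_image(noises, responses):
    """Psychophysical reverse correlation: subtract the mean noise
    field on 'no' trials from the mean on 'yes' trials to recover the
    observer's internal decision template."""
    noises = np.asarray(noises, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    return noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)

# Simulated observer whose decision uses a known spatial template
rng = np.random.default_rng(1)
template = np.zeros(16)
template[6:10] = 1.0                    # the 'figure' region of the stimulus
noises = rng.normal(size=(5000, 16))    # per-trial noise fields
responses = noises @ template > 0       # template-matching 'yes'/'no' decision
ci = classification_image(noises, responses)
```

With enough trials `ci` is proportional to the hidden template; in the actual experiments the same subtraction is performed within narrow time bins, which is what localizes the figure-ground signal to the 100-160 ms window.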
A nationwide population-based study of low vision and blindness in South Korea.
Park, Shin Hae; Lee, Ji Sung; Heo, Hwan; Suh, Young-Woo; Kim, Seung-Hyun; Lim, Key Hwan; Moon, Nam Ju; Lee, Sung Jin; Park, Song Hee; Baek, Seung-Hee
2014-12-18
To investigate the prevalence and associated risk factors of low vision and blindness in the Korean population. This cross-sectional, population-based study examined the ophthalmologic data of 22,135 Koreans aged ≥5 years from the fifth Korea National Health and Nutrition Examination Survey (KNHANES V, 2010-2012). According to the World Health Organization criteria, blindness was defined as visual acuity (VA) less than 20/400 in the better-seeing eye, and low vision as VA of 20/60 or worse but 20/400 or better in the better-seeing eye. The prevalence rates were calculated from either presenting VA (PVA) or best-corrected VA (BCVA). Multivariate regression analysis was conducted for adults aged ≥20 years. The overall prevalence rates of PVA-defined low vision and blindness were 4.98% and 0.26%, respectively, and those of BCVA-defined low vision and blindness were 0.46% and 0.05%, respectively. Prevalence increased rapidly above the age of 70 years. For subjects aged ≥70 years, the population-weighted prevalence rates of low vision, based on PVA and BCVA, were 12.85% and 3.87%, respectively, and the corresponding rates of blindness were 0.49% and 0.42%, respectively. The presenting vision problems were significantly associated with age (younger adults or elderly subjects), female sex, low educational level, and lowest household income, whereas the best-corrected vision problems were associated with age ≥ 70 years, a low educational level, and rural residence. This population-based study provides useful information for planning optimal public eye health care services in South Korea. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
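The WHO acuity criteria quoted in the abstract can be written as a small helper over decimal acuity values (the category labels are shorthand for this sketch, not the study's exact terminology):

```python
def who_vision_category(va):
    """Classify better-seeing-eye visual acuity (decimal Snellen value)
    using the WHO criteria stated in the study: blindness is VA below
    20/400; low vision is VA of 20/60 or worse but 20/400 or better."""
    if va < 20 / 400:
        return "blindness"
    if va <= 20 / 60:
        return "low vision"
    return "normal/near-normal"
```

Applying this to presenting VA versus best-corrected VA yields the two prevalence figures the study contrasts (e.g. 4.98% vs. 0.46% for low vision), since refractive correction moves many eyes out of the low-vision band.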
[Geertz' Interpretive Theory and care management: visualizing nurses' social practice].
Prochnow, Adelina Giacomelli; Leite, Joséte Luzia; Erdmann, Alacoque Lorenzini
2005-01-01
This paper presents a theoretical reflection on hospital nursing care management and Geertz' Interpretive Theory of Culture. We discuss some significant elements of culture in management, based on the theoretical reference frameworks of nursing, administration and anthropology. In these, the importance of cultural diversity is highlighted as an innovative resource to expand the vision of human integrity, valuing divergences, respect and sharing, which are important for nurses in the construction of their social practice.
Atmosphere Revitalization Technology Development for Crewed Space Exploration
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Carrasquillo, Robyn L.; Harris, Danny W.
2006-01-01
As space exploration objectives extend human presence beyond low Earth orbit, the solutions to technological challenges presented by supporting human life in the hostile space environment must build upon experience gained during past and present crewed space exploration programs. These programs and the cabin atmosphere revitalization process technologies and systems developed for them represent the National Aeronautics and Space Administration's (NASA) past and present operational knowledge base for maintaining a safe, comfortable environment for the crew. The contributions of these programs to NASA's technological and operational working knowledge base, as well as key strengths and weaknesses to be overcome, are discussed. Areas for technological development to address challenges inherent in the Vision for Space Exploration (VSE) are presented, and a plan for their development employing unit operations principles is summarized.
Vision and Leadership: Problem-Based Learning as a Teaching Tool
ERIC Educational Resources Information Center
Archbald, Douglas
2013-01-01
We read and hear frequently about the role of vision in leadership. Standards for leadership education programs typically emphasize vision as a core component of leadership education and published accounts of successful leadership usually extol the leader's vision. Given the prevalence of this term in discourse on leadership, it is surprising how…
The Vision Thing in Higher Education.
ERIC Educational Resources Information Center
Keller, George
1995-01-01
It is argued that while the concept of "vision" in higher education has been met with disdain, criticism is based on misconceptions of vision's nature and role--that vision requires a charismatic administrator and that visionaries are dreamers. Educators and planners are urged to use imaginative thinking to connect the institution's and staff's…
Weberized Mumford-Shah Model with Bose-Einstein Photon Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Jianhong, E-mail: jhshen@math.umn.edu; Jung, Yoon-Mo
Human vision works equally well in a large dynamic range of light intensities, from only a few photons to typical midday sunlight. Contributing to such remarkable flexibility is a famous law in perceptual (both visual and aural) psychology and psychophysics known as Weber's Law. The current paper develops a new segmentation model based on the integration of Weber's Law and the celebrated Mumford-Shah segmentation model (Comm. Pure Appl. Math., vol. 42, pp. 577-685, 1989). Explained in detail are issues concerning why the classical Mumford-Shah model lacks light adaptivity, and why its 'weberized' version can more faithfully reflect human vision's superior segmentation capability in a variety of illuminance conditions from dawn to dusk. It is also argued that the popular Gaussian noise model is physically inappropriate for the weberization procedure. As a result, the intrinsic thermal noise of photon ensembles is introduced based on Bose and Einstein's distributions in quantum statistics, which turns out to be compatible with weberization both analytically and computationally. The current paper focuses on both the theory and computation of the weberized Mumford-Shah model with Bose-Einstein noise. In particular, Ambrosio-Tortorelli's γ-convergence approximation theory is adapted (Boll. Un. Mat. Ital. B, vol. 6, pp. 105-123, 1992), and stable numerical algorithms are developed for the associated pair of nonlinear Euler-Lagrange PDEs.
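For reference, the classical Mumford-Shah functional on an image domain $\Omega$ with edge set $\Gamma$, together with Weber's law that motivates its modification, can be written as follows (the precise weberized fidelity and regularization terms are as derived in the paper itself):

```latex
E_{\mathrm{MS}}(u,\Gamma)
  = \lambda \int_{\Omega} (u-f)^2 \, dx
  + \mu \int_{\Omega \setminus \Gamma} |\nabla u|^2 \, dx
  + \nu \, \mathrm{length}(\Gamma),
\qquad
\text{Weber's law:}\quad \frac{\Delta I}{I} \approx \text{const.}
```

Here $f$ is the observed image and $u$ its piecewise-smooth approximation; weberization replaces the absolute intensity differences above with differences measured relative to the local intensity, so that the same functional penalizes contrast rather than raw luminance change.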
Hypothesis on human eye perceiving optical spectrum rather than an image
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Szu, Harold
2015-05-01
It is common knowledge that we see the world because our eyes perceive an optical image. A digital camera seems a good example of simulating the eye's imaging system. However, signal sensing and imaging on the human retina is very complicated. There are at least five layers of neurons along the signal pathway: photoreceptors (cones and rods), bipolar, horizontal, amacrine and ganglion cells. To sense an optical image, photoreceptors (as sensors) plus ganglion cells (converting to electrical signals for transmission) would seem to suffice, and image sensing does not require a nonuniform distribution of photoreceptors such as the fovea. There are some challenging questions; for example, why don't we perceive the "blind spots" (where nerve fibers exit the eyes)? A similar situation occurs in glaucoma patients, who do not notice their vision loss until 50% or more of the nerves have died. Our hypothesis is that the human retina initially senses the optical (i.e., Fourier) spectrum rather than an optical image. Owing to the symmetry of the Fourier spectrum, the signal lost at a blind spot or from dead nerves (in glaucoma patients) can be recovered. The eye's logarithmic response to input light intensity is much like displaying Fourier magnitude. The optics and structures of human eyes satisfy the needs of optical Fourier spectrum sampling. It remains unclear where and how an inverse Fourier transform is performed in the human visual system to obtain an optical image; phase retrieval techniques in the compressive sensing domain enable image reconstruction even without phase inputs. A spectrum-based imaging system can potentially tolerate up to 50% bad sensors (pixels), adapt to a large dynamic range (with logarithmic response), etc.
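The recovery argument from symmetry can be made concrete: the Fourier spectrum of any real-valued image is conjugate-symmetric, so spectral samples lost on one side can be restored from the other. A minimal numpy sketch (a hypothetical illustration, not the authors' model):

```python
import numpy as np

# The spectrum of a real-valued image satisfies F(-u, -v) = conj(F(u, v)),
# so samples lost in a "blind spot" of the sampling mosaic can be restored
# from their conjugate-symmetric partners.
rng = np.random.default_rng(0)
img = rng.random((8, 8))            # real-valued "retinal" image
F = np.fft.fft2(img)

# Simulate a dead patch of spectral samples (avoiding self-conjugate bins).
damaged = F.copy()
damaged[1:4, 1:4] = 0

# Restore each lost bin from its surviving partner at (-u, -v) mod N.
N = img.shape[0]
recovered = damaged.copy()
for u in range(1, 4):
    for v in range(1, 4):
        recovered[u, v] = np.conj(damaged[(-u) % N, (-v) % N])

restored = np.fft.ifft2(recovered).real   # matches img up to roundoff
```

The restoration is exact here because the partner bins survived; the abstract's claim of tolerating up to 50% loss corresponds to losing at most one member of each conjugate pair.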
Assessment of OLED displays for vision research
Cooper, Emily A.; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E.; Norcia, Anthony M.
2013-01-01
Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function (“gamma correction”). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications. PMID:24155345
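The "gamma correction" the measurements were fit with has a standard power-function form. A small sketch with made-up parameter values (the paper's fitted maximum luminance and exponent are not reproduced here):

```python
# Power-function ("gamma") display model: luminance rises as a power of the
# normalized drive level. L_max and gamma below are illustrative values only.
def luminance(v, v_max=255, L_max=100.0, gamma=2.2):
    """Predicted luminance (cd/m^2) at digital drive level v."""
    return L_max * (v / v_max) ** gamma

def drive_level(L, v_max=255, L_max=100.0, gamma=2.2):
    """Inverse model: drive level that produces target luminance L."""
    return v_max * (L / L_max) ** (1.0 / gamma)
```

Round-tripping a level through both functions returns it unchanged, which is the property a display calibration relies on.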
Switchable electro-optic diffractive lens with high efficiency for ophthalmic applications
Li, Guoqiang; Mathine, David L.; Valley, Pouria; Äyräs, Pekka; Haddock, Joshua N.; Giridhar, M. S.; Williby, Gregory; Schwiegerling, Jim; Meredith, Gerald R.; Kippelen, Bernard; Honkanen, Seppo; Peyghambarian, Nasser
2006-01-01
Presbyopia is an age-related loss of accommodation of the human eye that manifests itself as inability to shift focus from distant to near objects. Assuming no refractive error, presbyopes have clear vision of distant objects; they require reading glasses for viewing near objects. Area-divided bifocal lenses are one example of a treatment for this problem. However, the field of view is limited in such eyeglasses, requiring the user to gaze down to accomplish near-vision tasks and in some cases causing dizziness and discomfort. Here, we report on previously undescribed switchable, flat, liquid-crystal diffractive lenses that can adaptively change their focusing power. The operation of these spectacle lenses is based on electrical control of the refractive index of a 5-μm-thick layer of nematic liquid crystal using a circular array of photolithographically defined transparent electrodes. It operates with high transmission, low voltage (<2 Vrms), fast response (<1 sec), diffraction efficiency > 90%, small aberrations, and a power-failure-safe configuration. These results represent a significant advance in the state of the art of liquid-crystal diffractive lenses for vision care and other applications. They have the potential to revolutionize the field of presbyopia correction when combined with automatic adjustable focusing power. PMID:16597675
The evolution of the complex sensory and motor systems of the human brain.
Kaas, Jon H
2008-03-18
Inferences about how the complex sensory and motor systems of the human brain evolved are based on the results of comparative studies of brain organization across a range of mammalian species, and evidence from the endocasts of fossil skulls of key extinct species. The endocasts of the skulls of early mammals indicate that they had small brains with little neocortex. Evidence from comparative studies of cortical organization from small-brained mammals of the six major branches of mammalian evolution supports the conclusion that the small neocortex of early mammals was divided into roughly 20-25 cortical areas, including primary and secondary sensory fields. In early primates, vision was the dominant sense, and cortical areas associated with vision in temporal and occipital cortex underwent a significant expansion. Comparative studies indicate that early primates had 10 or more visual areas, and somatosensory areas with expanded representations of the forepaw. Posterior parietal cortex was also expanded, with a caudal half dominated by visual inputs, and a rostral half dominated by somatosensory inputs with outputs to an array of seven or more motor and visuomotor areas of the frontal lobe. Somatosensory areas and posterior parietal cortex became further differentiated in early anthropoid primates. As larger brains evolved in early apes and in our hominin ancestors, the number of cortical areas increased to reach an estimated 200 or so in present day humans, and hemispheric specializations emerged. The large human brain grew primarily by increasing neuron number rather than increasing average neuron size.
A Simplified Method of Identifying the Trained Retinal Locus for Training in Eccentric Viewing
ERIC Educational Resources Information Center
Vukicevic, Meri; Le, Anh; Baglin, James
2012-01-01
In the typical human visual system, the macula allows for high visual resolution. Damage to this area from diseases, such as age-related macular degeneration (AMD), causes the loss of central vision in the form of a central scotoma. Since no treatment is available to reverse AMD, providing low vision rehabilitation to compensate for the loss of…
Managing the human resource: The challenge of change (notes from presentation)
Cary Latham
1999-01-01
How do we, as leaders, get people to embrace change rather than resist it? Getting people to embrace change is what psychologists call having a "superordinate goal." In plain English, a superordinate goal is an attainable vision. When psychologists talk about vision, they mean something that is no more than one to three sentences. It is not a silly mission...
Enhancing Perceptibility of Barely Perceptible Targets
1982-12-28
faster to low spatial frequencies (Breitmeyer, 1975; Tolhurst, 1975; Vassilev & Mitov, 1976; Breitmeyer & Ganz, 1975; Watson & Nachmias, 1977). If ground... response than those tuned to high spatial frequencies: they appear to have a shorter latency (Breitmeyer, 1975; Vassilev & Mitov, 1975; Lupp, Bauske, & Wolf... 1958. Tolhurst, D. J. Sustained and transient channels in human vision. Vision Research, 1975, 15, 1151-1155. Vassilev, A., & Mitov, D. Perception
Computing Visible-Surface Representations,
1985-03-01
Terzopoulos. N00014-75-C-0643. Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Projects... dynamically maintaining visible surface representations. Whether the intention is to model human vision or to design competent artificial vision systems
"As if Reviewing His Life": Bull Lodge's Narrative and the Mediation of Self-Representation
ERIC Educational Resources Information Center
Gone, Joseph P.
2006-01-01
In 1980, on behalf of the Gros Ventre people, George P. Horse Capture published "The Seven Visions of Bull Lodge, as Told by His Daughter, Garter Snake." "The Seven Visions" describes a lifetime of personal encounters with Powerful other-than-human Persons by the noted Gros Ventre warrior and ritual leader, Bull Lodge (ca.…
ERIC Educational Resources Information Center
Palit, Sukanchan
2016-01-01
Scientific vision and scientific understanding in today's world are in the path of new glory. Chemical Engineering science is witnessing drastic and rapid changes. The metamorphosis of human civilization in this century is faced with vicious challenges. Progress of Chemical Engineering science, the vision of technology and the broad chemical…
Liu, Yu-Ting; Pal, Nikhil R.; Marathe, Amar R.; Wang, Yu-Kai; Lin, Chin-Teng
2017-01-01
A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. 
This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems. PMID:28676734
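The flavor of uncertainty-weighted fusion can be sketched in a few lines. This is a deliberately simplified stand-in for the paper's fuzzy decision-making fuser (FDMF), which is considerably more elaborate; the inverse-uncertainty weighting here is an assumption for illustration only:

```python
# Each agent reports a target probability plus an uncertainty in (0, 1];
# less certain agents receive proportionally less weight in the fused decision.
def fuse(p_human, u_human, p_machine, u_machine):
    w_h, w_m = 1.0 / u_human, 1.0 / u_machine
    return (w_h * p_human + w_m * p_machine) / (w_h + w_m)
```

When one agent is much more certain, the fused estimate moves toward its answer, which is the qualitative behavior an HMA system needs under fluctuating human state.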
Whited, John D; Datta, Santanu K; Aiello, Lloyd M; Aiello, Lloyd P; Cavallerano, Jerry D; Conlin, Paul R; Horton, Mark B; Vigersky, Robert A; Poropatich, Ronald K; Challa, Pratap; Darkins, Adam W; Bursell, Sven-Erik
2005-12-01
The objective of this study was to compare, using a 12-month time frame, the cost-effectiveness of a non-mydriatic digital tele-ophthalmology system (Joslin Vision Network) versus traditional clinic-based ophthalmoscopy examinations with pupil dilation to detect proliferative diabetic retinopathy and its consequences. Decision analysis techniques, including Monte Carlo simulation, were used to model the use of the Joslin Vision Network versus conventional clinic-based ophthalmoscopy among the entire diabetic populations served by the Indian Health Service, the Department of Veterans Affairs, and the active duty Department of Defense. The economic perspective analyzed was that of each federal agency. Data sources for costs and outcomes included the published literature, epidemiologic data, administrative data, market prices, and expert opinion. Outcome measures included the number of true positive cases of proliferative diabetic retinopathy detected, the number of patients treated with panretinal laser photocoagulation, and the number of cases of severe vision loss averted. In the base-case analyses, the Joslin Vision Network was the dominant strategy in all but two of the nine modeled scenarios, meaning that it was both less costly and more effective. In the active duty Department of Defense population, the Joslin Vision Network would be more effective but cost an extra 1,618 dollars per additional patient treated with panretinal laser photocoagulation and an additional 13,748 dollars per severe vision loss event averted. Based on our economic model, the Joslin Vision Network has the potential to be more effective than clinic-based ophthalmoscopy for detecting proliferative diabetic retinopathy and averting cases of severe vision loss, and may do so at lower cost.
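The core arithmetic of such analyses is the incremental cost-effectiveness ratio (ICER). A toy sketch with invented numbers (none of the study's actual costs or detection rates):

```python
# ICER: extra cost per extra unit of effect when switching strategies.
# A strategy that costs less and detects more ("dominant") yields a
# negative ICER, as in most of the base-case scenarios above.
def icer(cost_new, cost_old, effect_new, effect_old):
    return (cost_new - cost_old) / (effect_new - effect_old)

# Invented example: the new strategy costs $100 more per patient and
# raises the detection rate by 5 percentage points.
extra_cost_per_detection = icer(1100.0, 1000.0, 0.95, 0.90)  # dollars
```

Monte Carlo simulation, as used in the study, simply recomputes this ratio over many draws of the uncertain inputs to characterize the spread of plausible ICERs.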
Objective evaluation of the visual acuity in human eyes
NASA Astrophysics Data System (ADS)
Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.
2009-08-01
Traditionally, the quality of human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes, located at a certain distance from the patient. Typically, the test ensures a subtended visual angle of 5 minutes of arc, which implies a character 8.8 mm high located at 6 meters (normal, or 20/20, visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eye. The identification of characters is carried out by means of the eye-brain system, giving an evaluation of subjective visual performance. In this work we consider the eye as an isolated image-forming system, and show that it is possible to isolate the function of the eye from that of the brain in this process. Knowing the impulse response of the eye's optical system, we can obtain, in advance, the image of the Snellen chart. From this information, we obtain the objective performance of the eye as the optical system under test. This type of result might help to detect anomalous conditions of human vision, such as the so-called "cerebral myopia".
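The 5-arcminute criterion quoted above is easy to verify from the geometry of a subtended angle; a quick sketch:

```python
import math

# Visual angle of an object of given height at a given distance.
# A 20/20 Snellen optotype subtends 5 minutes of arc at the test distance.
def subtended_arcmin(height_m, distance_m):
    return math.degrees(2 * math.atan(height_m / (2 * distance_m))) * 60

angle = subtended_arcmin(0.0088, 6.0)   # 8.8 mm character at 6 m
```

The result is close to 5 arcminutes, consistent with the figures in the abstract.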
Superior underwater vision in a human population of sea gypsies.
Gislén, Anna; Dacke, Marie; Kröger, Ronald H H; Abrahamsson, Maths; Nilsson, Dan-Eric; Warrant, Eric J
2003-05-13
Humans are poorly adapted for underwater vision. In air, the curved corneal surface accounts for two-thirds of the eye's refractive power, and this is lost when air is replaced by water. Despite this, some tribes of sea gypsies in Southeast Asia live off the sea, and the children collect food from the sea floor without the use of visual aids. This is a remarkable feat when one considers that the human eye is not focused underwater and small objects should remain unresolved. We have measured the visual acuity of children in a sea gypsy population, the Moken, and found that the children see much better underwater than one might expect. Their underwater acuity (6.06 cycles/degree) is more than twice as good as that of European children (2.95 cycles/degree). Our investigations show that the Moken children achieve their superior underwater vision by maximally constricting the pupil (1.96 mm compared to 2.50 mm in European children) and by accommodating to the known limit of human performance (15-16 D). This extreme reaction-which is routine in Moken children-is completely absent in European children. Because they are completely dependent on the sea, the Moken are very likely to derive great benefit from this strategy.
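The "two-thirds of refractive power lost" claim follows from single-surface refraction; a back-of-envelope check using textbook-typical corneal values (not measurements from this study):

```python
# Power of a single spherical refracting surface: P = (n_out - n_in) / R.
def surface_power(n_in, n_out, radius_m):
    """Refractive power in diopters."""
    return (n_out - n_in) / radius_m

n_air, n_water, n_cornea = 1.000, 1.333, 1.376
R = 0.0078                                    # anterior corneal radius, ~7.8 mm

P_air = surface_power(n_air, n_cornea, R)     # roughly 48 D in air
P_water = surface_power(n_water, n_cornea, R) # collapses to a few diopters
```

Because the indices of water and the cornea are so close, almost all the corneal surface power vanishes underwater, which is the deficit the Moken children's extreme pupil constriction and accommodation partially offsets.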
Optimizing the temporal dynamics of light to human perception.
Rieiro, Hector; Martinez-Conde, Susana; Danielson, Andrew P; Pardo-Vazquez, Jose L; Srivastava, Nishit; Macknik, Stephen L
2012-11-27
No previous research has tuned the temporal characteristics of light-emitting devices to enhance brightness perception in human vision, despite the potential for significant power savings. The role of stimulus duration on perceived contrast is unclear, due to contradiction between the models proposed by Bloch and by Broca and Sulzer over 100 years ago. We propose that the discrepancy is accounted for by the observer's "inherent expertise bias," a type of experimental bias in which the observer's life-long experience with interpreting the sensory world overcomes perceptual ambiguities and biases experimental outcomes. By controlling for this and all other known biases, we show that perceived contrast peaks at durations of 50-100 ms, and we conclude that the Broca-Sulzer effect best describes human temporal vision. We also show that the plateau in perceived brightness with stimulus duration, described by Bloch's law, is a previously uncharacterized type of temporal brightness constancy that, like classical constancy effects, serves to enhance object recognition across varied lighting conditions in natural vision-although this is a constancy effect that normalizes perception across temporal modulation conditions. A practical outcome of this study is that tuning light-emitting devices to match the temporal dynamics of the human visual system's temporal response function will result in significant power savings.
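Bloch's law, as discussed above, is a reciprocal trade-off between flash intensity and duration up to a critical duration. A toy sketch with illustrative parameter values (not fitted to any dataset):

```python
# Bloch's law for brief flashes: threshold is set by total energy (I * t)
# below a critical duration, and by intensity alone above it.
# critical_ms and energy_const are illustrative, not fitted values.
def bloch_threshold(duration_ms, critical_ms=100.0, energy_const=50.0):
    """Threshold intensity (arbitrary units) for a flash of given duration."""
    if duration_ms <= critical_ms:
        return energy_const / duration_ms
    return energy_const / critical_ms
```

The Broca-Sulzer effect the authors favor departs from this picture by showing a peak in perceived contrast at 50-100 ms rather than a simple plateau.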
Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan
2017-01-01
Bus Rapid Transit (BRT) has become an increasing source of concern for public transportation in modern cities. Traditional contact sensing techniques during the process of health monitoring of BRT viaducts cannot overcome the deficiency that the normal free flow of traffic would be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with the oriented FAST and rotated BRIEF (ORB) keypoint detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling-factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable. PMID:28587275
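The scaling-factor step that converts tracked pixel motion into physical displacement is simple to sketch; the tracking itself (CMT with ORB keypoints in the paper) is not reproduced here, and the numbers are illustrative:

```python
import numpy as np

# A target of known physical size spanning a measured number of pixels gives
# a mm-per-pixel scaling factor; tracked pixel positions then become
# physical displacements relative to the first frame.
def scale_factor(known_size_mm, size_px):
    return known_size_mm / size_px

def vibration_mm(track_px, known_size_mm, size_px):
    track_px = np.asarray(track_px, dtype=float)
    return (track_px - track_px[0]) * scale_factor(known_size_mm, size_px)

# e.g. a 100 mm target imaged across 400 px -> 0.25 mm per pixel
disp = vibration_mm([200.0, 202.0, 198.0], 100.0, 400)
```

Errors in this scaling factor propagate linearly into the vibration amplitude, which is why the paper synthesizes several calculation methods to reduce them.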
Vision-based body tracking: turning Kinect into a clinical tool.
Morrison, Cecily; Culmer, Peter; Mentis, Helena; Pincus, Tamar
2016-08-01
Vision-based body tracking technologies, originally developed for the consumer gaming market, are being repurposed to form the core of a range of innovative healthcare applications in the clinical assessment and rehabilitation of movement ability. Vision-based body tracking has substantial potential, but there are technical limitations. We use our "stories from the field" to articulate the challenges and offer examples of how these can be overcome. We illustrate that: (i) substantial effort is needed to determine the measures and feedback vision-based body tracking should provide, accounting for the practicalities of the technology (e.g. range) as well as new environments (e.g. home). (ii) Practical considerations are important when planning data capture so that data is analysable, whether finding ways to support a patient or ensuring everyone does the exercise in the same manner. (iii) Home is a place of opportunity for vision-based body tracking, but what we do now in the clinic (e.g. balance tests) or in the home (e.g. play games) will require modifications to achieve capturable, clinically relevant measures. This article articulates how vision-based body tracking works and when it does not to continue to inspire our clinical colleagues to imagine new applications. Implications for Rehabilitation Vision-based body tracking has quickly been repurposed to form the core of innovative healthcare applications in clinical assessment and rehabilitation, but there are clinical as well as practical challenges to make such systems a reality. Substantial effort needs to go into determining what types of measures and feedback vision-based body tracking should provide. This needs to account for the practicalities of the technology (e.g. range) as well as the opportunities of new environments (e.g. the home). 
Practical considerations need to be accounted for when planning capture in a particular environment so that data is analysable, whether it be finding a chair substitute, ways to support a patient or ensuring everyone does the exercise in the same manner. The home is a place of opportunity with vision-based body tracking, but it would be naïve to think that we can do what we do now in the clinic (e.g. balance tests) or in the home (e.g. play games), without appropriate modifications to what constitutes a practically capturable, clinically relevant measure.
A Face Attention Technique for a Robot Able to Interpret Facial Expressions
NASA Astrophysics Data System (ADS)
Simplício, Carlos; Prado, José; Dias, Jorge
Automatic recognition of facial expressions using vision is an important subject in human-robot interaction. Proposed here are a human face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. Using the output of this module, the autonomous agent always keeps the human face targeted frontally. To accomplish this, the robot platform performs an arc centered at the human, and the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant, in a lower level of the network, to the current instant. Moreover, to recognize facial expressions, not only positive but also negative evidence is used.
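The symmetry cue relied on above can be sketched directly: a frontal face image is nearly mirror-symmetric about its vertical midline, so a low left/right mismatch score suggests a frontal view. A hypothetical numpy illustration (not the authors' implementation):

```python
import numpy as np

# Mean absolute difference between a grayscale image and its horizontal
# mirror; lower values indicate a more frontal (symmetric) view.
def asymmetry(gray_image):
    img = np.asarray(gray_image, dtype=float)
    return float(np.mean(np.abs(img - np.fliplr(img))))

frontal = np.array([[1, 2, 1], [3, 4, 3]])   # mirror-symmetric toy "face"
turned = np.array([[1, 2, 9], [3, 4, 9]])    # asymmetric toy "face"
```

A platform controller could steer along its arc so as to minimize this score, keeping the face frontal.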
Colour vision experimental studies in teaching of optometry
NASA Astrophysics Data System (ADS)
Ozolinsh, Maris; Ikaunieks, Gatis; Fomins, Sergejs
2005-10-01
The following aspects related to human colour vision are included in experimental lessons for optometry students at the University of Latvia: characteristics of coloured stimuli (emissive and reflective) and determination of their coordinates in different colour spaces; objective characteristics of the transmission of colour stimuli through the optical system of the eye together with various appliances (lenses, prisms, Fresnel prisms); psychophysical determination of mono- and polychromatic stimulus perception, taking into account the physiology of the eye, retinal colour photoreceptor topography and spectral sensitivity, and the spatial and temporal characteristics of retinal receptive fields; and the ergonomics of visual perception, the influence of illumination and glare effects, and testing of colour vision deficiencies.
Ma, Jun; Lewis, Megan A; Smyth, Joshua M
2018-04-12
In this commentary, we propose a vision for "practice-based translational behavior change research," which we define as clinical and public health practice-embedded research on the implementation, optimization, and fundamental mechanisms of behavioral interventions. This vision intends to be inclusive of important research elements for behavioral intervention development, testing, and implementation. We discuss important research gaps and conceptual and methodological advances in three key areas along the discovery (development) to delivery (implementation) continuum of evidence-based interventions to improve behavior and health that could help achieve our vision of practice-based translational behavior change research. We expect our proposed vision to be refined and evolve over time. Through highlighting critical gaps that can be addressed by integrating modern theoretical and methodological approaches across disciplines in behavioral medicine, we hope to inspire the development and funding of innovative research on more potent and implementable behavior change interventions for optimal population and individual health.
Modeling the convergence accommodation of stereo vision for binocular endoscopy.
Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin
2018-02-01
The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.
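The geometry underlying convergence accommodation can be sketched in two formulas: the vergence angle that makes the two optical axes intersect at a fixation depth, and the disparity of a point for a rectified pair. Baseline and focal-length values below are illustrative, not the endoscope's actual parameters:

```python
import math

# Vergence angle (degrees) so that two cameras separated by `baseline_m`
# fixate a target at `depth_m` on the midline.
def vergence_deg(baseline_m, depth_m):
    return math.degrees(2 * math.atan(baseline_m / (2 * depth_m)))

# Horizontal disparity (pixels) of a point at depth Z for a rectified pair.
def disparity_px(focal_px, baseline_m, depth_m):
    return focal_px * baseline_m / depth_m

v = vergence_deg(0.004, 0.05)       # 4 mm baseline, 50 mm working distance
d = disparity_px(800, 0.004, 0.05)
```

Rectifying the image pair so the virtual axes intersect at the fixation target, as the paper proposes, drives the disparity at that target toward zero while preserving depth-dependent disparity elsewhere.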
Dual Use of Image Based Tracking Techniques: Laser Eye Surgery and Low Vision Prosthesis
NASA Technical Reports Server (NTRS)
Juday, Richard D.; Barton, R. Shane
1994-01-01
With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.
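The unmodeled-scene translation tracking described above has a standard digital analogue: Fourier-domain phase correlation, where the location of the correlation peak gives the shift. A numpy sketch of that analogue (not NASA's optical joint-transform implementation):

```python
import numpy as np

# Phase correlation: normalize the cross-power spectrum to unit magnitude so
# the inverse FFT is a sharp delta at the translation between the two frames.
def estimate_shift(ref, moved):
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), ref.shape)
    return int(dy), int(dx)

rng = np.random.default_rng(1)
scene = rng.random((32, 32))
shifted = np.roll(scene, (3, 5), axis=(0, 1))  # known translation
```

Here `estimate_shift(scene, shifted)` recovers the (3, 5) translation; in the retinal-tracking application this shift would re-steer the laser spot each frame.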
Robot vision system programmed in Prolog
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Hack, Ralf
1995-10-01
This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)
Crossing the quality chasm in resource-limited settings.
Maru, Duncan Smith-Rohrberg; Andrews, Jason; Schwarz, Dan; Schwarz, Ryan; Acharya, Bibhav; Ramaiya, Astha; Karelas, Gregory; Rajbhandari, Ruma; Mate, Kedar; Shilpakar, Sona
2012-11-30
Over the last decade, extensive scientific and policy innovations have begun to reduce the "quality chasm", the gulf between best practices and actual implementation that exists in resource-rich medical settings. While limited data exist, this chasm is likely to be equally acute and deadly in resource-limited areas. While health systems have begun to be scaled up in impoverished areas, scale-up is only the foundation necessary to deliver effective healthcare to the poor. This perspective piece describes a vision for a global quality improvement movement in resource-limited areas. The following action items are a first step toward achieving this vision: 1) revise global health investment mechanisms to value quality; 2) enhance human resources for improving health systems quality; 3) scale up data capacity; 4) deepen community accountability and engagement initiatives; 5) implement evidence-based quality improvement programs; 6) develop an implementation science research agenda.
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
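The population-coding account described above can be illustrated with a toy simulation. This is an illustrative sketch only, not the authors' model: the tuning width, the Gaussian tuning curves, and the population-vector read-out are all assumptions. Pooling the orientation signals of a target and a flanker and then decoding the combined population yields a percept between the two, reproducing the "compulsory averaging" the abstract mentions.

```python
import numpy as np

PREF = np.arange(0.0, 180.0, 1.0)  # preferred orientations of the units (deg)

def population_response(theta, sigma=20.0):
    # Gaussian tuning on the 180-degree orientation circle
    d = np.rad2deg(np.angle(np.exp(1j * np.deg2rad(2 * (PREF - theta))))) / 2
    return np.exp(-d**2 / (2 * sigma**2))

def decode(resp):
    # population-vector read-out on the doubled-angle circle
    v = np.sum(resp * np.exp(1j * np.deg2rad(2 * PREF)))
    return float(np.rad2deg(np.angle(v)) / 2 % 180)

# A target at 80 deg crowded by a flanker at 100 deg: the pooled
# population decodes to an orientation between the two.
pooled = population_response(80) + population_response(100)
print(round(decode(pooled)))  # averaged percept near 90 deg
```

The point of the sketch is that averaging emerges from the read-out itself, with no dedicated "averaging" stage, which is the spirit of the integration account.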
Revolutionary Propulsion Systems for 21st Century Aviation
NASA Technical Reports Server (NTRS)
Sehra, Arun K.; Shin, Jaiwon
2003-01-01
The air transportation system for the new millennium will require revolutionary solutions to meet public demand for improving safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st Century Aircraft is to develop propulsion systems that are intelligent, virtually inaudible (outside the airport boundaries), and have near-zero harmful emissions (CO2 and NOx). This vision includes intelligent engines that will be capable of adapting to changing internal and external conditions to optimally accomplish the mission with minimal human intervention. Distributed vectored propulsion will replace two to four wing-mounted or fuselage-mounted engines with a large number of small, mini, or micro engines, and electric drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. Such a system will completely eliminate harmful emissions. This paper reviews future propulsion and power concepts that are currently under development at NASA Glenn Research Center.
Autonomic Computing: Panacea or Poppycock?
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, Mike
2005-01-01
Autonomic Computing arose out of a need to cope with the rapidly growing complexity of integrating, managing, and operating computer-based systems, as well as a need to reduce the total cost of ownership of today's systems. Autonomic Computing (AC) as a discipline was proposed by IBM in 2001, with the vision of developing self-managing systems. As the name implies, the influence for the new paradigm is the human body's autonomic nervous system, which regulates vital bodily functions such as heart rate, body temperature, and blood flow, all without conscious effort. The vision is to create self-ware through self-* properties. The initial set of properties, stated as objectives, comprised self-configuring, self-healing, self-optimizing, and self-protecting, along with the attributes of self-awareness, self-monitoring, and self-adjusting. This self-* list has since grown: self-anticipating, self-critical, self-defining, self-destructing, self-diagnosis, self-governing, self-organized, self-reflecting, and self-simulation, for instance.
Dual use of image based tracking techniques: Laser eye surgery and low vision prosthesis
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1994-01-01
With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.
Grounding Robot Autonomy in Emotion and Self-awareness
NASA Astrophysics Data System (ADS)
Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita
Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion, in the hope of enhancing behavioral aspects. This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional, and autonomic aspects in social robot systems. This vision has evolved from efforts to consolidate the models extracted from rat emotion research and to implement them in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach aims at obtaining universal theories of integrated (autonomic, emotional, cognitive) behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.
Martínez-Bueso, Pau; Moyà-Alcover, Biel
2014-01-01
Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user onto the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) in two different groups of users (8 with disabilities and 32 without disabilities), using the usability measures time-to-start (Ts) and time-to-complete (Tc). A two-tailed paired-samples t-test confirmed that, in the case of disabilities, mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09 (P < 0.001) and Tc = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the user's body. In the case of disabilities, mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. These results suggest that developers and researchers adopt this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310
Stereo chromatic contrast sensitivity model to blue-yellow gratings.
Yang, Jiachen; Lin, Yancong; Liu, Yun
2016-03-07
As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection thresholds for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, a quick and valid index of human visual performance and of various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance. Moreover, no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and to build a model applicable in 3D space, for example, for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to check human visual characteristics related to stereo blindness. In this paper, a CRT screen was rotated clockwise and anticlockwise, respectively, to form the inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. Based on the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled from the experimental data. The results show that the proposed model can accurately predict human chromatic contrast sensitivity characteristics in 3D space.
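The two quantities the abstract relates can be made concrete in a short sketch. This is not the authors' fitted model: the 1/cos foreshortening rule used here for the inclined plane is an assumption adopted purely for illustration; only "sensitivity = inverse of threshold" is taken directly from the text.

```python
import math

def contrast_sensitivity(threshold):
    # sensitivity is defined as the inverse of the contrast threshold
    return 1.0 / threshold

def effective_frequency(horizontal_cpd, tilt_deg):
    # assumed relation, for illustration only: tilting the plane by
    # tilt_deg foreshortens the grating, raising its effective
    # spatial frequency by a factor of 1/cos(tilt)
    return horizontal_cpd / math.cos(math.radians(tilt_deg))

print(contrast_sensitivity(0.01))              # 1% threshold -> sensitivity ~100
print(round(effective_frequency(1.0, 45), 3))  # 1 c/d grating on a 45-degree plane
```

Any real 3D CSF model would fit the frequency mapping to the psychophysical data rather than assume it.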
Long-term safety of human retinal progenitor cell transplantation in retinitis pigmentosa patients.
Liu, Yong; Chen, Shao Jun; Li, Shi Ying; Qu, Ling Hui; Meng, Xiao Hong; Wang, Yi; Xu, Hai Wei; Liang, Zhi Qing; Yin, Zheng Qin
2017-09-29
Retinitis pigmentosa is a common genetic disease that causes retinal degeneration and blindness, for which there is currently no curative treatment available. Vision preservation was observed in retinitis pigmentosa animal models after retinal stem cell transplantation. However, long-term safety studies and visual assessment have not been thoroughly tested in retinitis pigmentosa patients. In our pre-clinical study, purified human fetal-derived retinal progenitor cells (RPCs) were transplanted into the diseased retina of Royal College of Surgeons (RCS) rats, a model of retinal degeneration. Based on these results, we conducted a phase I clinical trial to establish the safety and tolerability of RPC transplantation in eight patients with advanced retinitis pigmentosa. Patients were studied for 24 months. After RPC transplantation in RCS rats, we observed moderate recovery of vision and maintenance of the outer nuclear layer thickness. Most importantly, we did not find tumor formation or immune rejection. In the retinitis pigmentosa patients given RPC injections, we likewise observed no immunological rejection or tumorigenesis, even though immunosuppressive agents were not administered. We observed a significant improvement in visual acuity (P < 0.05) in five patients and an increase in the retinal sensitivity of pupillary responses in three of the eight patients between 2 and 6 months after the transplant, but this improvement was no longer observed at 12 months. Our study for the first time confirms the long-term safety and feasibility of vision repair by stem cell therapy in patients blinded by retinitis pigmentosa. WHO Trial Registration, ChiCTR-TNRC-08000193. Retrospectively registered on 5 December 2008.
Mogol, Burçe Ataç; Gökmen, Vural
2014-05-01
Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed for extracting mean colour or featured colour information from digital images of foods. These types of information may be of particular importance, as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with the acrylamide content of potato chips or cookies. Alternatively, the porosity index, an important physical property of breadcrumb, can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
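The mean-colour extraction described above is a one-line array operation. The sketch below is illustrative only: the simplified "browning ratio" (fraction of pixels darker than a threshold) is an assumed definition for demonstration, not the paper's published metric, and the threshold value is arbitrary.

```python
import numpy as np

def mean_color(img):
    # mean colour over every pixel of an H x W x 3 image
    return img.reshape(-1, 3).mean(axis=0)

def browning_ratio(img, thresh=100.0):
    # assumed, simplified definition: the fraction of pixels whose
    # mean intensity falls below a "brown" darkness threshold
    gray = img.mean(axis=2)
    return float((gray < thresh).mean())

img = np.zeros((4, 4, 3))
img[:2] = 200.0                 # top half bright, bottom half dark
print(mean_color(img))          # per-channel mean -> [100. 100. 100.]
print(browning_ratio(img))      # half the pixels are dark -> 0.5
```

In a production line, either number could be computed per frame and compared against a calibration curve relating colour to, e.g., acrylamide content.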
Simulation of thalamic prosthetic vision: reading accuracy, speed, and acuity in sighted humans.
Vurro, Milena; Crowell, Anne Marie; Pezaris, John S
2014-01-01
The psychophysics of reading with artificial sight has received increasing attention as visual prostheses are becoming a real possibility to restore useful function to the blind through the coarse, pseudo-pixelized vision they generate. Studies to date have focused on simulating retinal and cortical prostheses; here we extend that work to report on thalamic designs. This study examined the reading performance of normally sighted human subjects using a simulation of three thalamic visual prostheses that varied in phosphene count, to help understand the level of functional ability afforded by thalamic designs in a task of daily living. Reading accuracy, reading speed, and reading acuity of 20 subjects were measured as a function of letter size, using a task based on the MNREAD chart. Results showed that fluid reading was feasible with appropriate combinations of letter size and phosphene count, and performance degraded smoothly as font size was decreased, with an approximate doubling of phosphene count resulting in an increase of 0.2 logMAR in acuity. Results here were consistent with previous results from our laboratory. Results were also consistent with those from the literature, despite using naive subjects who were not trained on the simulator, in contrast to other reports.
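The reported scaling, roughly 0.2 logMAR of acuity per doubling of phosphene count, can be written as a simple logarithmic rule. This is a sketch of the stated trend, not the authors' fitted function, and the phosphene counts below are hypothetical example values.

```python
import math

def acuity_gain_logmar(phosphenes_before, phosphenes_after, gain_per_doubling=0.2):
    # the abstract reports ~0.2 logMAR of acuity per doubling of
    # phosphene count; express that trend as a log2 rule
    return gain_per_doubling * math.log2(phosphenes_after / phosphenes_before)

print(acuity_gain_logmar(500, 1000))  # one doubling  -> 0.2 logMAR
print(acuity_gain_logmar(500, 2000))  # two doublings -> 0.4 logMAR
```

Because logMAR is itself logarithmic in the minimum angle of resolution, a constant gain per doubling corresponds to a power-law relation between phosphene count and resolvable letter size.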
Navigation of military and space unmanned ground vehicles in unstructured terrains
NASA Technical Reports Server (NTRS)
Lescoe, Paul; Lavery, David; Bedard, Roger
1991-01-01
Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.
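The "area-based stereo image correlation" used for obstacle sensing can be sketched as classic block matching: a patch from the left image is slid along the same row of the right image, and the shift with the best match gives the disparity (and hence range). This is an illustrative sketch using sum-of-absolute-differences, not the flight system's actual correlator.

```python
import numpy as np

def disparity(left, right, row, col, patch=3, max_d=8):
    # slide a patch from the left image along the same row of the
    # right image and keep the shift with the lowest SAD score
    h = patch // 2
    tpl = left[row - h:row + h + 1, col - h:col + h + 1]
    best, best_d = np.inf, 0
    for d in range(0, max_d + 1):
        if col - h - d < 0:
            break
        cand = right[row - h:row + h + 1, col - h - d:col + h + 1 - d]
        sad = np.abs(tpl - cand).sum()
        if sad < best:
            best, best_d = sad, d
    return best_d  # larger disparity -> nearer obstacle

left = np.zeros((9, 16)); left[3:6, 10:13] = 1.0
right = np.zeros((9, 16)); right[3:6, 6:9] = 1.0   # same block, shifted 4 px
print(disparity(left, right, 4, 11))                # recovers the 4-px shift
```

A real rover correlator would compute this densely over the image in hardware and convert disparity to range via the camera baseline and focal length.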
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal Orthogonal-oriented quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers and one low-pass (blob) layer.
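A single-level toy version of such a transform applies kernels to one seven-pixel hexagonal neighborhood (a center pixel plus its six-pixel ring). The two kernels below are made-up illustrations, not Watson's actual orthogonal basis: one non-oriented "blob" average and one oriented "edge" contrast between opposite halves of the ring.

```python
import numpy as np

# index 0 = center pixel; indices 1..6 = the hexagonal ring, in order
neighborhood = np.array([5.0, 8.0, 8.0, 8.0, 2.0, 2.0, 2.0])

# illustrative kernels (assumed here, not the model's exact basis):
blob = np.ones(7)                                       # non-oriented low-pass
edge = np.array([0, 1, 1, 1, -1, -1, -1], dtype=float)  # one oriented edge

low_pass = (blob @ neighborhood) / 7   # -> 5.0, fed to the next, coarser layer
edge_resp = (edge @ neighborhood) / 6  # -> 3.0, responds to the light/dark split
print(low_pass, edge_resp)
```

In the full pyramid, the low-pass outputs of all neighborhoods form the image for the next layer, so large features are captured by the same seven kernels applied at coarser scales.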
Belyanskaya, Svetlana L; Ding, Yun; Callahan, James F; Lazaar, Aili L; Israel, David I
2017-05-04
DNA-encoded chemical library technology was developed with the vision of its becoming a transformational platform for drug discovery. The hope was that a new paradigm for the discovery of low-molecular-weight drugs would be enabled by combining the vast molecular diversity achievable with combinatorial chemistry, the information-encoding attributes of DNA, the power of molecular biology, and a streamlined selection-based discovery process. Here, we describe the discovery and early clinical development of GSK2256294, an inhibitor of soluble epoxide hydrolase (sEH, EPHX2), by using encoded-library technology (ELT). GSK2256294 is an orally bioavailable, potent and selective inhibitor of sEH that has a long half life and produced no serious adverse events in a first-time-in-human clinical study. To our knowledge, GSK2256294 is the first molecule discovered from this technology to enter human clinical testing and represents a realization of the vision that DNA-encoded chemical library technology can efficiently yield molecules with favorable properties that can be readily progressed into high-quality drugs. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Residual perception of biological motion in cortical blindness.
Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto
2016-12-01
From birth, the human visual system shows remarkable sensitivity to biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage to high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two-alternative forced-choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion when top-down information is provided is important for at least two reasons. First, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Second, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision.
Copyright © 2016 Elsevier Ltd. All rights reserved.
Gootwine, Elisha; Abu-Siam, Mazen; Obolensky, Alexey; Rosov, Alex; Honig, Hen; Nitzan, Tali; Shirak, Andrey; Ezra-Elia, Raaya; Yamin, Esther; Banin, Eyal; Averbukh, Edward; Hauswirth, William W; Ofri, Ron; Seroussi, Eyal
2017-03-01
Applying CNGA3 gene augmentation therapy to cure a novel causative mutation underlying achromatopsia (ACHM) in sheep. Impaired vision that spontaneously appeared in newborn lambs was characterized by behavioral, electroretinographic (ERG), and histologic techniques. Deep-sequencing reads of an affected lamb and an unaffected lamb were compared within conserved genomic regions orthologous to human genes involved in similar visual impairment. Observed nonsynonymous amino acid substitutions were classified by their deleteriousness score. The putative causative mutation was assessed by producing compound CNGA3 heterozygotes and applying gene augmentation therapy using the orthologous human cDNA. Behavioral assessment revealed day blindness, and subsequent ERG examination showed attenuated photopic responses. Histologic and immunohistochemical examination of affected sheep eyes did not reveal degeneration, and cone photoreceptors expressing CNGA3 were present. Bioinformatics and sequencing analyses suggested a c.1618G>A, p.Gly540Ser substitution in the GMP-binding domain of CNGA3 as the causative mutation. This was confirmed by genetic concordance test and by genetic complementation experiment: All five compound CNGA3 heterozygotes, carrying both p.Arg236* and p.Gly540Ser mutations in CNGA3, were day-blind. Furthermore, subretinal delivery of the intact human CNGA3 gene using an adeno-associated viral vector (AAV) restored photopic vision in two affected p.Gly540Ser homozygous rams. The c.1618G>A, p.Gly540Ser substitution in CNGA3 was identified as the causative mutation for a novel form of ACHM in Awassi sheep. Gene augmentation therapy restored vision in the affected sheep. This novel mutation provides a large-animal model that is valid for most human CNGA3 ACHM patients; the majority of them carry missense rather than premature-termination mutations.
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, developed at the University of Tokyo. The multi-window vision system is unique in that it processes visual information only inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these skills was developed, involving the detection and tracking of salient visual features. The tracking and motion information thus obtained was used to produce the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game and, later, operating an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking its movement, the attention scheme was able to control a 'paddle' to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the driving simulator application, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention.
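The local-window idea can be sketched in a few lines: rather than processing the whole frame, examine only a small window around the current estimate and recenter it on the detected target each step. This is an illustrative sketch, not the Tokyo hardware's actual logic; "brightest pixel" stands in for whatever feature detector a window would run.

```python
import numpy as np

def track_in_window(frame, center, half=2):
    # examine only a small (2*half+1)^2 window around `center`
    # and recenter the window on the brightest pixel found there
    r, c = center
    win = frame[r - half:r + half + 1, c - half:c + half + 1]
    dr, dc = np.unravel_index(np.argmax(win), win.shape)
    return (int(r - half + dr), int(c - half + dc))

frame = np.zeros((20, 20))
frame[7, 9] = 1.0                       # bright target near the window
print(track_in_window(frame, (8, 8)))   # window recenters on the target
```

The speed advantage comes entirely from the reduced pixel count: a 5x5 window touches 25 pixels per step regardless of frame size, which is what made real-time tracking feasible on 1980s hardware.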
Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.
2017-01-01
Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. PMID:28360867
Kalt, Wilhelmina; McDonald, Jane E; Fillmore, Sherry A E; Tremblay, Francois
2014-11-19
Clinical evidence for anthocyanin benefits in night vision is controversial. This paper presents two human trials investigating blueberry anthocyanin effects on dark adaptation, functional night vision, and vision recovery after retinal photobleaching. One trial, S2 (n = 72), employed a 3 week intervention and a 3 week washout, two anthocyanin doses (271 and 7.11 mg cyanidin 3-glucoside equivalents (C3g eq)), and placebo. The other trial, L1 (n = 59), employed a 12 week intervention and an 8 week washout and tested one dose (346 mg C3g eq) and placebo. In both S2 and L1 neither dark adaptation nor night vision was improved by anthocyanin intake. However, in both trials anthocyanin consumption hastened the recovery of visual acuity after photobleaching. In S2 both anthocyanin doses were effective (P = 0.014), and in L1 recovery was improved at 8 weeks (P = 0.027) and 12 weeks (P = 0.030). Although photobleaching recovery was hastened by anthocyanins, it is not known whether this improvement would have an impact on everyday vision.
Betti, Viviana; Corbetta, Maurizio; de Pasquale, Francesco; Wens, Vincent; Della Penna, Stefania
2018-04-11
Network hubs represent points of convergence for the integration of information across many different nodes and systems. Although a great deal is known about the topology of hub regions in the human brain, little is known about their temporal dynamics. Here, we examine the static and dynamic centrality of hub regions when measured in the absence of a task (rest) or during the observation of natural or synthetic visual stimuli. We used magnetoencephalography (MEG) in humans (both sexes) to measure static and transient regional and network-level interactions in α- and β-band limited power (BLP) in three conditions: visual fixation (rest), viewing of movie clips (natural vision), and time-scrambled versions of the same clips (scrambled vision). Compared with rest, we observed in both movie conditions a robust decrement of α-BLP connectivity. Moreover, both movie conditions caused a significant reorganization of connections in the α band, especially between networks. In contrast, β-BLP connectivity was remarkably similar between rest and natural vision. Not only did the topology not change, but the joint dynamics of hubs in a core network during natural vision was predicted by similar fluctuations in the resting state. We interpret these findings by suggesting that slow-varying fluctuations of integration occurring in higher-order regions in the β band may be a mechanism to anticipate and predict slow-varying temporal patterns of the visual environment. SIGNIFICANCE STATEMENT A fundamental question in neuroscience concerns the function of spontaneous brain connectivity. Here, we tested the hypothesis that the topology of intrinsic brain connectivity and its dynamics might predict those observed during natural vision. Using MEG, we tracked the static and time-varying brain functional connectivity when observers were either fixating or watching different movie clips.
The spatial distribution of connections and the dynamics of centrality of a set of regions were similar during rest and movie viewing in the β band, but not in the α band. These results support the hypothesis that intrinsic β-rhythm integration occurs with a similar temporal structure during natural vision, possibly providing advance information about incoming stimuli. Copyright © 2018 the authors.